NCSBN Research Brief
Volume 40
June 2009

The Effect of High-Fidelity Simulation on Nursing Students' Knowledge and Performance: A Pilot Study
Report of Findings from The Effect of High-Fidelity Simulation on Nursing Students' Knowledge and Performance: A Pilot Study

Principal Investigators
Frank D. Hicks, PhD, RN, Rush University College of Nursing, Chicago, IL
Lola Coke, PhD, RN, Rush University College of Nursing, Chicago, IL
Suling Li, PhD, RN, National Council of State Boards of Nursing, Chicago, IL

National Council of State Boards of Nursing, Inc. (NCSBN)
Mission Statement
The National Council of State Boards of Nursing, composed of member boards, provides leadership to advance regulatory excellence for public protection.

Copyright 2009 National Council of State Boards of Nursing, Inc. (NCSBN) All rights reserved. NCSBN, NCLEX, NCLEX-RN, NCLEX-PN and TERCAP are registered trademarks of NCSBN, and this document may not be used, reproduced or disseminated to any third party without written permission from NCSBN. Permission is granted to boards of nursing to use or reproduce all or parts of this document for licensure-related purposes only. Nonprofit education programs have permission to use or reproduce all or parts of this document for educational purposes only. Use or reproduction of this document for commercial or for-profit use is strictly prohibited. Any authorized reproduction of this document shall display the notice: "Copyright by the National Council of State Boards of Nursing, Inc. All rights reserved." Or, if a portion of the document is reproduced or incorporated in other materials, such written materials shall include the following credit: "Portions copyrighted by the National Council of State Boards of Nursing, Inc. All rights reserved."

Address inquiries in writing to NCSBN Permissions, 111 E. Wacker Drive, Suite 2900, Chicago, IL

Suggested Citation: National Council of State Boards of Nursing. (2009). Report of Findings from The Effect of High-Fidelity Simulation on Nursing Students' Knowledge and Performance: A Pilot Study. (Research Brief Vol. 40). Chicago: Author.

Printed in the United States of America
ISBN#
Table of Contents

List of Tables
Acknowledgments
Introduction
I. Background
   Advantages
   Disadvantages
   Review of the Literature
   Variables
II. Study Goals
III. Research Questions
IV. Methods
   Design and Sampling
   Setting
   Procedure
      Simulation only
      Simulation and clinical
      Clinical only
   Outcome Measures
      Knowledge Acquisition and Retention
      Clinical Performance
      Faculty Review of Videotaped Student Performance
      Self-confidence
   Protection of Human Research Subjects
   Data Analysis
V. Results
   Demographics
   Knowledge Acquisition and Retention
   Clinical Performance Assessed with Standardized Patients
   Self-confidence
   Course Evaluation
VI. Limitations of the Study
VII. Discussion
VIII. Avenues of Future Research
IX. References
Appendix A: Chest Pain Scenario
Appendix B: Shortness of Breath Scenario
Appendix C: Level of Consciousness Scenario
Appendix D: Self-confidence Scale
List of Tables

Table 1. Demographics of Participants
Table 2. Written Exam Scores Before and After Simulation/Clinical Experiences
Table 3. Clinical Performance on Three Patient Care Scenarios Portrayed by Standardized Patients: Tape Review Checklist Rating
Table 4. Total Time to Complete Three Patient Care Scenarios Portrayed by Standardized Patients
Table 5. Dimensions of Perceived Confidence Before and After Simulation/Clinical Experiences
Table 6. Perceived Confidence Before and After Simulation/Clinical Experiences
Table 7. Perceived Clinical Experience
Table 8. Perceived Simulation Experience (n=37)
Table 9. Student Learning Based on Self-Evaluation
Acknowledgments

This study would not have been possible without the support provided by the Rush University College of Nursing (RUCON) and the Rush University Simulation Laboratory (RUSL). The authors acknowledge the contributions of the involved faculty, staff and students who expended time and energy to help implement the study protocol. Specifically, the authors acknowledge the following individuals for their valuable contributions to the study: Marcia Bosek, PhD, RN, former associate professor, RUCON; Chris MacNeal, simulation coordinator, RUCON; and Lynn Richter, MSN, RN, APRN-BC, assistant professor, RUCON. The authors also gratefully acknowledge Nancy Spector, PhD, RN, director, innovations, NCSBN; Kevin Kenward, PhD, director, research, NCSBN; Maryann Alexander, PhD, RN, chief officer, nursing regulation, NCSBN; and Mary E. Doherty, JD, BSN, RN, associate, nursing regulation, NCSBN, for their support and valuable feedback on the project, and Richard Smiley, MS, statistician, research, NCSBN, for his help with the statistical analysis.
Introduction

Simulation, the art and science of recreating a clinical scenario in an artificial setting, has been an important aspect of nursing program curricula for decades (Gomez and Gomez, 1987). As an adjunct to clinical experience, simulation has allowed deliberate practice in a controlled environment. Students are able to practice a procedure prior to performing it on a live patient. The value of this is unquestionable. Recently, however, high-fidelity simulation, with the increased level of sophistication and realism it brings to the laboratory setting, has raised the possibility of simulation being used as a substitute for actual clinical experience. This study explores that concept.

High-fidelity simulation refers to structured student learning experiences that use a technologically advanced computerized mannequin, the Human Patient Simulator (HPS). The HPS is anatomically precise and reproduces physiologic responses. Students work through sequential decision-making events within an environment that mimics a clinical setting. Instructors can control the mannequin's responses, and the HPS can respond to interventions provided by the student (Gilley, 1990; Graedler, 1992; Lasater, 2007). Gaba (2004) describes simulation as a strategy, not a technology, to mirror, anticipate or amplify real situations with guided experiences in a fully interactive way.

The subject of simulation and its uses in prelicensure education is of prime significance to regulators. From a regulatory perspective, simulation offers advantages. It shields patients from unnecessary risk and/or discomfort, thus providing an element of public protection. The increased opportunities to encounter infrequent and atypical clinical problems may better prepare new graduates for transition into the workforce. Simulation may also offer an alternative for nursing programs dealing with a limited number of clinical sites, or with clinical sites that have inadequate learning opportunities.
A goal of regulation is to ensure that the future workforce consists of safe and competent nurses. Thus, regulators are also interested in ensuring that nursing students are provided with optimal education experiences. Despite its obvious benefits, there is a paucity of evidence regarding the efficacy of high-fidelity simulation as a teaching method beyond the acquisition of psychomotor skills. How well simulation assists students in acquiring and integrating knowledge, skills and critical thinking, and how it fares when compared to traditional clinical encounters with real patients, are a few of the questions being asked by educators and regulators alike. It is unknown to what degree simulation learning is transferable to the clinical area, or whether simulation learning is as effective as actual clinical experience in developing professional judgment. There is also little evidence demonstrating how well it assists in building confidence. Because of these questions and the lack of evidence to answer them, it is unknown to what degree high-fidelity simulation would be an appropriate and suitable substitute for the real clinical experiences now required by state regulations for nursing programs. As the first step in addressing these questions, it is important to determine whether high-fidelity simulation provides an effective learning strategy equivalent to clinical experiences.
Background

Documented throughout the literature are the advantages and disadvantages of high-fidelity simulation. The following list summarizes the pros and cons of this methodology.

Advantages

- No direct risk to patients. Simulation offers opportunities to practice rare and critical events in a safe and controlled environment, affording unlimited practice without risk to patients (Gilley, 1990; Graedler, 1992; Maran and Glavin, 2003; Decker, et al., 2008).
- The potential to increase the speed of acquisition of clinical skills to a defined level of competence by allowing repetitive practice at the learner's own pace (Maran and Glavin, 2003).
- Team training. Scenario themes can focus on understanding team collaboration and communication (Ellis, et al., 2008; Lasater, 2007).
- A standardized curriculum. Because the exact same scenarios can be presented to all learners, simulation reduces training variability and increases standardization. With simulation, faculty can guarantee the same experience for every student (Maran and Glavin, 2003).
- Reflective learning through facilitated debriefing of scenarios and video feedback (Lederman, 1992; Gilley, 1990; Hertel and Millis, 2002; Gaba, 2000).
- The potential to decrease the number and effect of errors through crisis resource management (CRM) training (Ziv, et al., 2003).

Disadvantages

- Equipment fidelity. Fidelity is the degree to which the simulator itself replicates reality, and it is a major limitation of simulation: no matter how high the fidelity, the simulation is not real, and it is often impossible to imitate actual physiological signs or symptoms. For example, the simulator cannot display crackles in the lungs when such a finding would be important to the scenario. In addition to high equipment fidelity, simulation requires psychological fidelity.
This reflects the degree to which the trainee perceives the simulation to be a believable representation of the reality it duplicates. Students may not take the simulation seriously, since mistakes have no real consequences for patient safety. Along with equipment and psychological fidelity, simulation requires environmental fidelity, that is, realism of the environment in which the simulation takes place. For example, emotional stress does not exist in mannequins, and standardized patients are not really sick.
- Cost. A study by McIntosh, et al. (2007) calculated that the set-up cost of a simulation center was $876,485 (renovation of an existing facility plus equipment). Fixed costs totaled $361,425 per year, and variable costs totaled $311 per course hour. The economic benefits of increasing the number of billable teaching hours per week are significant until about 21 hours per week (the equivalent of three full-day or six half-day courses, averaged over 52 weeks/year), when they begin to taper off.
- Lack of faculty time and training in simulation instruction. Effective use of technology in education depends on faculty readiness to operate the tools for maximum educational impact. It is important to recognize the need for faculty training in the specific demands that simulation-based teaching imposes. Access to the simulator is limited and dependent on the availability of instructors and operators. Simulation also limits the number of students who can be taught at any one time.
- Negative transfer. Negative transfer occurs when the student learns something incorrectly because of imperfect simulation. This most commonly occurs because the instructor fails to make clear to the students the differences between the training device and the real situation. These differences are usually due to a lack of physical or equipment fidelity. For example, the different feel of intubating a mannequin or the artificial acceleration of tasks might imprint incorrect clinical practices or procedures (Bond, et al., 2007).
- Certain assessments are not possible, such as reflexes. Swelling and redness do not appear, and the psychologic effects of an illness and the emotional response of the patient cannot be assessed using the simulator (Lasater, 2007).

Review of the Literature

An examination of the literature related to simulation reveals that, despite a growing body of work, there is a substantial need for more evidence. Most studies lack rigor, have small sample sizes, provide little statistical analysis and draw conclusions based on student perception. Studies focusing on student reaction to the HPS report positive responses to the simulation experience. In a review of the literature conducted by Ravert (2002), 75% of the studies reviewed concluded that students pursuing a degree in one of the health professions favored or highly favored simulation as a teaching/learning tool. In a qualitative study by Mikkelson, Reime and Harris (2007), students (N=21) who experienced the simulation-based training stated that they had greater awareness of the complexity of a health care condition and that it raised their awareness of aspects they had not thought of.

Two studies specifically examined the impressions of undergraduate nursing students. Bearnson and Wilker (2005) used an HPS to demonstrate medication side effects to students. Students reacted favorably to the experience and reported increased knowledge of medication side effects and an understanding of differences in patient responses; they also reported that it increased their ability to safely administer medications and their self-confidence in medication administration skills.
No tests were administered to establish whether the students' perceptions actually correlated with increased knowledge, no control group was used to compare responses, and no statistical analysis was performed beyond calculating the means of data collected from a questionnaire with Likert scales. In addition, the report does not describe the number of students who participated in the study.

Lasater (2008) also examined the effect of high-fidelity simulation experiences on the development of clinical judgment in nursing students (N=8). Students enrolled in a Nursing Care of the Adult course were given weekly simulation experiences as part of the course. After the completion of the course, eight of the 15 students participated in a focus group and discussed their experience with the HPS. Several themes emerged, including that simulation integrates learning, that it increased the breadth of experiences to which the students were exposed, and that the scenarios forced them to anticipate potential problems. The investigator concluded that this last theme, anticipation of potential problems, was an indication that simulation fostered clinical judgment. While the experience may have done so, the small sample size and the lack of outcome data measuring performance make it difficult to draw a definite correlation between the HPS and clinical judgment.

Several recent studies have attempted to determine how simulation compares with traditional clinical experience. The lack of consistency in the results, however, makes drawing conclusions about simulation difficult. Alinier, Hunt, Gordon and Harwood (2006) compared the clinical performance of students who received clinical experience alone (control) with that of students who received clinical plus simulation experience (experimental). Using a pretest/posttest design, undergraduate nursing students (N=99) were randomly assigned to one of these two groups.
Students were pre- and posttested using the Objective Structured Clinical Examination (OSCE) method. Students in both groups improved their clinical performance; however, the experimental group improved their performance on the OSCE by 14 to 18 percentage points (95% CI ) compared to seven to 18 points (95% CI ) in the control group. The 7.0 percentage point difference between the means (95% CI ) was statistically significant (p < 0.001).

Radhakrishnan, Roche and Cunningham (2007) conducted the first study to test nursing student performance with complex two-patient assignment simulations. Undergraduate nursing students (N=12) were randomized into a traditional clinical experience (control) or a traditional clinical experience plus simulation (intervention). Results indicated that the intervention group had statistically significantly higher scores in two specific areas of clinical performance, safety (p ≤ 0.001) and basic assessment skills (p ≤ 0.009), when tested on a two-patient simulator scenario. There were no significant differences in performance on any other clinical parameters, which included focused assessment, interventions, delegation and communication. This study is limited by a small sample size, but may indicate domains in which simulation offers strengths over other methodologies in preparing students for clinical practice.

Moule, Wilford, Sales and Lockyer (2008) report an increase in knowledge and skills after simulation training, but do not compare the effects with traditional clinical experience. This was a two-phase study that examined knowledge acquisition through pre- and posttests. Students (N=50) demonstrated an increase in knowledge after the simulation experience, as measured by pre- and posttests, vignettes and an OSCE. The second phase included interviews with six mentors of the students who had undergone simulation training. Based on the mentors' narrative reports, the study states that the mentors felt students gained confidence and increased their learning from simulation, but no statistical analysis was done on these data. The authors recommend incorporating simulation as part of the curriculum. One of the most recent and largest studies to date did not reveal differences between learning in a hospital setting and learning in a simulation center.
In a randomized controlled trial, Ellis, et al. (2008) studied the effectiveness of simulation in the education of midwives and obstetricians for eclampsia. Subjects (N=132) were randomized into 24 teams, which were in turn randomized to a training program at a local hospital or a simulation center. Knowledge was measured with pre- and posttraining multiple-choice tests. Posttraining exams revealed a 32% increase in mean multiple-choice score, which was statistically significant (p < .001). There were, however, no differences in scores between the groups that had the simulation training and those that did not. Outcomes were also measured by videotaping a standardized eclampsia scenario and grading performance pre- and posttraining. Posttraining videos revealed a statistically significant increase in the completion of tasks with greater speed (p = .012). Improvements, however, were attributed to teamwork training; simulation training did not elicit additional improvement.

Variables

Three important variables were selected as the focus of this study: knowledge, clinical performance and confidence. Nursing knowledge and clinical performance are the basis for undergraduate education and determine the level of competence with which the new graduate enters the workforce. Because of their importance, they were observed and measured in this study. Confidence has been shown to be an important variable for undergraduate students. Students who have increased self-confidence have a better chance of succeeding in their clinical goals (Lundberg, 2008) and are more likely to test and use their clinical skills (Clark, Owen and Tholcken, 2004). According to Albert Bandura's classic theory of self-efficacy, individuals with a strong sense of self-efficacy, or confidence, take on more challenges and recover more quickly from failure. This sense of self-efficacy develops from mastery experience (Bandura, 1977).
Thus, repetitive practice should improve self-confidence.
Study Goals

Specific goals of the study were to:
- Examine the differences between traditional clinical experience and simulation as teaching methods in prelicensure nursing education;
- Analyze how simulation training may impact knowledge, clinical performance and confidence levels of undergraduate students, and compare this with traditional clinical experience; and
- Contribute to the body of knowledge on the uses and limitations of simulation in prelicensure nursing education for both regulators and educators.

These aims were accomplished by examining the following:
- Differences in pre- and post-multiple-choice exam scores among three groups of undergraduate nursing students (clinical/simulation/clinical plus simulation);
- Differences in clinical performance among the three groups; and
- Differences in confidence levels among the three groups.

Despite the fact that simulation is widely accepted as an educational tool, little research has examined the role of simulation in prelicensure nursing education in relation to clinical experiences. Moreover, it is unknown to what degree high-fidelity simulation would be an appropriate and suitable substitute for the real clinical experiences required by regulatory rules for nursing programs. As the first step in addressing this question, it is important to determine whether high-fidelity simulation provides an effective learning strategy.
Research Questions

1. Are there differences in knowledge between prelicensure nursing students exposed to a traditional intensive care unit (ICU) clinical experience, those who have had only a simulation experience and those who have had a combination of clinical and simulation experiences?
2. Are there differences in clinical performance between prelicensure nursing students exposed to a traditional ICU clinical experience, those who have had only a simulation experience and those who have had a combination of clinical and simulation experiences?
3. Are there differences in levels of confidence between prelicensure nursing students exposed to a traditional ICU clinical experience, those who have had only a simulation experience and those who have had a combination of clinical and simulation experiences?
Methods

Design and Sampling

The goal of this study was to compare the effectiveness of simulation to that of actual clinical experience among students enrolled in a prelicensure nursing curriculum. The specific objective was to examine the effectiveness and applicability of high-fidelity simulation, alone and in combination with clinical experience, on knowledge acquisition/retention, self-confidence and clinical performance of nursing students. A randomized controlled design with repeated pre- and posttreatment measures was used to address the aim of this study. The study was reviewed and approved by the Institutional Review Boards of NCSBN and Rush University Medical Center.

All students in two separate cohorts (2006 and 2007) of senior baccalaureate nursing students (n=92) enrolled in a required critical care nursing course at Rush University College of Nursing were asked to participate. A total of 58 (cohort 1=23; cohort 2=25) chose to participate in the study. Students who chose not to participate were assigned to the usual treatment group, which for the course was the simulation experience alone. After giving informed consent and following successful completion of the three-credit-hour didactic portion of the course, simple random selection was used to assign participants to one of three practicum experiences:

1. Clinical without simulation (30 hours of clinical preceptorship with a critical care nurse);
2. Simulation without actual clinical experience (30 hours of simulation); or
3. Simulation plus clinical experience (15 hours of simulation and 15 hours of clinical without simulation).

Measurements of knowledge acquisition and retention and of self-confidence were taken before beginning the didactic portion of the course and after the clinical or simulation experiences, while assessment of clinical performance was taken after the clinical or simulation experiences (Figure 1).
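A three-arm random assignment like the one described above can be sketched in a few lines of code. This is a minimal illustration, not the investigators' actual procedure: the roster names, the seed and the balanced round-robin split over a shuffled list are all assumptions introduced here.

```python
import random

def assign_groups(students, seed=None):
    """Randomly split a consenting roster into three practicum arms.

    Shuffles the roster, then deals students round-robin so the arms
    stay (near-)balanced. Arm names follow the study design; the
    assignment logic itself is an illustrative sketch.
    """
    rng = random.Random(seed)
    roster = list(students)
    rng.shuffle(roster)
    groups = {
        "clinical_only": [],
        "simulation_only": [],
        "simulation_plus_clinical": [],
    }
    arm_names = list(groups)
    for i, student in enumerate(roster):
        groups[arm_names[i % 3]].append(student)  # round-robin over the shuffled roster
    return groups

arms = assign_groups([f"student_{i}" for i in range(9)], seed=1)
print({arm: len(members) for arm, members in arms.items()})  # each arm gets 3 of 9
```

A round-robin deal after shuffling guarantees near-equal arm sizes, whereas assigning each student independently at random would not.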
Setting

The study was conducted at Rush University College of Nursing because it not only had full access to a state-of-the-art simulation laboratory, but also offered an undergraduate course in critical care nursing, which was deemed appropriate given the type of experiences that could be offered in the simulation laboratory. RUSL is a state-of-the-art simulation training center equipped with life-sized, computer-controlled virtual patients capable of simulating nearly any acute and critical patient episode, including allergic reactions, drug overdoses, respiratory distress, cardiac decompensation and acute neurological events. Audio and video systems in the adjoining classroom enable trainees and faculty to observe teaching sessions, interact with the instructor and participate in guided performance feedback sessions using the videotaped training session. Qualified faculty with graduate preparation and expertise in acute care clinical practice and simulation instruction implemented the study protocol.

Procedure

The course utilized for this project was modified from an existing required course that consisted of three credit hours of didactic instruction with no planned simulation or clinical experiences. The new course was a 40-hour didactic undergraduate critical care course with a 30-hour simulation and/or clinical experience. All didactic content in the course was front-loaded, followed by the simulation and/or clinical experiences. Didactic teaching was provided to all students in the form of lectures, required readings and case studies. The course occurred in the summer quarters after the students had successfully completed courses in foundations of nursing, gerontological nursing, basic medical-surgical nursing and psychiatric-mental health nursing, which provided all students with 270 hours of previous clinical experience.
The critical care course closely followed the American Association of Critical-Care Nurses core curriculum, providing didactic content on all areas relevant to critical care nursing practice.
The following simulation and/or clinical experiences were offered:

Simulation only
Students in this group were exposed to a variety of critical care nursing scenarios that emphasized assessment and fundamental intervention. Students spent approximately 30 hours in the simulator working on the following scenarios:
- A patient with a myocardial infarction (MI) and cardiogenic shock, including experiences in cardiovascular assessment, hemodynamic monitoring, EKG interpretation and arrhythmia interventions;
- A patient with acute respiratory distress, including experiences in respiratory assessment, oxygen therapy and intubation/mechanical ventilation; and
- A patient with increased intracranial pressure (ICP), including experiences in neurological assessment, basic intervention for changes in level of consciousness (LOC), and ICP monitoring and interpretation.

Simulation and clinical
Students in this group spent 15 hours in the simulator working on the scenarios described above and then 15 hours (three five-hour sessions) in a critical care clinical experience providing direct, supervised care to a variety of critically ill individuals. Every attempt was made to provide students with experiences in assessment of the cardiovascular, respiratory and neurological systems; hemodynamic and arrhythmia monitoring and interventions; and oxygenation and mechanical ventilation.

Clinical only
Students spent 30 hours in a critical care unit providing direct, supervised care to a variety of critically ill individuals. Every attempt was made to provide students with experiences in assessment of the cardiovascular, respiratory and neurological systems; hemodynamic and arrhythmia monitoring and interventions; and oxygenation and mechanical ventilation.
Each scenario ran for approximately five to 15 minutes, during which time the students were able to ask questions of the patient; perform assessments, including vital signs, rhythm strips, heart and lung sounds, and pupil changes; and provide interventions, including starting IVs and assisting with suctioning, intubation, etc. Student performance in each scenario was videotaped for a debriefing session that followed each scenario, in which the case was discussed, including critical thinking, group coordination and decision making. To implement the simulation scenarios, the relevant factors facilitating effective simulation-based learning synthesized by Issenberg, et al. (2005) were adopted. These factors included providing feedback; allowing repetitive practice; offering scenarios with a range of difficulty levels and clinical variations; using multiple learning strategies in a controlled environment; defining outcomes or benchmarks prior to implementing scenarios; and using high-fidelity simulators.

Outcome Measures

Knowledge Acquisition and Retention
Knowledge acquisition and retention were assessed with written examinations before and after the clinical and/or simulation experience. The scores from the examination after the clinical and/or simulation experience were used as proxy measures for knowledge acquisition and retention, with pretest scores as the control. The pre- and postintervention examinations were equivalent in content and were slightly modified from existing examinations for the course. The written examinations consisted of 50 items reflecting the content taught in the course. All examinations were graded on a scale from 0 to 100%, with higher scores indicating higher levels of knowledge. The Cronbach's alphas (KR-20) ranged from 0.6 to 0.7, indicating that the test produced repeatable measures for this sample.
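The KR-20 coefficient reported for the written examinations is computed directly from the matrix of 0/1 item responses. The sketch below shows the calculation under stated assumptions: the response matrix is invented for illustration (it is not the study's data), and the function name is hypothetical.

```python
from statistics import variance

def kr20(items):
    """Kuder-Richardson 20 reliability for dichotomous (0/1) item scores.

    `items` is a list of rows, one per examinee, each a list of 0/1 scores.
    KR-20 = (k/(k-1)) * (1 - sum(p*q) / variance of total scores).
    """
    n = len(items)      # examinees
    k = len(items[0])   # items
    pq_sum = 0.0
    for j in range(k):
        p = sum(row[j] for row in items) / n  # proportion answering item j correctly
        pq_sum += p * (1.0 - p)
    total_var = variance(sum(row) for row in items)  # sample variance of total scores
    return (k / (k - 1)) * (1.0 - pq_sum / total_var)

# Invented responses: 6 examinees x 4 items (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(kr20(responses), 3))  # 0.698
```

With this toy matrix the coefficient lands near 0.7, i.e., in the 0.6-0.7 range the report describes as producing repeatable measures for this sample.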
Clinical Performance
Clinical performance was assessed based on the students' performance in providing care during three patient care scenarios portrayed by standardized patients: (1) new onset of chest pain; (2) sudden onset of shortness of breath; and (3) acute change in level of consciousness. Each scenario was developed to present the student with a situation in which he or she must assess the patient and begin primary intervention. The scenarios were designed to be approximately 15 to 20 minutes in length. Scores for each scenario were generated from a blueprint developed by four nurses who were experts in the care of critically ill clients, using the nursing process as a guide. These blueprints reflected the following principles:
- The student asks appropriate, focused questions that provide relevant information leading to problem identification;
- The student performs appropriate physical assessments to confirm the patient's subjective data and obtain objective data;
- The student initiates basic nursing interventions appropriate to the patient's condition; and
- The student evaluates the effectiveness of these interventions.

Three labs, one for each of the three standardized patient encounters, were set up to mimic a patient care environment equipped with essential supplies and resources. A chart with relevant information on the patient was available at the door of each lab. Before beginning the individual evaluation session, each student was provided with a standardized orientation in verbal and written form. Training of standardized patients and faculty evaluators has been reported elsewhere by the study investigators (Bosek, et al., 2007). The evaluation process lasted from the moment the student entered the room to begin the encounter with the standardized patient until stopped by the faculty evaluator. The faculty evaluator could also act as a physician or other health care provider, as appropriate. Once the evaluation was completed, it could not be modified. No guidance from faculty was provided during the performance.
Students were not aware of the manner in which they were being scored. If a student performed a particular technique incorrectly, he or she was provided with constructive feedback after all three standardized patient scenarios had been completed. Performance of the three groups of students was compared to determine whether there was an advantage of high-fidelity simulation-based learning over actual clinical experience, or vice versa.
Faculty Review of Videotaped Student Performance
The performance of each student was videotaped for further analysis of professional behaviors, assessment, accuracy of intervention and total time from encounter to implementation of intervention. Two study investigators (who were also faculty members) evaluated all taped scenarios based on a 28-, 29- or 30-item evaluation tool, depending on the scenario (Appendices A, B and C). The tools consisted of four dimensions: (1) patient-nurse relationship; (2) symptom recognition; (3) assessment; and (4) intervention. Each action was evaluated as inadequate, somewhat inadequate, not attempted, somewhat adequate, or adequate. The reliabilities (Cronbach's alphas) of the tools were for the chest pain, for the shortness of breath and for the loss of consciousness scenarios. In addition, the total time from initial contact to the end of the interaction was measured using the video recorder timer. Examinations routinely undergo extensive psychometric evaluation, and reliabilities on these examinations have ranged from 0.6 to 0.7 in the past.
Self-confidence
A Likert-type self-confidence scale (Hicks, 2006) consisting of 12 items was created to measure this variable (Appendix D). The response options for each item range from 1 (strongly disagree) to 4 (strongly agree), with higher scores indicating greater self-confidence. The Cronbach's alphas of the scale on the current data were 0.93 for the pretest and 0.96 for the posttest, demonstrating high internal consistency reliability.
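The Cronbach's alpha figures reported above can be illustrated with a brief sketch. This is not the study's actual analysis code; the function name and the sample response matrix are invented for demonstration. Alpha compares the sum of the individual item variances with the variance of the total scale score:

```python
def cronbach_alpha(rows):
    """Cronbach's alpha for a scale; rows is a list of respondents,
    each a list of item scores (e.g., 1-4 Likert responses)."""
    k = len(rows[0])  # number of items

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented data: four respondents answering all 12 items identically,
# which yields perfect internal consistency (alpha = 1.0).
perfect = [[1] * 12, [2] * 12, [3] * 12, [4] * 12]
```

High alphas such as the 0.93 and 0.96 reported here indicate that the 12 items move together closely across respondents.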
The items reflected the following four dimensions:
1. Accurately recognizing a change in condition of patients with common critical care issues;
2. Performing basic physical assessments for conditions of patients with common critical care issues;
3. Identifying basic nursing interventions for conditions of patients with common critical care issues; and
4. Evaluating the effectiveness of interventions for conditions of patients with common critical care issues.
Protection of Human Research Subjects
For the purposes of this study, the following procedures were instituted to protect the human subjects:
1. The project coordinator informed prospective subjects in writing of the purposes of the study, the nature of their involvement, and the potential benefits and risks if they chose to participate. The subjects were also informed that taking part in the research was entirely voluntary and that they could withdraw from the study at any time, without objection from anyone and without penalty or loss of any benefits to which they were otherwise entitled. The subjects were told that they might or might not personally benefit from taking part in the research, but that the knowledge obtained might lead to a better understanding of the role of simulation in nursing education.
2. Once the potential subjects were fully informed of their rights, the project coordinator obtained written informed consent from the individuals who agreed to participate in the study.
3. To ensure confidentiality, a code number was assigned to each subject. Only the code number appeared on the questionnaires and records. A master list of names, addresses and code numbers was maintained separately from the collected data in the event that follow-up was needed. This list was available only to the investigators and was destroyed following completion of the study. Confidentiality was also guaranteed in that data were reported as group data only.
Data Analysis
Multivariate analyses of variance with repeated measures were performed to examine differences among groups in outcome measures before (i.e., after didactic instruction only) and after clinical and/or simulation experience.
The outcome variables included written examination scores, self-confidence and the indicators of clinical performance with the standardized patient scenarios among the three groups. The independent variables were the time and group factors. Potential confounding variables and interactions between the main factors were examined prior to the analysis of group differences. All values are reported as means and standard deviations, and mean differences were considered significant at a P value less than 0.05. Bonferroni corrections were applied as appropriate.
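The Bonferroni correction mentioned above guards against false positives when several comparisons are made by multiplying each P value by the number of comparisons (equivalently, dividing the significance threshold). The following sketch, with invented P values and a function name of our own, illustrates the adjustment:

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Return, for each P value, whether it remains significant after a
    Bonferroni correction (adjusted p = min(1, p * m), m comparisons)."""
    m = len(p_values)
    return [min(1.0, p * m) <= alpha for p in p_values]

# Three hypothetical comparisons: only the first survives correction,
# since 0.01 * 3 = 0.03 <= 0.05, while 0.04 * 3 and 0.30 * 3 exceed it.
flags = bonferroni_significant([0.01, 0.04, 0.30])
```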
RESULTS
Since two cohorts of students (2006 and 2007) were enrolled in the study, the first step in the analysis was to examine whether there were any differences between the two cohorts. No statistical differences were identified; thus, in the subsequent analyses, the two cohorts were combined and analyzed as one group.
Demographics
Of the 58 students who participated in the study, the majority were female (84.5%) and white (73.7%), with an average age of 26.8 years. More than half of the students (59%) already had a bachelor's degree in another field, and approximately one-third (30.2%) had some work experience in health care, such as working as a nursing assistant or a lab technician. Three of the students assigned to the combo group had previous experience with simulation. There were no statistically significant differences in these demographic variables among the groups (Table 1), and these variables had no significant confounding effects in the analyses of group differences.
Knowledge Acquisition and Retention
Students received didactic instruction for two weeks, after which they took a multiple-choice exam to test their knowledge of the material covered. Students then completed a two-week practicum, after which their clinical performance was assessed using standardized patients. After their encounters with the standardized patients, students again took a multiple-choice exam to assess their knowledge retention. Based on the written examinations on the content taught in the critical care course, students in all groups had significantly lower scores on the post-examination (p < .001) after the two-week practicum (Table 2). At the end of the simulation and/or clinical experiences, the students retained, on average, 86.3% of the knowledge gained in the didactic portion of the course.
The simulation group appeared to retain the least (82.9%) and the clinical group the most (88.5%); however, no significant multivariate differences in change of knowledge were found between the groups.
Clinical Performance Assessed with Standardized Patients
For purposes of analysis, dichotomous variables were created to indicate the adequacy of the students' performance. The inadequate, somewhat inadequate and not attempted responses were combined into one group, and the somewhat adequate and adequate responses into another. As indicated in Table 3, approximately one-third

Table 1. Demographics of Participants*
                                               Simulation (n=19)   Clinical (n=19)   Combo (n=20)   Total (N=58)
Age in years (mean and SD)                     25.7 (6.1)          26.8 (6.7)        28.0 (5.5)     26.8 (6.1)
Female                                         89.5%               79.0%             85.0%          84.5%
White                                          68.4%               73.4%             75.0%          73.7%
With previous degree in other fields           52.6%               47.1%             50.0%          59.0%
With previous work experience in health care   22.2%               43.8%             26.3%          30.2%
*No significant differences were found among groups in age, gender, race, having a previous degree in another field or having previous work experience in health care.

Table 2. Written Exam Scores Before and After Simulation/Clinical Experiences
              Simulation* (n=19)   Clinical* (n=19)   Combo* (n=20)   Total* (N=58)
Exam Scores   Mean   SD            Mean   SD           Mean   SD       Mean   SD
Pre
Post
*All Ps < 0.05; no group differences were found in either pretest or posttest.
Table 3. Clinical Performance on Three Patient Care Scenarios Portrayed by Standardized Patients: Tape Review Checklist Rating¹
(Unsat. = rated inadequate, somewhat inadequate or not attempted; Sat. = rated somewhat adequate or adequate; — = value not available)

                                     Clinical         Simulation       Combo            Probability
                                     Unsat.   Sat.    Unsat.   Sat.    Unsat.   Sat.
Chest pain symptoms                  57%      43%     61%      39%     50%      50%
Chest pain assessment                38%      63%     42%      58%     38%      63%
Chest pain intervention              27%      73%     26%      74%     22%      78%
Chest pain relationship              28%      72%     33%      62%     24%      76%
Overall chest pain                   36%      64%     39%      61%     33%      67%
Shortness of breath symptoms         32%      68%     45%      50%     39%      61%
Shortness of breath assessment       —        62%     44%      53%     35%      65%
Shortness of breath intervention     —        77%     39%      57%     35%      65%
Shortness of breath relationship     —        76%     28%      66%     19%      81%
Overall shortness of breath          —        69%     42%      58%     33%      67%
Loss of consciousness symptoms       25%      75%     24%      76%     26%      74%
Loss of consciousness assessment     —        61%     43%      57%     38%      62%
Loss of consciousness intervention   —        70%     40%      60%     29%      71%
Loss of consciousness relationship   —        97%     5%       95%     5%       95%
Overall loss of consciousness        —        71%     33%      66%     28%      71%
Overall                              32%      68%     34%      66%     32%      68%

¹This table was constructed by counting the number of people who were rated inadequate, somewhat inadequate, not attempted, somewhat adequate or adequate on each criterion in the three scenarios. The frequencies for each criterion were then summed to form four dimensions: patient-nurse relationship, symptom recognition, assessment and intervention. The five rating categories were then collapsed into two, and the totals for each dimension were converted into percentages. The percentages therefore do not represent the percent of students in each group (clinical, simulation, combo); rather, they represent how often students were rated in a particular category.
For example, four criteria were used to assess symptom recognition in the chest pain scenario; the number of students rated in each category on each criterion is presented in Footnote Table A. Adding the results for the four criteria resulted in the 19 students in the simulation group being rated unsatisfactory 46 times and satisfactory 30 times (Footnote Table B), which converts to percentages of 61% and 39%, respectively (Footnote Table C).

Footnote Table A. Symptom Recognition Criteria: ratings by category (inadequate, somewhat inadequate, not attempted, somewhat adequate, adequate and total)

Footnote Table B
Symptom Recognition   Inadequate, Somewhat Inadequate or Not Attempted   Somewhat Adequate or Adequate
Frequency             46                                                 30

Footnote Table C
Symptom Recognition   Inadequate, Somewhat Inadequate or Not Attempted   Somewhat Adequate or Adequate
Percentage            61%                                                39%
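The tally described in this footnote can be sketched as follows. The function is illustrative rather than the investigators' code; the 46/30 split is taken from the worked example above (19 simulation students x 4 criteria = 76 ratings):

```python
UNSATISFACTORY = {"inadequate", "somewhat inadequate", "not attempted"}

def collapse_to_percentages(ratings):
    """Collapse the five rating categories into two and convert the summed
    frequencies to whole-number percentages, as in Footnote Tables B and C."""
    unsat = sum(1 for r in ratings if r.lower() in UNSATISFACTORY)
    sat = len(ratings) - unsat
    total = len(ratings)
    return round(100 * unsat / total), round(100 * sat / total)

# 46 unsatisfactory and 30 satisfactory ratings, per the worked example.
ratings = ["inadequate"] * 46 + ["adequate"] * 30
```

Applying the function to these 76 ratings reproduces the 61%/39% split reported in Footnote Table C.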
of the time, the students in each group were rated by faculty as not performing at satisfactory levels across all scenarios. Students performed best on the loss of consciousness relationship items and worst on recognizing chest pain symptoms. There were some statistically significant differences between the groups of students at the 95% confidence level; the areas of difference included recognizing chest pain symptoms, chest pain assessment, shortness of breath assessment, shortness of breath intervention and the overall shortness of breath items. While the overall differences between the three groups were not statistically significant, students in the combo and clinical groups were consistently rated higher by the faculty reviewing the videotapes than students in the simulation group. Regarding the time needed to complete the scenarios, the students took an average of 9.07 minutes to complete the chest pain scenario, 8.88 minutes for the shortness of breath scenario and 9.64 minutes for the loss of consciousness scenario. Although the simulation group took less time to complete two of the three scenarios, the differences were not statistically significant (Table 4).
Self-confidence
After receiving theoretical instruction in the critical care content of the course, students on average felt somewhat confident in taking care of critically ill patients (average self-confidence = 3.53 on a scale of 1 to 4). No significant differences in self-confidence were found among the groups at pretest. At pretest, the students were more confident in recognizing symptoms (mean = 3.55), conducting assessment (mean = 3.70) and evaluation (mean = 3.55) than in providing intervention (mean = 3.30) (all Ps < 0.05) (Table 6). This pattern held true for all groups (Table 5). The differences in self-confidence among the dimensions found at pretest disappeared at posttest, after the practicum experiences.
Students in the simulation and the combo groups had a statistically significant increase in their self-confidence in taking care of patients with acute changes in condition after the clinical/simulation experiences (Ps < 0.05) (Table 6). No significant change in self-confidence was found for those in the clinical group.
Course Evaluation
Undergraduate nursing students were asked to evaluate the three-credit-hour didactic/simulation critical care nursing course. Students in the clinical and the combination groups rated their clinical experience on a 23-item questionnaire using a scale of strongly disagree, disagree, agree or strongly agree. For the most part, students either agreed or strongly agreed with the statements provided, indicating an overall positive experience in the course. About one-quarter (23.5%) of the students in the clinical group did not think clinical faculty challenged their critical-thinking or problem-solving skills. Almost one-fifth (18.8%) of the same group did not think clinical faculty were receptive to views that differed from their own, and 11.8% did not think clinical expectations were applied equally to all students. Almost one-third (29.5%) of the clinical students either disagreed or strongly disagreed with the statement that the clinical rotation provided them with a variety of learning experiences. There were no statistically significant differences in how the clinical-alone and the combination groups rated their clinical experience (Table 7). All students in the simulation and the combo groups were asked about their simulation experience in the course on a 14-item questionnaire using a scale of strongly disagree, disagree, agree or strongly agree. All of the students either agreed or strongly agreed

Table 4. Total Time to Complete Three Patient Care Scenarios Portrayed by Standardized Patients*
(— = value not available)
                                                                  Simulation   Clinical   Combo   All (n=57)
Total time to complete chest pain scenario (minutes)              —            —          —       9.07
Total time to complete shortness of breath scenario (minutes)     —            —          —       8.88
Total time to complete loss of consciousness scenario (minutes)   —            —          —       9.64
*No group differences were found in the time needed to complete the scenarios.