A simulation‐based resident‐as‐teacher program: The impact on teachers and learners

Paul F. Currier, MD
Harvard Medical School, Boston, Massachusetts
Department of Medicine, Massachusetts General Hospital, Boston, Massachusetts

Residency training, in addition to developing clinical competence among trainees, is charged with improving resident teaching skills. The Liaison Committee on Medical Education and the Accreditation Council for Graduate Medical Education require that residents be provided with training or resources to develop their teaching skills.[1, 2] A variety of resident‐as‐teacher (RaT) programs have been described; however, the optimal format of such programs remains in question.[3] High‐fidelity medical simulation using mannequins has been shown to be an effective teaching tool in various medical specialties[4, 5, 6, 7] and may prove to be useful in teacher training.[8] Teaching in a simulation‐based environment can give participants the opportunity to apply their teaching skills in a clinical environment, as they would on the wards, but in a more controlled, predictable setting and without compromising patient safety. In addition, simulation offers the opportunity to engage in deliberate practice by allowing teachers to facilitate the same case on multiple occasions with different learners. Deliberate practice, which involves task repetition with feedback aimed at improving performance, has been shown to be important in developing expertise.[9]

We previously described the first use of a high‐fidelity simulation curriculum for internal medicine (IM) interns focused on clinical decision‐making skills, in which second‐ and third‐year residents served as facilitators.[10, 11] Herein, we describe a RaT program in which residents participated in a workshop, then served as facilitators in the intern curriculum and received feedback from faculty. We hypothesized that such a program would improve residents' teaching and feedback skills, both in the simulation environment and on the wards.

METHODS

We conducted a single‐group study evaluating teaching and feedback skills among upper‐level resident facilitators before and after participation in the RaT program. We measured residents' teaching skills using pre‐ and post‐program self‐assessments as well as evaluations completed by the intern learners after each session and at the completion of the curriculum.

Setting and Participants

We embedded the RaT program within a simulation curriculum administered July to October of 2013 for all IM interns at Massachusetts General Hospital (interns in the preliminary program who planned to pursue another field after the completion of the intern year were excluded) (n = 52). We invited postgraduate year (PGY) II and III residents (n = 102) to participate in the IM simulation program as facilitators via email. The curriculum consisted of 8 cases focusing on acute clinical scenarios encountered on the general medicine wards. The cases were administered during 1‐hour sessions 4 mornings per week from 7 AM to 8 AM prior to clinical duties. Interns completed the curriculum over 4 sessions during their outpatient rotation. The case topics were (1) hypertensive emergency, (2) post‐procedure bleed, (3) congestive heart failure, (4) atrial fibrillation with rapid ventricular response, (5) altered mental status/alcohol withdrawal, (6) nonsustained ventricular tachycardia heralding acute coronary syndrome, (7) cardiac tamponade, and (8) anaphylaxis. During each session, groups of 2 to 3 interns worked through 2 cases using a high‐fidelity mannequin (Laerdal 3G, Wappingers Falls, NY) with 2 resident facilitators. One facilitator operated the mannequin, while the other served as a nurse. Each case was followed by a structured debriefing led by 1 of the resident facilitators (facilitators switched roles for the second case). The number of sessions facilitated varied for each resident based on individual schedules and preferences.

Four senior residents who were appointed as simulation leaders (G.A.A., J.K.H., R.K., Z.S.) and 2 faculty advisors (P.F.C., E.M.M.) administered the program. Simulation resident leaders scheduled facilitators and interns and participated in a portion of simulation sessions as facilitators, but they were not analyzed as participants for the purposes of this study. The curriculum was administered without interfering with clinical duties, and no additional time was protected for interns or residents participating in the curriculum.

Resident‐as‐Teacher Program Structure

We invited participating resident facilitators to attend a 1‐hour interactive workshop prior to serving as facilitators. The workshop focused on building learner‐centered and small‐group teaching skills, as well as introducing residents to a 5‐stage debriefing framework developed by the authors and based on simulation debriefing best practices (Table 1).[12, 13, 14]

Table 1. Stages of Debriefing

Stage of Debriefing | Action | Rationale
Emotional response | Elicit learners' emotions about the case | It is important to acknowledge and address both positive and negative emotions that arise during the case before debriefing the specific medical and communication aspects of the case. Unaddressed emotional responses may hinder subsequent debriefing.
Objectives* | Elicit learners' objectives and combine them with the stated learning objectives of the case to determine debriefing objectives | The limited amount of time allocated for debriefing (15–20 minutes) does not allow the facilitator to cover all aspects of medical management and communication skills in a particular case. Focusing on the most salient objectives, including those identified by the learners, allows the facilitator to engage in learner‐centered debriefing.
Analysis | Analyze the learners' approach to the case | Analyzing the learners' approach to the case using the advocacy‐inquiry method[11] seeks to uncover the learners' assumptions/frameworks behind the decisions made during the case. This approach allows the facilitator to understand the learners' thought process and target teaching points to more precisely address the learners' needs.
Teaching | Address knowledge gaps and incorrect assumptions | Learner‐centered debriefing within a limited timeframe requires teaching to be brief and targeted toward the defined objectives. It should also address the knowledge gaps and incorrect assumptions uncovered during the analysis phase.
Summary | Summarize key takeaways | Summarizing highlights the key points of the debriefing and can be used to suggest further exploration of topics through self‐study (if necessary).

  • NOTE: *To standardize the learner experience, all interns received an e‐mail after each session describing the key learning objectives and takeaway points, with references from the medical literature for each case.

Resident facilitators were observed by simulation faculty and simulation resident leaders throughout the intern curriculum and given structured feedback either in‐person immediately after completion of the simulation session or via a detailed same‐day e‐mail if the time allotted for feedback was not sufficient. Feedback was structured by the 5 stages of debriefing described in Table 1, and included soliciting residents' observations on the teaching experience and specific behaviors observed by faculty during the scenarios. E‐mail feedback (also structured by stages of debriefing and including observed behaviors) was typically followed by verbal feedback during the next simulation session.

The RaT program was composed of 3 elements: the workshop, case facilitation, and direct observation with feedback. Because we felt that the opportunity for directly observed teaching and feedback in a ward‐like controlled environment was a unique advantage offered by the simulation setting, we included all residents who served as facilitators in the analysis, regardless of whether or not they had attended the workshop.

Evaluation Instruments

Survey instruments were developed by the investigators, reviewed by several experts in simulation, pilot tested among residents not participating in the simulation program, and revised by the investigators.

Pre‐program Facilitator Survey

Prior to the RaT workshop, resident facilitators completed a baseline survey evaluating their preparedness to teach and give feedback on the wards and in a simulation‐based setting on a 5‐point scale (see Supporting Information, Appendix I, in the online version of this article).

Post‐program Facilitator Survey

Approximately 3 weeks after completion of the intern simulation curriculum, resident facilitators were asked to complete an online post‐program survey, which remained open for 1 month (residents completed this survey anywhere from 3 weeks to 4 months after their participation in the RaT program depending on the timing of their facilitation). The survey asked residents to evaluate their comfort with their current post‐program teaching skills as well as their pre‐program skills in retrospect, as previous research demonstrated that learners may overestimate their skills prior to training programs.[15] Resident facilitators could complete the surveys nonanonymously to allow for matched‐pairs analysis of the change in teaching skills over the course of the program (see Supporting Information, Appendix II, in the online version of this article).

Intern Evaluation of Facilitator Debriefing Skills

After each case, intern learners were asked to anonymously evaluate the teaching effectiveness of the lead resident facilitator using the adapted Debriefing Assessment for Simulation in Healthcare (DASH) instrument.[16] The DASH instrument evaluated the following domains: (1) instructor maintained an engaging context for learning, (2) instructor structured the debriefing in an organized way, (3) instructor provoked in‐depth discussions that led me to reflect on my performance, (4) instructor identified what I did well or poorly and why, (5) instructor helped me see how to improve or how to sustain good performance, (6) overall effectiveness of the simulation session (see Supporting Information, Appendix III, in the online version of this article).

Post‐program Intern Survey

Two months following the completion of the simulation curriculum, intern learners received an anonymous online post‐program evaluation assessing program efficacy and resident facilitator teaching (see Supporting Information, Appendix IV, in the online version of this article).

Statistical Analysis

Teaching skills and learners' DASH ratings were compared using the Student t test, Pearson χ2 test, and Fisher exact test, as appropriate. Pre‐ and post‐program ratings of teaching skills were analyzed in aggregate and as a matched‐pairs analysis.
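The matched‐pairs analysis described above can be sketched as a paired t test on pre‐ and post‐program ratings. The sketch below is for illustration only: the ratings are invented (the study's per‐resident raw data are not reproduced here), and the helper name `paired_t` is our own.

```python
import math

def paired_t(pre, post):
    """Paired t test: return (t statistic, degrees of freedom) for the
    per-resident differences between post- and pre-program ratings."""
    assert len(pre) == len(post)
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance of the differences (n - 1 in the denominator)
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var / n)  # standard error of the mean difference
    return mean / se, n - 1

# Invented pre/post self-ratings on a 5-point scale for 8 residents
pre = [3, 3, 2, 3, 4, 3, 2, 3]
post = [4, 4, 4, 4, 4, 5, 3, 4]
t, df = paired_t(pre, post)
```

The resulting t statistic and degrees of freedom would then be compared against the t distribution to obtain the P values reported in Tables 2 and 4.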

The study was approved by the Partners Institutional Review Board.

RESULTS

Forty‐one resident facilitators participated in 118 individual simulation sessions encompassing 236 case scenarios. Thirty‐four residents completed the post‐program facilitator survey and were included in the analysis. Of these, 26 (76%) participated in the workshop and completed the pre‐program survey. Twenty‐three of the 34 residents (68%) completed the post‐program evaluation nonanonymously (13 PGY‐II, 10 PGY‐III). Of these, 16 completed the pre‐program survey nonanonymously. The average number of sessions facilitated by each resident was 3.9 (range, 1–12).

Pre‐ and Post‐program Self‐Assessment of Residents' Teaching Skills

Participation in the simulation RaT program led to improvements in resident facilitators' self‐reported teaching skills across multiple domains (Table 2). These results were consistent whether we used the retrospective pre‐program assessment in a matched‐pairs analysis (n = 34) or compared true pre‐program preparedness to post‐program comfort with teaching skills in non‐matched‐pairs (n = 26) and matched‐pairs (n = 16) fashion. We report P values for the more conservative estimates from the retrospective pre‐program matched‐pairs analysis. The most significant improvements occurred in residents' ability to teach in a simulated environment (2.81 to 4.16, P < 0.001 [5‐point scale]) and to give feedback (3.35 to 3.77, P < 0.001).

Table 2. Pre‐ and Post‐program Self‐Assessment of Resident Facilitators' Teaching Skills*

Skill | Pre‐program Rating (n = 34) | Post‐program Rating (n = 34) | P Value
Teaching on rounds | 3.75 | 4.03 | 0.005
Teaching on wards outside rounds | 3.83 | 4.07 | 0.007
Teaching in simulation | 2.81 | 4.16 | <0.001
Giving feedback | 3.35 | 3.77 | <0.001

  • NOTE: *Survey data were collected before participation in the workshop and 3 weeks after completion of the 4‐month curriculum. Five‐point Likert scale: very uncomfortable (1) to very comfortable (5).

Resident facilitators reported that participation in the RaT program had a significant impact on their teaching skills both within and outside of the simulation environment (Table 3). However, the greatest gains were seen in the domain of teaching in simulation. It was also noted that participation in the program improved resident facilitators' medical knowledge.

Table 3. Resident Facilitators' Perceived Improvement in Skills Due to Resident‐as‐Teacher Program

Category | Not at All | Slightly Improved | Moderately Improved | Greatly Improved | Not Sure
Teaching on rounds, n = 34 | 4 (12%) | 12 (35%) | 13 (38%) | 4 (12%) | 1 (3%)
Teaching on wards outside rounds, n = 34 | 3 (9%) | 13 (38%) | 12 (35%) | 5 (15%) | 1 (3%)
Teaching in simulation, n = 34 | 0 (0%) | 4 (12%) | 7 (21%) | 23 (68%) | 0 (0%)
Giving feedback, n = 34 | 4 (12%) | 10 (29%) | 12 (35%) | 6 (18%) | 2 (6%)
Medical knowledge, n = 34 | 2 (6%) | 11 (32%) | 18 (53%) | 3 (9%) | 0 (0%)

Subgroup analyses compared the perceived improvement in teaching and feedback skills among those who did or did not attend the facilitator workshop, those who facilitated 5 or more versus fewer than 5 sessions, and those who did or did not receive direct observation and feedback from faculty. Although numerically greater gains were seen across all 4 domains among those who attended the workshop, facilitated 5 or more sessions, or received feedback from faculty, only the gains in teaching on rounds and on the wards outside rounds reached statistical significance (Table 4). It should be noted that all residents who facilitated 5 or more sessions also attended the workshop and received feedback from faculty. We also compared perceived improvement among PGY‐II and PGY‐III residents. In contrast to PGY‐II residents, who demonstrated improvement in all 4 domains, PGY‐III residents demonstrated improvement only in simulation‐based teaching.

Table 4. Pre‐ and Post‐program Self‐Assessment of Resident Facilitators' Teaching Skills According to Number of Sessions Facilitated, Workshop Attendance, Receipt of Feedback, and PGY Year

Skill | Pre‐program | Post‐program | P Value | Pre‐program | Post‐program | P Value

Facilitated Fewer Than 5 Sessions (n = 18) vs Facilitated 5 or More Sessions (n = 11)
Teaching on rounds | 3.68 | 3.79 | 0.16 | 3.85 | 4.38 | 0.01
Teaching on wards outside rounds | 3.82 | 4 | 0.08 | 3.85 | 4.15 | 0.04
Teaching in simulation | 2.89 | 4.06 | <0.01 | 2.69 | 4.31 | <0.01
Giving feedback | 3.33 | 3.67 | 0.01 | 3.38 | 3.92 | 0.01

Did Not Attend Workshop (n = 10) vs Attended Workshop (n = 22)
Teaching on rounds | 4 | 4.1 | 0.34 | 3.64 | 4 | <0.01
Teaching on wards outside rounds | 4 | 4 | 1.00 | 3.76 | 4.1 | <0.01
Teaching in simulation | 2.89 | 4.11 | <0.01 | 2.77 | 4.18 | <0.01
Giving feedback | 3.56 | 3.78 | 0.17 | 3.27 | 3.77 | <0.01

Received Feedback From Resident Leaders Only (n = 11) vs Received Faculty Feedback (n = 21)
Teaching on rounds | 3.55 | 3.82 | 0.19 | 3.86 | 4.14 | 0.01
Teaching on wards outside rounds | 4 | 4 | 1.00 | 3.75 | 4.1 | <0.01
Teaching in simulation | 2.7 | 3.8 | <0.01 | 2.86 | 4.33 | <0.01
Giving feedback | 3.2 | 3.6 | 0.04 | 3.43 | 3.86 | <0.01

PGY‐II (n = 13) vs PGY‐III (n = 9)
Teaching on rounds | 3.38 | 3.85 | 0.03 | 4.22 | 4.22 | 1
Teaching on wards outside rounds | 3.54 | 3.85 | 0.04 | 4.14 | 4.14 | 1
Teaching in simulation | 2.46 | 4.15 | <0.01 | 3.13 | 4.13 | <0.01
Giving feedback | 3.23 | 3.62 | 0.02 | 3.5 | 3.88 | 0.08

  • NOTE: Abbreviations: PGY, postgraduate year. Within each comparison, the left three columns refer to the first subgroup and the right three columns to the second.

Intern Learners' Assessment of Resident Facilitators and the Program Overall

During the course of the program, intern learners completed 166 DASH ratings evaluating 34 resident facilitators (see Supporting Information, Appendix V, in the online version of this article). Ratings for the 6 DASH items ranged from 6.49 to 6.73 (7‐point scale), demonstrating a high level of facilitator efficacy across multiple domains. No differences in DASH scores were noted among subgroups of resident facilitators described in the previous paragraph.

Thirty‐eight of 52 intern learners (73%) completed the post‐program survey.

Table 5. Resident Facilitators' Use of Specific Teaching Skills During Debriefing as Rated by Intern Learners

Facilitator Behavior | Very Often (>75%) | Often (>50%) | Sometimes (25%–50%) | Rarely (<25%) | Never
Elicited emotional reactions, n = 38 | 18 (47%) | 16 (42%) | 4 (11%) | 0 (0%) | 0 (0%)
Elicited objectives from learner, n = 37 | 26 (69%) | 8 (22%) | 2 (6%) | 1 (3%) | 0 (0%)
Asked to share clinical reasoning, n = 38 | 21 (56%) | 13 (33%) | 4 (11%) | 0 (0%) | 0 (0%)
Summarized learning points, n = 38 | 31 (81%) | 7 (19%) | 0 (0%) | 0 (0%) | 0 (0%)
Spoke for less than half of the session, n = 38 | 8 (22%) | 17 (44%) | 11 (28%) | 2 (6%) | 0 (0%)

All intern learners rated the overall simulation experience as either excellent (81%) or good (19%) on the post‐program evaluation (5 or 4 on a 5‐point Likert scale, respectively). All interns strongly agreed (72%) or agreed (28%) that the simulation sessions improved their ability to manage acute clinical scenarios. Interns reported that resident facilitators frequently utilized the specific debriefing techniques covered in the RaT curriculum during the debriefing sessions (Table 5).

DISCUSSION

We describe a unique RaT program embedded within a high‐fidelity medical simulation curriculum for IM interns. Our study demonstrates that resident facilitators noted an improvement in their teaching and feedback skills, both in the simulation setting and on the wards. Intern learners rated residents' teaching skills and the overall simulation curriculum highly, suggesting that residents were effective teachers.

The use of simulation in trainee‐as‐teacher curricula holds promise because it can provide an opportunity to teach in an environment closely approximating the wards, where trainees have the most opportunities to teach. However, in contrast to true ward‐based teaching, simulation can provide predictable scenarios in a controlled environment, which eliminates the distractions and unpredictability that exist on the wards, without compromising patient safety. Recently, Tofil et al. described the first use of simulation in a trainee‐as‐teacher program.[17] The investigators utilized a 1‐time simulation‐based teaching session, during which pediatric fellows completed a teacher‐training workshop, developed and served as facilitators in a simulated case, and received feedback. The use of simulation allowed fellows an opportunity to apply newly acquired skills in a controlled environment and receive feedback, which has been shown to improve teaching skills.[18]

The experience from our program expands on that of Tofil et al., as well as previously described trainee‐as‐teacher curricula, by introducing a component of deliberate practice that is unique to the simulation setting and has been absent from most previously described RaT programs.[3] Most residents had the opportunity to facilitate the same case on multiple occasions, allowing them to receive feedback and make adjustments. Residents who facilitated 5 or more sessions demonstrated more improvement, particularly in teaching outside of simulation, than residents who facilitated fewer sessions. It is notable that PGY‐II resident facilitators reported an improvement in their teaching skills on the wards, though less pronounced as compared to teaching in the simulation‐based environment, suggesting that benefits of the program may extend to nonsimulation‐based settings. Additional studies focusing on objective evaluation of ward‐based teaching are needed to further explore this phenomenon. Finally, the self‐reported improvements in medical knowledge by resident facilitators may serve as another benefit of our program.

Analysis of learner‐level data collected in the postcurriculum intern survey and DASH ratings provides additional support for the effectiveness of the RaT program. The majority of intern learners reported that resident facilitators used the techniques covered in our program frequently during debriefings. In addition, DASH scores clustered around maximum efficacy for all facilitators, suggesting that residents were effective teachers. Although we cannot directly assess whether the differences demonstrated in resident facilitators' self‐assessments translated to their teaching or were significant from the learners' perspective, these results support the hypothesis that self‐assessed improvements in teaching and feedback skills were significant.

In addition to improving resident teaching skills, our program had a positive impact on intern learners as evidenced by intern evaluations of the simulation curriculum. While utilizing relatively few faculty resources, our program was able to deliver an extensive and well‐received simulation curriculum to over 50 interns. The fact that 40% of second‐ and third‐year residents volunteered to teach in the program despite the early morning timing of the sessions speaks to the interest that trainees have in teaching in this setting. This model can serve as an important and efficient learning platform in residency training programs. It may be particularly salient to IM training programs where implementation of simulation curricula is challenging due to large numbers of residents and limited faculty resources. The barriers to and lessons learned from our experience with implementing the simulation curriculum have been previously described.[10, 11]

Our study has several limitations. Changes in residents' teaching skills were self‐assessed, which may be inaccurate, as learners may overestimate their abilities.[19] Although we collected data on the experiences of intern learners that supported residents' self‐assessments, further studies using more objective measures (such as the Objective Structured Teaching Exercise[20]) should be undertaken. We did not objectively assess improvement of residents' teaching skills on the wards beyond the residents' self‐assessment. Due to the timing of survey administration, some residents had as little as 1 month between completion of the curriculum and responding to the post‐curriculum survey, limiting their ability to evaluate their teaching skills on the wards. The transferability of skills gained in simulation‐based teaching to teaching on the wards deserves further study. Without a control group, we cannot definitively attribute the perceived improvement in teaching skills to the RaT program. However, the frequent use of the recommended debriefing techniques, which are not typically taught in other settings, supports the efficacy of the RaT program.

Our study did not allow us to determine which of the 3 components of the RaT program (workshop, facilitation practice, or direct observation and feedback) had the greatest impact on teaching skills or DASH ratings, as those who facilitated more sessions also completed the other components of the program. Furthermore, there may have been a selection bias among facilitators who facilitated more sessions. Because only 16 of 34 participants completed both the pre‐program and post‐program self‐assessments in a non‐anonymous fashion, we were not able to analyze the effect of pre‐program factors, such as prior teaching experience, on program outcomes. It should also be noted that allowing resident facilitators the option to complete the survey non‐anonymously could have biased our results. The simulation curriculum was conducted in a single center, and resident facilitators were self‐selecting; therefore, our results may not be generalizable. Finally, the DASH instrument was only administered after the RaT workshop and was likely limited further by the ceiling effect created by the learners' high satisfaction with the simulation program overall.

In summary, our simulation‐based RaT program improved resident facilitators' self‐reported teaching and feedback skills. Simulation‐based training provided an opportunity for deliberate practice of teaching skills in a controlled environment, which was a unique component of the program. The impact of deliberate practice on resident teaching skills and optimal methods to incorporate deliberate practice in RaT programs deserves further study. Our curriculum design may serve as a model for the development of simulation programs that can be employed to improve both intern learning and resident teaching skills.

Acknowledgements

The authors acknowledge Deborah Navedo, PhD, Assistant Professor, Massachusetts General Hospital Institute of Health Professions, and Emily M. Hayden, MD, Assistant Professor, Department of Emergency Medicine, Massachusetts General Hospital and Harvard Medical School, for their assistance with development of the RaT curriculum. The authors thank Dr. Jenny Rudolph, Senior Director, Institute for Medical Simulation at the Center for Medical Simulation, for her help in teaching us to use the DASH instrument. The authors also thank Dr. Daniel Hunt, MD, Associate Professor, Department of Medicine, Massachusetts General Hospital and Harvard Medical School, for his thoughtful review of this manuscript.

Disclosure: Nothing to report.

References
  1. Liaison Committee on Medical Education. Functions and structure of a medical school: standards for accreditation of medical education programs leading to the M.D. degree. Washington, DC, and Chicago, IL: Association of American Medical Colleges and American Medical Association; 2000.
  2. Accreditation Council for Graduate Medical Education. ACGME program requirements for graduate medical education in pediatrics. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/2013-PR-FAQ-PIF/320_pediatrics_07012013.pdf. Accessed June 18, 2014.
  3. Hill AG, Yu TC, Barrow M, Hattie J. A systematic review of resident‐as‐teacher programmes. Med Educ. 2009;43(12):1129–1140.
  4. Okuda Y, Bryson EO, DeMaria S, et al. The utility of simulation in medical education: what is the evidence? Mt Sinai J Med. 2009;76:330–343.
  5. Okuda Y, Bond WF, Bonfante G, et al. National growth in simulation training within emergency medicine residency programs, 2003–2008. Acad Emerg Med. 2008;15:1113–1116.
  6. Fernandez R, Wang E, Vozenilek JA, et al. Simulation center accreditation and programmatic benchmarks: a review for emergency medicine. Acad Emerg Med. 2010;17(10):1093–1103.
  7. Cook DA. How much evidence does it take? A cumulative meta‐analysis of outcomes of simulation‐based education. Med Educ. 2014;48(8):750–760.
  8. Farrell SE, Pacella C, Egan D, et al. Resident‐as‐teacher: a suggested curriculum for emergency medicine. Acad Emerg Med. 2006;13(6):677–679.
  9. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 suppl):S70–S81.
  10. Miloslavsky EM, Hayden EM, Currier PF, Mathai SK, Contreras‐Valdes F, Gordon JA. Pilot program utilizing medical simulation in clinical decision making training for internal medicine interns. J Grad Med Educ. 2012;4:490–495.
  11. Mathai SK, Miloslavsky EM, Contreras‐Valdes FM, et al. How we implemented a resident‐led medical simulation curriculum in a large internal medicine residency program. Med Teach. 2014;36(4):279–283.
  12. Rudolph JW, Simon R, Dufresne RL, Raemer DB. There's no such thing as “nonjudgmental” debriefing: a theory and method for debriefing with good judgment. Simul Healthc. 2006;1:49–55.
  13. Minehart RD, Rudolph JW, Pian‐Smith MCM, Raemer DB. Improving faculty feedback to resident trainees during a simulated case: a randomized, controlled trial of an educational intervention. Anesthesiology. 2014;120(1):160–171.
  14. Gardner R. Introduction to debriefing. Semin Perinatol. 2013;37(3):166–174.
  15. Howard G, Dailey PR. Response‐shift bias: a source of contamination of self‐report measures. J Appl Psychol. 1979;4:93–106.
  16. Center for Medical Simulation. Debriefing assessment for simulation in healthcare. Available at: http://www.harvardmedsim.org/debriefing‐assesment‐simulation‐healthcare.php. Accessed June 18, 2014.
  17. Tofil NM, Peterson DT, Harrington KF, et al. A novel iterative‐learner simulation model: fellows as teachers. J Grad Med Educ. 2014;6(1):127–132.
  18. Regan‐Smith M, Hirschmann K, Iobst W. Direct observation of faculty with feedback: an effective means of improving patient‐centered and learner‐centered teaching skills. Teach Learn Med. 2007;19(3):278–286.
  19. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self‐assessments. J Pers Soc Psychol. 1999;77:1121–1134.
  20. Morrison EH, Boker JR, Hollingshead J, et al. Reliability and validity of an objective structured teaching examination for generalist resident teachers. Acad Med. 2002;77(10 suppl):S29–S32.
Journal of Hospital Medicine. 10(12):767–772.

Residency training, in addition to developing clinical competence among trainees, is charged with improving resident teaching skills. The Liaison Committee on Medical Education and the Accreditation Council for Graduate Medical Education require that residents be provided with training or resources to develop their teaching skills.[1, 2] A variety of resident‐as‐teacher (RaT) programs have been described; however, the optimal format of such programs remains in question.[3] High‐fidelity medical simulation using mannequins has been shown to be an effective teaching tool in various medical specialties[4, 5, 6, 7] and may prove to be useful in teacher training.[8] Teaching in a simulation‐based environment can give participants the opportunity to apply their teaching skills in a clinical environment, as they would on the wards, but in a more controlled, predictable setting and without compromising patient safety. In addition, simulation offers the opportunity to engage in deliberate practice by allowing teachers to facilitate the same case on multiple occasions with different learners. Deliberate practice, which involves task repetition with feedback aimed at improving performance, has been shown to be important in developing expertise.[9]

We previously described the first use of a high‐fidelity simulation curriculum for internal medicine (IM) interns focused on clinical decision‐making skills, in which second‐ and third‐year residents served as facilitators.[10, 11] Herein, we describe a RaT program in which residents participated in a workshop, then served as facilitators in the intern curriculum and received feedback from faculty. We hypothesized that such a program would improve residents' teaching and feedback skills, both in the simulation environment and on the wards.

METHODS

We conducted a single‐group study evaluating teaching and feedback skills among upper‐level resident facilitators before and after participation in the RaT program. We measured residents' teaching skills using pre‐ and post‐program self‐assessments as well as evaluations completed by the intern learners after each session and at the completion of the curriculum.

Setting and Participants

We embedded the RaT program within a simulation curriculum administered July to October of 2013 for all IM interns at Massachusetts General Hospital (interns in the preliminary program who planned to pursue another field after the completion of the intern year were excluded) (n = 52). We invited postgraduate year (PGY) II and III residents (n = 102) to participate in the IM simulation program as facilitators via email. The curriculum consisted of 8 cases focusing on acute clinical scenarios encountered on the general medicine wards. The cases were administered during 1‐hour sessions 4 mornings per week from 7 AM to 8 AM prior to clinical duties. Interns completed the curriculum over 4 sessions during their outpatient rotation. The case topics were (1) hypertensive emergency, (2) post‐procedure bleed, (3) congestive heart failure, (4) atrial fibrillation with rapid ventricular response, (5) altered mental status/alcohol withdrawal, (6) nonsustained ventricular tachycardia heralding acute coronary syndrome, (7) cardiac tamponade, and (8) anaphylaxis. During each session, groups of 2 to 3 interns worked through 2 cases using a high‐fidelity mannequin (Laerdal 3G, Wappingers Falls, NY) with 2 resident facilitators. One facilitator operated the mannequin, while the other served as a nurse. Each case was followed by a structured debriefing led by 1 of the resident facilitators (facilitators switched roles for the second case). The number of sessions facilitated varied for each resident based on individual schedules and preferences.

Four senior residents who were appointed as simulation leaders (G.A.A., J.K.H., R.K., Z.S.) and 2 faculty advisors (P.F.C., E.M.M.) administered the program. Simulation resident leaders scheduled facilitators and interns and participated in a portion of simulation sessions as facilitators, but they were not analyzed as participants for the purposes of this study. The curriculum was administered without interfering with clinical duties, and no additional time was protected for interns or residents participating in the curriculum.

Resident‐as‐Teacher Program Structure

We invited participating resident facilitators to attend a 1‐hour interactive workshop prior to serving as facilitators. The workshop focused on building learner‐centered and small‐group teaching skills, as well as introducing residents to a 5‐stage debriefing framework developed by the authors and based on simulation debriefing best practices (Table 1).[12, 13, 14]

Stages of Debriefing
Stage of Debriefing | Action | Rationale
Emotional response | Elicit learners' emotions about the case | It is important to acknowledge and address both positive and negative emotions that arise during the case before debriefing the specific medical and communication aspects of the case. Unaddressed emotional responses may hinder subsequent debriefing.
Objectives* | Elicit learners' objectives and combine them with the stated learning objectives of the case to determine debriefing objectives | The limited amount of time allocated for debriefing (15 to 20 minutes) does not allow the facilitator to cover all aspects of medical management and communication skills in a particular case. Focusing on the most salient objectives, including those identified by the learners, allows the facilitator to engage in learner‐centered debriefing.
Analysis | Analyze the learners' approach to the case | Analyzing the learners' approach to the case using the advocacy‐inquiry method[11] seeks to uncover the learners' assumptions/frameworks behind the decisions made during the case. This approach allows the facilitator to understand the learners' thought process and target teaching points to more precisely address the learners' needs.
Teaching | Address knowledge gaps and incorrect assumptions | Learner‐centered debriefing within a limited timeframe requires teaching to be brief and targeted toward the defined objectives. It should also address the knowledge gaps and incorrect assumptions uncovered during the analysis phase.
Summary | Summarize key takeaways | Summarizing highlights the key points of the debriefing and can be used to suggest further exploration of topics through self‐study (if necessary).
NOTE: *To standardize the learner experience, all interns received an e‐mail after each session describing the key learning objectives and takeaway points with references from the medical literature for each case.
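The five-stage framework in Table 1 can be treated as a simple ordered protocol. As an illustrative sketch only (this code is not part of the study; the stage names come from the table, everything else is a hypothetical convenience), the stages could be captured as structured data, for example to generate a facilitator checklist or feedback form:

```python
# Hypothetical sketch: the five debriefing stages of Table 1 as ordered data.
# Stage names and actions follow the table; the checklist helper is illustrative.
from collections import OrderedDict

DEBRIEFING_STAGES = OrderedDict([
    ("Emotional response", "Elicit learners' emotions about the case"),
    ("Objectives", "Elicit learners' objectives and merge with case objectives"),
    ("Analysis", "Analyze the learners' approach (advocacy-inquiry)"),
    ("Teaching", "Address knowledge gaps and incorrect assumptions"),
    ("Summary", "Summarize key takeaways"),
])

def checklist():
    """Return a numbered facilitator checklist, one line per stage."""
    return [f"{i}. {stage}: {action}"
            for i, (stage, action) in enumerate(DEBRIEFING_STAGES.items(), 1)]

for line in checklist():
    print(line)
```

Keeping the stages in a single ordered structure makes it easy to reuse the same sequence for the facilitator workshop handout and for the structured feedback described below.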

Resident facilitators were observed by simulation faculty and simulation resident leaders throughout the intern curriculum and given structured feedback either in‐person immediately after completion of the simulation session or via a detailed same‐day e‐mail if the time allotted for feedback was not sufficient. Feedback was structured by the 5 stages of debriefing described in Table 1, and included soliciting residents' observations on the teaching experience and specific behaviors observed by faculty during the scenarios. E‐mail feedback (also structured by stages of debriefing and including observed behaviors) was typically followed by verbal feedback during the next simulation session.

The RaT program was composed of 3 elements: the workshop, case facilitation, and direct observation with feedback. Because we felt that the opportunity for directly observed teaching and feedback in a ward‐like controlled environment was a unique advantage offered by the simulation setting, we included all residents who served as facilitators in the analysis, regardless of whether or not they had attended the workshop.

Evaluation Instruments

Survey instruments were developed by the investigators, reviewed by several experts in simulation, pilot tested among residents not participating in the simulation program, and revised by the investigators.

Pre‐program Facilitator Survey

Prior to the RaT workshop, resident facilitators completed a baseline survey evaluating their preparedness to teach and give feedback on the wards and in a simulation‐based setting on a 5‐point scale (see Supporting Information, Appendix I, in the online version of this article).

Post‐program Facilitator Survey

Approximately 3 weeks after completion of the intern simulation curriculum, resident facilitators were asked to complete an online post‐program survey, which remained open for 1 month (residents completed this survey anywhere from 3 weeks to 4 months after their participation in the RaT program depending on the timing of their facilitation). The survey asked residents to evaluate their comfort with their current post‐program teaching skills as well as their pre‐program skills in retrospect, as previous research demonstrated that learners may overestimate their skills prior to training programs.[15] Resident facilitators could complete the surveys nonanonymously to allow for matched‐pairs analysis of the change in teaching skills over the course of the program (see Supporting Information, Appendix II, in the online version of this article).

Intern Evaluation of Facilitator Debriefing Skills

After each case, intern learners were asked to anonymously evaluate the teaching effectiveness of the lead resident facilitator using the adapted Debriefing Assessment for Simulation in Healthcare (DASH) instrument.[16] The DASH instrument evaluated the following domains: (1) instructor maintained an engaging context for learning, (2) instructor structured the debriefing in an organized way, (3) instructor provoked in‐depth discussions that led me to reflect on my performance, (4) instructor identified what I did well or poorly and why, (5) instructor helped me see how to improve or how to sustain good performance, (6) overall effectiveness of the simulation session (see Supporting Information, Appendix III, in the online version of this article).

Post‐program Intern Survey

Two months following the completion of the simulation curriculum, intern learners received an anonymous online post‐program evaluation assessing program efficacy and resident facilitator teaching (see Supporting Information, Appendix IV, in the online version of this article).

Statistical Analysis

Teaching skills and learners' DASH ratings were compared using the Student t test, Pearson chi‐square test, and Fisher exact test as appropriate. Pre‐ and post‐program ratings of teaching skills were compared both in aggregate and in a matched‐pairs analysis.
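The matched-pairs comparison above can be sketched as follows. This is an illustrative example only, using hypothetical 5-point Likert ratings rather than the study's data; it shows the paired t statistic, in which each facilitator serves as their own control:

```python
# Illustrative matched-pairs (paired) t test on hypothetical pre/post
# self-ratings; not the study's actual code or data.
import math
from statistics import mean, stdev

pre = [3, 2, 3, 3, 4, 2, 3, 3, 2, 3]   # hypothetical pre-program ratings
post = [4, 4, 4, 4, 5, 4, 4, 3, 4, 4]  # hypothetical post-program ratings

# Test whether the mean within-person difference departs from zero.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # df = n - 1
print(f"t = {t_stat:.2f} with {n - 1} degrees of freedom")
```

Where scipy is available, `scipy.stats.ttest_rel(post, pre)` computes the same statistic and also returns the corresponding P value.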

The study was approved by the Partners Institutional Review Board.

RESULTS

Forty‐one resident facilitators participated in 118 individual simulation sessions encompassing 236 case scenarios. Thirty‐four residents completed the post‐program facilitator survey and were included in the analysis. Of these, 26 (76%) participated in the workshop and completed the pre‐program survey. Twenty‐three of the 34 residents (68%) completed the post‐program evaluation nonanonymously (13 PGY‐II, 10 PGY‐III). Of these, 16 completed the pre‐program survey nonanonymously. The average number of sessions facilitated by each resident was 3.9 (range, 1–12).

Pre‐ and Post‐program Self‐Assessment of Residents' Teaching Skills

Participation in the simulation RaT program led to improvements in resident facilitators' self‐reported teaching skills across multiple domains (Table 2). These results were consistent when using the retrospective pre‐program assessment in matched‐pairs analysis (n = 34) and when performing the analysis using the true pre‐program preparedness compared to post‐program comfort with teaching skills in a non‐matched‐pairs fashion (n = 26) and matched‐pairs fashion (n = 16). We report P values for the more conservative estimates using the retrospective pre‐program assessment matched‐pairs analysis. The largest improvements occurred in residents' ability to teach in a simulated environment (2.81 to 4.16, P < 0.001 [5‐point scale]) and give feedback (3.35 to 3.77, P < 0.001).

Pre‐ and Post‐program Self‐Assessment of Resident Facilitators' Teaching Skills*
Skill | Pre‐program Rating (n = 34) | Post‐program Rating (n = 34) | P Value
Teaching on rounds | 3.75 | 4.03 | 0.005
Teaching on wards outside rounds | 3.83 | 4.07 | 0.007
Teaching in simulation | 2.81 | 4.16 | <0.001
Giving feedback | 3.35 | 3.77 | <0.001
NOTE: *Survey data were collected before participation in the workshop and 3 weeks after completion of the 4‐month curriculum. Five‐point Likert scale: very uncomfortable (1) to very comfortable (5).

Resident facilitators reported that participation in the RaT program had a significant impact on their teaching skills both within and outside of the simulation environment (Table 3). However, the greatest gains were seen in the domain of teaching in simulation. It was also noted that participation in the program improved resident facilitators' medical knowledge.

Resident Facilitators' Perceived Improvement in Skills Due to Resident‐as‐Teacher Program
Category | Not at All | Slightly Improved | Moderately Improved | Greatly Improved | Not Sure
Teaching on rounds (n = 34) | 4 (12%) | 12 (35%) | 13 (38%) | 4 (12%) | 1 (3%)
Teaching on wards outside rounds (n = 34) | 3 (9%) | 13 (38%) | 12 (35%) | 5 (15%) | 1 (3%)
Teaching in simulation (n = 34) | 0 (0%) | 4 (12%) | 7 (21%) | 23 (68%) | 0 (0%)
Giving feedback (n = 34) | 4 (12%) | 10 (29%) | 12 (35%) | 6 (18%) | 2 (6%)
Medical knowledge (n = 34) | 2 (6%) | 11 (32%) | 18 (53%) | 3 (9%) | 0 (0%)

Subgroup analyses were performed comparing the perceived improvement in teaching and feedback skills among those who did or did not attend the facilitator workshop, those who facilitated 5 or more versus less than 5 sessions, and those who received or did not receive direct observation and feedback from faculty. Although numerically greater gains were seen across all 4 domains among those who attended the workshop, facilitated 5 or more sessions, or received feedback from faculty, only teaching on rounds and on the wards outside rounds reached statistical significance (Table 4). It should be noted that all residents who facilitated 5 or more sessions also attended the workshop and received feedback from faculty. We also compared perceived improvement among PGY‐II and PGY‐III residents. In contrast to PGY‐II residents, who demonstrated an improvement in all 4 domains, PGY‐III residents only demonstrated improvement in simulation‐based teaching.

Pre‐ and Post‐program Self‐Assessment of Resident Facilitators' Teaching Skills According to Number of Sessions Facilitated, Workshop Attendance, Receipt of Feedback, and PGY Year
(In each row, the first three values are the pre‐program rating, post‐program rating, and P value for the first subgroup; the last three are the same values for the second subgroup.)

Facilitated Less Than 5 Sessions (n = 18) vs Facilitated 5 or More Sessions (n = 11)
Teaching on rounds | 3.68 | 3.79 | 0.16 | 3.85 | 4.38 | 0.01
Teaching on wards outside rounds | 3.82 | 4 | 0.08 | 3.85 | 4.15 | 0.04
Teaching in simulation | 2.89 | 4.06 | <0.01 | 2.69 | 4.31 | <0.01
Giving feedback | 3.33 | 3.67 | 0.01 | 3.38 | 3.92 | 0.01

Did Not Attend Workshop (n = 10) vs Attended Workshop (n = 22)
Teaching on rounds | 4 | 4.1 | 0.34 | 3.64 | 4 | <0.01
Teaching on wards outside rounds | 4 | 4 | 1.00 | 3.76 | 4.1 | <0.01
Teaching in simulation | 2.89 | 4.11 | <0.01 | 2.77 | 4.18 | <0.01
Giving feedback | 3.56 | 3.78 | 0.17 | 3.27 | 3.77 | <0.01

Received Feedback From Resident Leaders Only (n = 11) vs Received Faculty Feedback (n = 21)
Teaching on rounds | 3.55 | 3.82 | 0.19 | 3.86 | 4.14 | 0.01
Teaching on wards outside rounds | 4 | 4 | 1.00 | 3.75 | 4.1 | <0.01
Teaching in simulation | 2.7 | 3.8 | <0.01 | 2.86 | 4.33 | <0.01
Giving feedback | 3.2 | 3.6 | 0.04 | 3.43 | 3.86 | <0.01

PGY‐II (n = 13) vs PGY‐III (n = 9)
Teaching on rounds | 3.38 | 3.85 | 0.03 | 4.22 | 4.22 | 1.00
Teaching on wards outside rounds | 3.54 | 3.85 | 0.04 | 4.14 | 4.14 | 1.00
Teaching in simulation | 2.46 | 4.15 | <0.01 | 3.13 | 4.13 | <0.01
Giving feedback | 3.23 | 3.62 | 0.02 | 3.5 | 3.88 | 0.08

NOTE: Abbreviations: PGY, postgraduate year.

Intern Learners' Assessment of Resident Facilitators and the Program Overall

During the course of the program, intern learners completed 166 DASH ratings evaluating 34 resident facilitators (see Supporting Information, Appendix V, in the online version of this article). Ratings for the 6 DASH items ranged from 6.49 to 6.73 (7‐point scale), demonstrating a high level of facilitator efficacy across multiple domains. No differences in DASH scores were noted among subgroups of resident facilitators described in the previous paragraph.

Thirty‐eight of 52 intern learners (73%) completed the post‐program survey.

Resident Facilitators' Use of Specific Teaching Skills During Debriefing as Rated by Intern Learners
Facilitator Behaviors | Very Often, >75% | Often, >50% | Sometimes, 25%–50% | Rarely, <25% | Never
Elicited emotional reactions (n = 38) | 18 (47%) | 16 (42%) | 4 (11%) | 0 (0%) | 0 (0%)
Elicited objectives from learner (n = 37) | 26 (69%) | 8 (22%) | 2 (6%) | 1 (3%) | 0 (0%)
Asked to share clinical reasoning (n = 38) | 21 (56%) | 13 (33%) | 4 (11%) | 0 (0%) | 0 (0%)
Summarized learning points (n = 38) | 31 (81%) | 7 (19%) | 0 (0%) | 0 (0%) | 0 (0%)
Spoke for less than half of the session (n = 38) | 8 (22%) | 17 (44%) | 11 (28%) | 2 (6%) | 0 (0%)

All intern learners rated the overall simulation experience as either excellent (81%) or good (19%) on the post‐program evaluation (5 or 4 on a 5‐point Likert scale, respectively). All interns strongly agreed (72%) or agreed (28%) that the simulation sessions improved their ability to manage acute clinical scenarios. Interns reported that resident facilitators frequently utilized specific debriefing techniques covered in the RaT curriculum during the debriefing sessions (Table 5).

DISCUSSION

We describe a unique RaT program embedded within a high‐fidelity medical simulation curriculum for IM interns. Our study demonstrates that resident facilitators noted an improvement in their teaching and feedback skills, both in the simulation setting and on the wards. Intern learners rated residents' teaching skills and the overall simulation curriculum highly, suggesting that residents were effective teachers.

The use of simulation in trainee‐as‐teacher curricula holds promise because it can provide an opportunity to teach in an environment closely approximating the wards, where trainees have the most opportunities to teach. However, in contrast to true ward‐based teaching, simulation can provide predictable scenarios in a controlled environment, which eliminates the distractions and unpredictability that exist on the wards, without compromising patient safety. Recently, Tofil et al. described the first use of simulation in a trainee‐as‐teacher program.[17] The investigators utilized a 1‐time simulation‐based teaching session, during which pediatric fellows completed a teacher‐training workshop, developed and served as facilitators in a simulated case, and received feedback. The use of simulation allowed fellows an opportunity to apply newly acquired skills in a controlled environment and receive feedback, which has been shown to improve teaching skills.[18]

The experience from our program expands on that of Tofil et al., as well as previously described trainee‐as‐teacher curricula, by introducing a component of deliberate practice that is unique to the simulation setting and has been absent from most previously described RaT programs.[3] Most residents had the opportunity to facilitate the same case on multiple occasions, allowing them to receive feedback and make adjustments. Residents who facilitated 5 or more sessions demonstrated more improvement, particularly in teaching outside of simulation, than residents who facilitated fewer sessions. It is notable that PGY‐II resident facilitators reported an improvement in their teaching skills on the wards, though less pronounced than in the simulation‐based environment, suggesting that benefits of the program may extend to nonsimulation‐based settings. Additional studies focusing on objective evaluation of ward‐based teaching are needed to further explore this phenomenon. Finally, the self‐reported improvement in medical knowledge by resident facilitators may serve as another benefit of our program.

Analysis of learner‐level data collected in the postcurriculum intern survey and DASH ratings provides additional support for the effectiveness of the RaT program. The majority of intern learners reported that resident facilitators used the techniques covered in our program frequently during debriefings. In addition, DASH scores clustered around maximum efficacy for all facilitators, suggesting that residents were effective teachers. Although we cannot directly assess whether the differences demonstrated in resident facilitators' self‐assessments translated to their teaching or were significant from the learners' perspective, these results support the hypothesis that self‐assessed improvements in teaching and feedback skills were significant.

In addition to improving resident teaching skills, our program had a positive impact on intern learners as evidenced by intern evaluations of the simulation curriculum. While utilizing relatively few faculty resources, our program was able to deliver an extensive and well‐received simulation curriculum to over 50 interns. The fact that 40% of second‐ and third‐year residents volunteered to teach in the program despite the early morning timing of the sessions speaks to the interest that trainees have in teaching in this setting. This model can serve as an important and efficient learning platform in residency training programs. It may be particularly salient to IM training programs where implementation of simulation curricula is challenging due to large numbers of residents and limited faculty resources. The barriers to and lessons learned from our experience with implementing the simulation curriculum have been previously described.[10, 11]

Our study has several limitations. Changes in residents' teaching skills were self‐assessed, which may be inaccurate as learners may overestimate their abilities.[19] Although we collected data on the experiences of intern learners that supported residents' self‐assessment, further studies using more objective measures (such as the Objective Structured Teaching Exercise[20]) should be undertaken. We did not objectively assess improvement of residents' teaching skills on the wards, with the exception of the residents' self‐assessment. Due to the timing of survey administration, some residents had as little as 1 month between completion of the curriculum and responding to the post‐curriculum survey, limiting their ability to evaluate their teaching skills on the wards. The transferability of the skills gained in simulation‐based teaching to teaching on the wards deserves further study. We cannot definitively attribute perceived improvement of teaching skills to the RaT program without a control group. However, the frequent use of recommended techniques during debriefing, which are not typically taught in other settings, supports the efficacy of the RaT program.

Our study did not allow us to determine which of the 3 components of the RaT program (workshop, facilitation practice, or direct observation and feedback) had the greatest impact on teaching skills or DASH ratings, as those who facilitated more sessions also completed the other components of the program. Furthermore, there may have been a selection bias among facilitators who facilitated more sessions. Because only 16 of 34 participants completed both the pre‐program and post‐program self‐assessments in a non‐anonymous fashion, we were not able to analyze the effect of pre‐program factors, such as prior teaching experience, on program outcomes. It should also be noted that allowing resident facilitators the option to complete the survey non‐anonymously could have biased our results. The simulation curriculum was conducted in a single center, and resident facilitators were self‐selecting; therefore, our results may not be generalizable. Finally, the DASH instrument was only administered after the RaT workshop and was likely limited further by the ceiling effect created by the learners' high satisfaction with the simulation program overall.

In summary, our simulation‐based RaT program improved resident facilitators' self‐reported teaching and feedback skills. Simulation‐based training provided an opportunity for deliberate practice of teaching skills in a controlled environment, which was a unique component of the program. The impact of deliberate practice on resident teaching skills and optimal methods to incorporate deliberate practice in RaT programs deserves further study. Our curriculum design may serve as a model for the development of simulation programs that can be employed to improve both intern learning and resident teaching skills.

Acknowledgements

The authors acknowledge Deborah Navedo, PhD, Assistant Professor, Massachusetts General Hospital Institute of Health Professions, and Emily M. Hayden, MD, Assistant Professor, Department of Emergency Medicine, Massachusetts General Hospital and Harvard Medical School, for their assistance with development of the RaT curriculum. The authors thank Jenny Rudolph, PhD, Senior Director, Institute for Medical Simulation at the Center for Medical Simulation, for her help in teaching us to use the DASH instrument. The authors also thank Daniel Hunt, MD, Associate Professor, Department of Medicine, Massachusetts General Hospital and Harvard Medical School, for his thoughtful review of this manuscript.

Disclosure: Nothing to report.

Residency training, in addition to developing clinical competence among trainees, is charged with improving resident teaching skills. The Liaison Committee on Medical Education and the Accreditation Council for Graduate Medical Education require that residents be provided with training or resources to develop their teaching skills.[1, 2] A variety of resident‐as‐teacher (RaT) programs have been described; however, the optimal format of such programs remains in question.[3] High‐fidelity medical simulation using mannequins has been shown to be an effective teaching tool in various medical specialties[4, 5, 6, 7] and may prove to be useful in teacher training.[8] Teaching in a simulation‐based environment can give participants the opportunity to apply their teaching skills in a clinical environment, as they would on the wards, but in a more controlled, predictable setting and without compromising patient safety. In addition, simulation offers the opportunity to engage in deliberate practice by allowing teachers to facilitate the same case on multiple occasions with different learners. Deliberate practice, which involves task repetition with feedback aimed at improving performance, has been shown to be important in developing expertise.[9]

We previously described the first use of a high‐fidelity simulation curriculum for internal medicine (IM) interns focused on clinical decision‐making skills, in which second‐ and third‐year residents served as facilitators.[10, 11] Herein, we describe a RaT program in which residents participated in a workshop, then served as facilitators in the intern curriculum and received feedback from faculty. We hypothesized that such a program would improve residents' teaching and feedback skills, both in the simulation environment and on the wards.

METHODS

We conducted a single‐group study evaluating teaching and feedback skills among upper‐level resident facilitators before and after participation in the RaT program. We measured residents' teaching skills using pre‐ and post‐program self‐assessments as well as evaluations completed by the intern learners after each session and at the completion of the curriculum.

Setting and Participants

We embedded the RaT program within a simulation curriculum administered July to October of 2013 for all IM interns at Massachusetts General Hospital (interns in the preliminary program who planned to pursue another field after the completion of the intern year were excluded) (n = 52). We invited postgraduate year (PGY) II and III residents (n = 102) to participate in the IM simulation program as facilitators via email. The curriculum consisted of 8 cases focusing on acute clinical scenarios encountered on the general medicine wards. The cases were administered during 1‐hour sessions 4 mornings per week from 7 AM to 8 AM prior to clinical duties. Interns completed the curriculum over 4 sessions during their outpatient rotation. The case topics were (1) hypertensive emergency, (2) post‐procedure bleed, (3) congestive heart failure, (4) atrial fibrillation with rapid ventricular response, (5) altered mental status/alcohol withdrawal, (6) nonsustained ventricular tachycardia heralding acute coronary syndrome, (7) cardiac tamponade, and (8) anaphylaxis. During each session, groups of 2 to 3 interns worked through 2 cases using a high‐fidelity mannequin (Laerdal 3G, Wappingers Falls, NY) with 2 resident facilitators. One facilitator operated the mannequin, while the other served as a nurse. Each case was followed by a structured debriefing led by 1 of the resident facilitators (facilitators switched roles for the second case). The number of sessions facilitated varied for each resident based on individual schedules and preferences.

Four senior residents who were appointed as simulation leaders (G.A.A., J.K.H., R.K., Z.S.) and 2 faculty advisors (P.F.C., E.M.M.) administered the program. Simulation resident leaders scheduled facilitators and interns and participated in a portion of simulation sessions as facilitators, but they were not analyzed as participants for the purposes of this study. The curriculum was administered without interfering with clinical duties, and no additional time was protected for interns or residents participating in the curriculum.

Resident‐as‐Teacher Program Structure

We invited participating resident facilitators to attend a 1‐hour interactive workshop prior to serving as facilitators. The workshop focused on building learner‐centered and small‐group teaching skills, as well as introducing residents to a 5‐stage debriefing framework developed by the authors and based on simulation debriefing best practices (Table 1).[12, 13, 14]

Stages of Debriefing
Stage of DebriefingActionRationale
  • NOTE: *To standardize the learner experience, all interns received an e‐mail after each session describing the key learning objectives and takeaway points with references from the medical literature for each case.

Emotional responseElicit learners' emotions about the caseIt is important to acknowledge and address both positive and negative emotions that arise during the case before debriefing the specific medical and communications aspects of the case. Unaddressed emotional responses may hinder subsequent debriefing.
Objectives*Elicit learners' objectives and combine them with the stated learning objectives of the case to determine debriefing objectivesThe limited amount of time allocated for debriefing (1520 minutes) does not allow the facilitator to cover all aspects of medical management and communication skills in a particular case. Focusing on the most salient objectives, including those identified by the learners, allows the facilitator to engage in learner‐centered debriefing.
AnalysisAnalyze the learners' approach to the caseAnalyzing the learners' approach to the case using the advocacy‐inquiry method[11] seeks to uncover the learner's assumptions/frameworks behind the decision made during the case. This approach allows the facilitator to understand the learners' thought process and target teaching points to more precisely address the learners' needs.
TeachingAddress knowledge gaps and incorrect assumptionsLearner‐centered debriefing within a limited timeframe requires teaching to be brief and targeted toward the defined objectives. It should also address the knowledge gaps and incorrect assumptions uncovered during the analysis phase.
SummarySummarize key takeawaysSummarizing highlights the key points of the debriefing and can be used to suggest further exploration of topics through self‐study (if necessary).

Resident facilitators were observed by simulation faculty and simulation resident leaders throughout the intern curriculum and given structured feedback either in‐person immediately after completion of the simulation session or via a detailed same‐day e‐mail if the time allotted for feedback was not sufficient. Feedback was structured by the 5 stages of debriefing described in Table 1, and included soliciting residents' observations on the teaching experience and specific behaviors observed by faculty during the scenarios. E‐mail feedback (also structured by stages of debriefing and including observed behaviors) was typically followed by verbal feedback during the next simulation session.

The RaT program was composed of 3 elements: the workshop, case facilitation, and direct observation with feedback. Because we felt that the opportunity for directly observed teaching and feedback in a ward‐like controlled environment was a unique advantage offered by the simulation setting, we included all residents who served as facilitators in the analysis, regardless of whether or not they had attended the workshop.

Evaluation Instruments

Survey instruments were developed by the investigators, reviewed by several experts in simulation, pilot tested among residents not participating in the simulation program, and revised by the investigators.

Pre‐program Facilitator Survey

Prior to the RaT workshop, resident facilitators completed a baseline survey evaluating their preparedness to teach and give feedback on the wards and in a simulation‐based setting on a 5‐point scale (see Supporting Information, Appendix I, in the online version of this article).

Post‐program Facilitator Survey

Approximately 3 weeks after completion of the intern simulation curriculum, resident facilitators were asked to complete an online post‐program survey, which remained open for 1 month (residents completed this survey anywhere from 3 weeks to 4 months after their participation in the RaT program depending on the timing of their facilitation). The survey asked residents to evaluate their comfort with their current post‐program teaching skills as well as their pre‐program skills in retrospect, as previous research demonstrated that learners may overestimate their skills prior to training programs.[15] Resident facilitators could complete the surveys nonanonymously to allow for matched‐pairs analysis of the change in teaching skills over the course of the program (see Supporting Information, Appendix II, in the online version of this article).

Intern Evaluation of Facilitator Debriefing Skills

After each case, intern learners were asked to anonymously evaluate the teaching effectiveness of the lead resident facilitator using the adapted Debriefing Assessment for Simulation in Healthcare (DASH) instrument.[16] The DASH instrument evaluated the following domains: (1) instructor maintained an engaging context for learning, (2) instructor structured the debriefing in an organized way, (3) instructor provoked in‐depth discussions that led me to reflect on my performance, (4) instructor identified what I did well or poorly and why, (5) instructor helped me see how to improve or how to sustain good performance, (6) overall effectiveness of the simulation session (see Supporting Information, Appendix III, in the online version of this article).

Post‐program Intern Survey

Two months following the completion of the simulation curriculum, intern learners received an anonymous online post‐program evaluation assessing program efficacy and resident facilitator teaching (see Supporting Information, Appendix IV, in the online version of this article).

Statistical Analysis

Teaching skills and learners' DASH ratings were compared using the Student t test, Pearson 2 test, and Fisher exact test as appropriate. Pre‐ and post‐program rating of teaching skills was undertaken in aggregate and as a matched‐pairs analysis.

The study was approved by the Partners Institutional Review Board.

RESULTS

Forty‐one resident facilitators participated in 118 individual simulation sessions encompassing 236 case scenarios. Thirty‐four residents completed the post‐program facilitator survey and were included in the analysis. Of these, 26 (76%) participated in the workshop and completed the pre‐program survey. Twenty‐three of the 34 residents (68%) completed the post‐program evaluation nonanonymously (13 PGY‐II, 10 PGY‐III). Of these, 16 completed the pre‐program survey nonanonymously. The average number of sessions facilitated by each resident was 3.9 (range, 1–12).

Pre‐ and Post‐program Self‐Assessment of Residents' Teaching Skills

Participation in the simulation RaT program led to improvements in resident facilitators' self‐reported teaching skills across multiple domains (Table 2). These results were consistent whether we used the retrospective pre‐program assessment in a matched‐pairs analysis (n = 34) or compared true pre‐program preparedness with post‐program comfort in non‐matched‐pairs (n = 26) and matched‐pairs (n = 16) fashion. We report P values for the more conservative estimates from the retrospective pre‐program matched‐pairs analysis. The most significant improvements occurred in residents' ability to teach in a simulated environment (2.81 to 4.16 on a 5‐point scale, P < 0.001) and to give feedback (3.35 to 3.77, P < 0.001).

Table 2. Pre‐ and Post‐program Self‐Assessment of Resident Facilitators' Teaching Skills*

Skill | Pre‐program Rating (n = 34) | Post‐program Rating (n = 34) | P Value
Teaching on rounds | 3.75 | 4.03 | 0.005
Teaching on wards outside rounds | 3.83 | 4.07 | 0.007
Teaching in simulation | 2.81 | 4.16 | <0.001
Giving feedback | 3.35 | 3.77 | <0.001

NOTE: *Survey data were collected before participation in the workshop and 3 weeks after completion of the 4‐month curriculum. Five‐point Likert scale: very uncomfortable (1) to very comfortable (5).

Resident facilitators reported that participation in the RaT program had a significant impact on their teaching skills both within and outside of the simulation environment (Table 3). However, the greatest gains were seen in the domain of teaching in simulation. It was also noted that participation in the program improved resident facilitators' medical knowledge.

Table 3. Resident Facilitators' Perceived Improvement in Skills Due to Resident‐as‐Teacher Program

Category | Not at All | Slightly Improved | Moderately Improved | Greatly Improved | Not Sure
Teaching on rounds (n = 34) | 4 (12%) | 12 (35%) | 13 (38%) | 4 (12%) | 1 (3%)
Teaching on wards outside rounds (n = 34) | 3 (9%) | 13 (38%) | 12 (35%) | 5 (15%) | 1 (3%)
Teaching in simulation (n = 34) | 0 (0%) | 4 (12%) | 7 (21%) | 23 (68%) | 0 (0%)
Giving feedback (n = 34) | 4 (12%) | 10 (29%) | 12 (35%) | 6 (18%) | 2 (6%)
Medical knowledge (n = 34) | 2 (6%) | 11 (32%) | 18 (53%) | 3 (9%) | 0 (0%)

Subgroup analyses were performed comparing the perceived improvement in teaching and feedback skills among those who did or did not attend the facilitator workshop, those who facilitated 5 or more versus less than 5 sessions, and those who received or did not receive direct observation and feedback from faculty. Although numerically greater gains were seen across all 4 domains among those who attended the workshop, facilitated 5 or more sessions, or received feedback from faculty, only teaching on rounds and on the wards outside rounds reached statistical significance (Table 4). It should be noted that all residents who facilitated 5 or more sessions also attended the workshop and received feedback from faculty. We also compared perceived improvement among PGY‐II and PGY‐III residents. In contrast to PGY‐II residents, who demonstrated an improvement in all 4 domains, PGY‐III residents only demonstrated improvement in simulation‐based teaching.

Table 4. Pre‐ and Post‐program Self‐Assessment of Resident Facilitators' Teaching Skills According to Number of Sessions Facilitated, Workshop Attendance, Receipt of Feedback, and PGY Year

Facilitated Less Than 5 Sessions (n = 18) vs. Facilitated 5 or More Sessions (n = 11)
Skill | Pre‐program | Post‐program | P Value | Pre‐program | Post‐program | P Value
Teaching on rounds | 3.68 | 3.79 | 0.16 | 3.85 | 4.38 | 0.01
Teaching on wards outside rounds | 3.82 | 4.00 | 0.08 | 3.85 | 4.15 | 0.04
Teaching in simulation | 2.89 | 4.06 | <0.01 | 2.69 | 4.31 | <0.01
Giving feedback | 3.33 | 3.67 | 0.01 | 3.38 | 3.92 | 0.01

Did Not Attend Workshop (n = 10) vs. Attended Workshop (n = 22)
Skill | Pre‐program | Post‐program | P Value | Pre‐program | Post‐program | P Value
Teaching on rounds | 4.00 | 4.10 | 0.34 | 3.64 | 4.00 | <0.01
Teaching on wards outside rounds | 4.00 | 4.00 | 1.00 | 3.76 | 4.10 | <0.01
Teaching in simulation | 2.89 | 4.11 | <0.01 | 2.77 | 4.18 | <0.01
Giving feedback | 3.56 | 3.78 | 0.17 | 3.27 | 3.77 | <0.01

Received Feedback From Resident Leaders Only (n = 11) vs. Received Faculty Feedback (n = 21)
Skill | Pre‐program | Post‐program | P Value | Pre‐program | Post‐program | P Value
Teaching on rounds | 3.55 | 3.82 | 0.19 | 3.86 | 4.14 | 0.01
Teaching on wards outside rounds | 4.00 | 4.00 | 1.00 | 3.75 | 4.10 | <0.01
Teaching in simulation | 2.70 | 3.80 | <0.01 | 2.86 | 4.33 | <0.01
Giving feedback | 3.20 | 3.60 | 0.04 | 3.43 | 3.86 | <0.01

PGY‐II (n = 13) vs. PGY‐III (n = 9)
Skill | Pre‐program | Post‐program | P Value | Pre‐program | Post‐program | P Value
Teaching on rounds | 3.38 | 3.85 | 0.03 | 4.22 | 4.22 | 1.00
Teaching on wards outside rounds | 3.54 | 3.85 | 0.04 | 4.14 | 4.14 | 1.00
Teaching in simulation | 2.46 | 4.15 | <0.01 | 3.13 | 4.13 | <0.01
Giving feedback | 3.23 | 3.62 | 0.02 | 3.50 | 3.88 | 0.08

NOTE: In each panel, the first 3 data columns refer to the first group and the last 3 to the second group. Abbreviations: PGY, postgraduate year.

Intern Learners' Assessment of Resident Facilitators and the Program Overall

During the course of the program, intern learners completed 166 DASH ratings evaluating 34 resident facilitators (see Supporting Information, Appendix V, in the online version of this article). Ratings for the 6 DASH items ranged from 6.49 to 6.73 (7‐point scale), demonstrating a high level of facilitator efficacy across multiple domains. No differences in DASH scores were noted among subgroups of resident facilitators described in the previous paragraph.

Thirty‐eight of 52 intern learners (73%) completed the post‐program survey.

Table 5. Resident Facilitators' Use of Specific Teaching Skills During Debriefing as Rated by Intern Learners

Facilitator Behaviors | Very Often, >75% | Often, >50% | Sometimes, 25%–50% | Rarely, <25% | Never
Elicited emotional reactions (n = 38) | 18 (47%) | 16 (42%) | 4 (11%) | 0 (0%) | 0 (0%)
Elicited objectives from learner (n = 37) | 26 (69%) | 8 (22%) | 2 (6%) | 1 (3%) | 0 (0%)
Asked to share clinical reasoning (n = 38) | 21 (56%) | 13 (33%) | 4 (11%) | 0 (0%) | 0 (0%)
Summarized learning points (n = 38) | 31 (81%) | 7 (19%) | 0 (0%) | 0 (0%) | 0 (0%)
Spoke for less than half of the session (n = 38) | 8 (22%) | 17 (44%) | 11 (28%) | 2 (6%) | 0 (0%)

All intern learners rated the overall simulation experience as either excellent (81%) or good (19%) on the post‐program evaluation (5 or 4, respectively, on a 5‐point Likert scale). All interns strongly agreed (72%) or agreed (28%) that the simulation sessions improved their ability to manage acute clinical scenarios. Interns reported that resident facilitators frequently utilized the specific debriefing techniques covered in the RaT curriculum during the debriefing sessions (Table 5).

DISCUSSION

We describe a unique RaT program embedded within a high‐fidelity medical simulation curriculum for IM interns. Our study demonstrates that resident facilitators noted an improvement in their teaching and feedback skills, both in the simulation setting and on the wards. Intern learners rated residents' teaching skills and the overall simulation curriculum highly, suggesting that residents were effective teachers.

The use of simulation in trainee‐as‐teacher curricula holds promise because it can provide an opportunity to teach in an environment closely approximating the wards, where trainees have the most opportunities to teach. However, in contrast to true ward‐based teaching, simulation can provide predictable scenarios in a controlled environment, which eliminates the distractions and unpredictability that exist on the wards, without compromising patient safety. Recently, Tofil et al. described the first use of simulation in a trainee‐as‐teacher program.[17] The investigators utilized a 1‐time simulation‐based teaching session, during which pediatric fellows completed a teacher‐training workshop, developed and served as facilitators in a simulated case, and received feedback. The use of simulation allowed fellows an opportunity to apply newly acquired skills in a controlled environment and receive feedback, which has been shown to improve teaching skills.[18]

The experience from our program expands on that of Tofil et al., as well as previously described trainee‐as‐teacher curricula, by introducing a component of deliberate practice that is unique to the simulation setting and has been absent from most previously described RaT programs.[3] Most residents had the opportunity to facilitate the same case on multiple occasions, allowing them to receive feedback and make adjustments. Residents who facilitated 5 or more sessions demonstrated more improvement, particularly in teaching outside of simulation, than residents who facilitated fewer sessions. It is notable that PGY‐II resident facilitators reported an improvement in their teaching skills on the wards, though less pronounced as compared to teaching in the simulation‐based environment, suggesting that benefits of the program may extend to nonsimulation‐based settings. Additional studies focusing on objective evaluation of ward‐based teaching are needed to further explore this phenomenon. Finally, the self‐reported improvements in medical knowledge by resident facilitators may serve as another benefit of our program.

Analysis of learner‐level data collected in the postcurriculum intern survey and DASH ratings provides additional support for the effectiveness of the RaT program. The majority of intern learners reported that resident facilitators used the techniques covered in our program frequently during debriefings. In addition, DASH scores clustered around maximum efficacy for all facilitators, suggesting that residents were effective teachers. Although we cannot directly assess whether the differences demonstrated in resident facilitators' self‐assessments translated to their teaching or were significant from the learners' perspective, these results support the hypothesis that self‐assessed improvements in teaching and feedback skills were significant.

In addition to improving resident teaching skills, our program had a positive impact on intern learners as evidenced by intern evaluations of the simulation curriculum. While utilizing relatively few faculty resources, our program was able to deliver an extensive and well‐received simulation curriculum to over 50 interns. The fact that 40% of second‐ and third‐year residents volunteered to teach in the program despite the early morning timing of the sessions speaks to the interest that trainees have in teaching in this setting. This model can serve as an important and efficient learning platform in residency training programs. It may be particularly salient to IM training programs where implementation of simulation curricula is challenging due to large numbers of residents and limited faculty resources. The barriers to and lessons learned from our experience with implementing the simulation curriculum have been previously described.[10, 11]

Our study has several limitations. Changes in residents' teaching skills were self‐assessed, which may be inaccurate because learners may overestimate their abilities.[19] Although we collected data on the experiences of intern learners that supported residents' self‐assessments, further studies using more objective measures (such as the Objective Structured Teaching Exercise[20]) should be undertaken. Apart from the residents' self‐assessment, we did not objectively measure improvement of residents' teaching skills on the wards. Due to the timing of survey administration, some residents had as little as 1 month between completion of the curriculum and responding to the post‐curriculum survey, limiting their ability to evaluate their teaching skills on the wards. The transferability of skills gained in simulation‐based teaching to teaching on the wards deserves further study. Without a control group, we cannot definitively attribute the perceived improvement of teaching skills to the RaT program. However, the frequent use of recommended debriefing techniques, which are not typically taught in other settings, supports the efficacy of the RaT program.

Our study did not allow us to determine which of the 3 components of the RaT program (workshop, facilitation practice, or direct observation and feedback) had the greatest impact on teaching skills or DASH ratings, as those who facilitated more sessions also completed the other components of the program. Furthermore, there may have been a selection bias among facilitators who facilitated more sessions. Because only 16 of 34 participants completed both the pre‐program and post‐program self‐assessments nonanonymously, we were not able to analyze the effect of pre‐program factors, such as prior teaching experience, on program outcomes. It should also be noted that allowing resident facilitators the option to complete the survey nonanonymously could have biased our results. The simulation curriculum was conducted in a single center, and resident facilitators were self‐selecting; therefore, our results may not be generalizable. Finally, the DASH instrument was only administered after the RaT workshop and was likely limited further by the ceiling effect created by the learners' high satisfaction with the simulation program overall.

In summary, our simulation‐based RaT program improved resident facilitators' self‐reported teaching and feedback skills. Simulation‐based training provided an opportunity for deliberate practice of teaching skills in a controlled environment, which was a unique component of the program. The impact of deliberate practice on resident teaching skills and optimal methods to incorporate deliberate practice in RaT programs deserves further study. Our curriculum design may serve as a model for the development of simulation programs that can be employed to improve both intern learning and resident teaching skills.

Acknowledgements

The authors acknowledge Deborah Navedo, PhD, Assistant Professor, Massachusetts General Hospital Institute of Health Professions, and Emily M. Hayden, MD, Assistant Professor, Department of Emergency Medicine, Massachusetts General Hospital and Harvard Medical School, for their assistance with development of the RaT curriculum. The authors thank Dr. Jenny Rudolph, Senior Director, Institute for Medical Simulation at the Center for Medical Simulation, for her help in teaching us to use the DASH instrument. The authors also thank Dr. Daniel Hunt, MD, Associate Professor, Department of Medicine, Massachusetts General Hospital and Harvard Medical School, for his thoughtful review of this manuscript.

Disclosure: Nothing to report.

References
  1. Liaison Committee on Medical Education. Functions and structure of a medical school: standards for accreditation of medical education programs leading to the M.D. degree. Washington, DC, and Chicago, IL: Association of American Medical Colleges and American Medical Association; 2000.
  2. Accreditation Council for Graduate Medical Education. ACGME program requirements for graduate medical education in pediatrics. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/2013-PR-FAQ-PIF/320_pediatrics_07012013.pdf. Accessed June 18, 2014.
  3. Hill AG, Yu TC, Barrow M, Hattie J. A systematic review of resident‐as‐teacher programmes. Med Educ. 2009;43(12):1129–1140.
  4. Okuda Y, Bryson EO, DeMaria S, et al. The utility of simulation in medical education: what is the evidence? Mt Sinai J Med. 2009;76:330–343.
  5. Okuda Y, Bond WF, Bonfante G, et al. National growth in simulation training within emergency medicine residency programs, 2003–2008. Acad Emerg Med. 2008;15:1113–1116.
  6. Fernandez R, Wang E, Vozenilek JA, et al. Simulation center accreditation and programmatic benchmarks: a review for emergency medicine. Acad Emerg Med. 2010;17(10):1093–1103.
  7. Cook DA. How much evidence does it take? A cumulative meta‐analysis of outcomes of simulation‐based education. Med Educ. 2014;48(8):750–760.
  8. Farrell SE, Pacella C, Egan D, et al. Resident‐as‐teacher: a suggested curriculum for emergency medicine. Acad Emerg Med. 2006;13(6):677–679.
  9. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 suppl):S70–S81.
  10. Miloslavsky EM, Hayden EM, Currier PF, Mathai SK, Contreras‐Valdes F, Gordon JA. Pilot program utilizing medical simulation in clinical decision making training for internal medicine interns. J Grad Med Educ. 2012;4:490–495.
  11. Mathai SK, Miloslavsky EM, Contreras‐Valdes FM, et al. How we implemented a resident‐led medical simulation curriculum in a large internal medicine residency program. Med Teach. 2014;36(4):279–283.
  12. Rudolph JW, Simon R, Dufresne RL, Raemer DB. There's no such thing as "nonjudgmental" debriefing: a theory and method for debriefing with good judgment. Simul Healthc. 2006;1:49–55.
  13. Minehart RD, Rudolph JW, Pian‐Smith MCM, Raemer DB. Improving faculty feedback to resident trainees during a simulated case: a randomized, controlled trial of an educational intervention. Anesthesiology. 2014;120(1):160–171.
  14. Gardner R. Introduction to debriefing. Semin Perinatol. 2013;37(3):166–174.
  15. Howard G, Dailey PR. Response‐shift bias: a source of contamination of self‐report measures. J Appl Psychol. 1979;4:93–106.
  16. Center for Medical Simulation. Debriefing assessment for simulation in healthcare. Available at: http://www.harvardmedsim.org/debriefing‐assesment‐simulation‐healthcare.php. Accessed June 18, 2014.
  17. Tofil NM, Peterson DT, Harrington KF, et al. A novel iterative‐learner simulation model: fellows as teachers. J Grad Med Educ. 2014;6(1):127–132.
  18. Regan‐Smith M, Hirschmann K, Iobst W. Direct observation of faculty with feedback: an effective means of improving patient‐centered and learner‐centered teaching skills. Teach Learn Med. 2007;19(3):278–286.
  19. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self‐assessments. J Pers Soc Psychol. 1999;77:1121–1134.
  20. Morrison EH, Boker JR, Hollingshead J, et al. Reliability and validity of an objective structured teaching examination for generalist resident teachers. Acad Med. 2002;77(10 suppl):S29–S32.
Issue
Journal of Hospital Medicine - 10(12)
Page Number
767-772
Display Headline
A simulation‐based resident‐as‐teacher program: The impact on teachers and learners
Article Source

© 2015 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Eli M. Miloslavsky, MD, Massachusetts General Hospital, 55 Fruit St., Suite 2C, Boston, MA 02114; Telephone: 617‐726‐7938; Fax: 617‐643‐1274; E‐mail: emiloslavsky@mgh.harvard.edu
Knowledge of Selected Medical Procedures

Display Headline
Development of a test to evaluate residents' knowledge of medical procedures

Medical procedures, an essential and highly valued part of medical education, are often undertaught and inconsistently evaluated. Hospitalists play an increasingly important role in developing the skills of resident‐learners. Alumni rate procedure skills as some of the most important skills learned during residency training,1, 2 but frequently identify training in procedural skills as having been insufficient.3, 4 For certification in internal medicine, the American Board of Internal Medicine (ABIM) has identified a limited set of procedures in which it expects all candidates to be cognitively competent with regard to their knowledge of these procedures. Although active participation in procedures is recommended for certification in internal medicine, the demonstration of procedural proficiency is not required.5

Resident competence in performing procedures remains highly variable and procedural complications can be a source of morbidity and mortality.2, 6, 7 A validated tool for the assessment of procedure related knowledge is currently lacking. In existing standardized tests, including the in‐training examination (ITE) and ABIM certification examination, only a fraction of questions pertain to medical procedures. The necessity for a specifically designed, standardized instrument that can objectively measure procedure related knowledge has been highlighted by studies that have demonstrated that there is little correlation between the rate of procedure‐related complications and ABIM/ITE scores.8 A validated tool to assess the knowledge of residents in selected medical procedures could serve to assess the readiness of residents to begin supervised practice and form part of a proficiency assessment.

In this study we aimed to develop a valid and reliable test of procedural knowledge in 3 procedures associated with potentially serious complications.

Methods

Placement of an arterial line, placement of a central venous catheter, and thoracentesis were selected as the foci for test development. Using the National Board of Medical Examiners question‐development guidelines, multiple‐choice questions were developed to test residents on specific points of a prepared curriculum. Questions were designed to test the essential cognitive aspects of medical procedures, including indications, contraindications, and the management of complications, with an emphasis on elements that a panel of experts considered frequently misunderstood. Questions were written by faculty trained in question writing (G.M.) and assessed for clarity by other faculty members. Content evidence for the 36‐item examination (12 questions per procedure) was established by a panel of 4 critical care specialists with expertise in medical education. The study was approved by the Institutional Review Board at all sites.

Item performance characteristics were evaluated by administering the test online to a series of 30 trainees and specialty clinicians. Postadministration interviews with the critical care experts were performed to determine whether test questions were clear and appropriate for residents. Following initial testing, 4 test items with the lowest discrimination according to a point‐biserial correlation (Integrity; Castle Rock Research, Canada) were deleted from the test. The resulting 32‐item test contained items of varying difficulty to allow for effective discrimination between examinees (Appendix 1).

The test was then administered to residents beginning rotations in either the medical intensive care unit or in the coronary care unit at 4 medical centers in Massachusetts (Brigham and Women's Hospital; Massachusetts General Hospital; Faulkner Hospital; and North Shore Medical Center). In addition to completing the on‐line, self‐administered examination, participants provided baseline data including year of residency training, anticipated career path, and the number of prior procedures performed. On a 5‐point Likert scale participants estimated their self‐perceived confidence at performing the procedure (with and without supervision) and supervising each of the procedures. Residents were invited to complete a second test before the end of their rotation (2‐4 weeks after the initial test) in order to assess test‐retest reliability. Answers were made available only after the conclusion of the study.

Reliability of the 32‐item instrument was measured by Cronbach's α; a value of 0.6 is considered adequate, and values of 0.7 or higher indicate good reliability. Pearson's correlation coefficient (r) was used to compute test‐retest reliability. Univariate analyses were used to assess the association of demographic variables with test scores. Test scores were compared between groups using the t test/Wilcoxon rank sum test (2 groups) and analysis of variance (ANOVA)/Kruskal‐Wallis test (3 or more groups). The associations of the number of prior procedures attempted and of self‐reported confidence with test scores were explored using Spearman's correlation. Inferences were made at the 0.05 level of significance, using 2‐tailed tests. Statistical analyses were performed using SPSS 15.0 (SPSS, Inc., Chicago, IL).
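The internal‐consistency calculation behind Cronbach's α can be sketched in a few lines of Python. The item‐response matrix below is a toy example, not the study's data, and the function name is illustrative; sample variances are used throughout.

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a matrix of examinee-by-item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    where k is the number of items.
    """
    k = len(item_scores[0])                      # number of items
    items = list(zip(*item_scores))              # one column per item
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Toy data: 4 examinees x 3 dichotomously scored items (not study data)
responses = [
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
]
print(cronbach_alpha(responses))  # ~0.6 for this toy matrix
```

For this toy matrix α is approximately 0.6, the threshold described above as adequate; the study instrument's α of 0.79 indicates good reliability.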

Results

Of the 192 internal medicine residents who consented to participate in the study between February and June 2006, 188 completed the initial and repeat test. Subject characteristics are detailed in Table 1.

Table 1. Subject Characteristics

Characteristic | Number (%)
Total residents | 192
Males | 113 (59)
Year of residency training
  First | 101 (52)
  Second | 64 (33)
  Third/fourth | 27 (14)
Anticipated career path
  General medicine/primary care | 26 (14)
  Critical care | 47 (24)
  Medical subspecialties | 54 (28)
  Undecided/other | 65 (34)

Reliability of the 32‐item instrument measured by Cronbach's α was 0.79, and its test‐retest reliability was 0.82. The mean item difficulty was 0.52, with a mean corrected point‐biserial correlation of 0.26. The test was of high difficulty, with a mean overall score of 50% (median 53%; interquartile range, 44%–59%). Baseline scores differed significantly by residency program (P = 0.03). Residents with anticipated careers in critical care had significantly higher scores than those with anticipated careers in primary care (median scores: critical care 56%; primary care and other nonprocedural medical subspecialties 50%; P = 0.01).

Residents in their final year reported performing a median of 13 arterial lines, 14 central venous lines, and 3 thoracenteses over the course of their residency training (Table 2). Increase in the number of performed procedures (central lines, arterial lines, and thoracenteses) was associated with an increase in test score (Spearman's correlation coefficient 0.35, P < 0.001). Residents in the highest and lowest decile of procedures performed had median scores of 56% and 43%, respectively (P < 0.001). Increasing seniority in residency was associated with an increase in overall test scores (median score by program year 49%, 54%, 50%, and 64%, P = 0.02).

Table 2. Number of Procedures Performed by Year of Internal Medicine Residency Training

Year of Residency Training | Arterial Line Insertion | Central Venous Line Insertion | Thoracentesis
First | 1 (0–3) | 1 (0–4) | 0 (0–1)
Second | 8.5 (6–18) | 10 (5–18) | 2 (0–4)
Third/fourth | 13 (8–20) | 14 (10–27) | 3 (2–6)

NOTE: Values are median number of procedures (interquartile range).

Increase in self‐reported confidence was significantly associated with an increase in the number of performed procedures (Spearman's correlation coefficients for central line 0.83, arterial lines 0.76, and thoracentesis 0.78, all P < 0.001) and increasing seniority (0.66, 0.59, and 0.52, respectively, all P < 0.001).

Discussion

The determination of procedural competence has long been a challenge for trainers and internal medicine programs; methods for measuring procedural skills have not been rigorously studied. Procedural competence requires a combination of theoretical knowledge and practical skill. However, given the declining number of procedures performed by internists,4 the new ABIM guidelines mandate cognitive competence in contrast to the demonstration of hands‐on procedural proficiency.

We therefore sought to develop and validate the results of an examination of the theoretical knowledge necessary to perform 3 procedures associated with potentially serious complications. Following establishment of content evidence, item performance characteristics and postadministration interviews were used to develop a 32‐item test. We confirmed the test's internal structure by assessment of reliability and assessed the association of test scores with other variables for which correlation would be expected.

We found that residents performed poorly on test content considered important by procedure specialists. These findings highlight the limitations of current procedure training, which is frequently sporadic and variable. The numbers of procedures reported over the duration of residency by residents at these centers were low. It is unclear whether the low number of procedures performed was due to limitations in resident content knowledge or whether it reflects the increasing use of interventional services, with fewer opportunities for experiential learning. Nevertheless, an increasing number of prior procedures was associated with higher self‐reported confidence for all procedures and translated into higher test scores.

This study was limited to 4 teaching hospitals and further studies may be needed to investigate the wider generalizability of the study instrument. However, participants were from 3 distinct internal medicine residency programs that included both community and university hospitals. We relied on resident self‐reports and did not independently verify the number of prior procedures performed. However, similar assumptions have been made in prior studies that physicians who rarely perform procedures are able to provide reasonable estimates of the total number performed.3

The reliability of the 32‐item test (Cronbach's = 0.79) is in the expected range for this length of test and indicates good reliability.9, 10 Given the potential complications associated with advanced medical procedures, there is increasing need to establish criteria for competence. Although we have not established a score threshold, the development of this validated tool to assess procedural knowledge is an important step toward establishing such a goal.

This test may facilitate efforts by hospitalists and others to evaluate the efficacy and refine existing methods of procedure training. Feedback to educators using this assessment tool may assist in the improvement of teaching strategies. In addition, the assessment of cognitive competence in procedure‐related knowledge using a rigorous and reliable means of assessment such as outlined in this study may help identify residents who need further training. Recognition for the necessity for additional training and oversight are likely to be especially important if residents are expected to perform procedures safely yet have fewer opportunities for practice.

Acknowledgements

The authors thank Dr. Stephen Wright, Haley Hamlin, and Matt Johnston for their contributions to the data collection and analysis.

Issue
Journal of Hospital Medicine - 4(7)
Page Number
430-432
Legacy Keywords
medical procedures, residency training, test development

Medical procedures, an essential and highly valued part of medical education, are often undertaught and inconsistently evaluated. Hospitalists play an increasingly important role in developing the procedural skills of resident-learners. Alumni rate procedural skills among the most important skills learned during residency training,1, 2 but frequently identify their training in these skills as insufficient.3, 4 For certification in internal medicine, the American Board of Internal Medicine (ABIM) has identified a limited set of procedures in which it expects all candidates to be cognitively competent. Although active participation in procedures is recommended for certification, demonstration of hands-on procedural proficiency is not required.5

Resident competence in performing procedures remains highly variable, and procedural complications can be a source of morbidity and mortality.2, 6, 7 A validated tool for the assessment of procedure-related knowledge is currently lacking. In existing standardized tests, including the in-training examination (ITE) and the ABIM certification examination, only a fraction of questions pertain to medical procedures. The need for a specifically designed, standardized instrument that objectively measures procedure-related knowledge is underscored by studies demonstrating little correlation between the rate of procedure-related complications and ABIM/ITE scores.8 A validated test of residents' knowledge of selected medical procedures could help assess their readiness to begin supervised practice and could form part of a broader proficiency assessment.

In this study we aimed to develop a valid and reliable test of procedural knowledge in 3 procedures associated with potentially serious complications.

Methods

Placement of an arterial line, placement of a central venous catheter, and thoracentesis were selected as the focus for test development. Using the National Board of Medical Examiners question-development guidelines, multiple-choice questions were written to test residents on specific points of a prepared curriculum. Questions were designed to test the essential cognitive aspects of medical procedures, including indications, contraindications, and the management of complications, with an emphasis on elements that a panel of experts considered frequently misunderstood. Questions were written by faculty trained in question writing (G.M.) and assessed for clarity by other faculty members. Content evidence for the 36-item examination (12 questions per procedure) was established by a panel of 4 critical care specialists with expertise in medical education. The study was approved by the Institutional Review Board at all sites.

Item performance characteristics were evaluated by administering the test online to 30 trainees and specialty clinicians. Postadministration interviews with the critical care experts were performed to determine whether test questions were clear and appropriate for residents. Following this initial testing, the 4 items with the lowest discrimination, according to a point-biserial correlation (Integrity; Castle Rock Research, Canada), were deleted. The resulting 32-item test contained items of varying difficulty to allow for effective discrimination between examinees (Appendix 1).
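The item-screening statistic used here can be illustrated with a short sketch. The data and function below are hypothetical (the study used commercial software, Integrity); this is only a minimal illustration of how a point-biserial correlation flags items that fail to separate strong from weak examinees.

```python
from math import sqrt
from statistics import mean, pstdev

def point_biserial(item, totals):
    """Point-biserial correlation between one 0/1 item and total scores.

    Positive values mean examinees who answer the item correctly tend
    to score higher overall, i.e., the item discriminates well.
    """
    correct = [t for r, t in zip(item, totals) if r == 1]
    wrong = [t for r, t in zip(item, totals) if r == 0]
    p = len(correct) / len(item)          # proportion answering correctly
    s = pstdev(totals)                    # spread of total scores
    return (mean(correct) - mean(wrong)) / s * sqrt(p * (1 - p))

# Hypothetical responses from 6 examinees (not study data).
totals = [1, 2, 3, 4, 5, 6]               # total test scores
item_a = [0, 0, 0, 1, 1, 1]               # answered correctly by top scorers
item_b = [1, 0, 1, 0, 1, 0]               # unrelated to overall score

r_a = point_biserial(item_a, totals)      # high discrimination (~0.88)
r_b = point_biserial(item_b, totals)      # poor discrimination (negative)
```

Ranking all items by this statistic and deleting the lowest few mirrors the screening step described above; the "corrected" variant reported in the Results excludes the item itself from the total score before correlating.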

The test was then administered to residents beginning rotations in either the medical intensive care unit or the coronary care unit at 4 medical centers in Massachusetts (Brigham and Women's Hospital, Massachusetts General Hospital, Faulkner Hospital, and North Shore Medical Center). In addition to completing the online, self-administered examination, participants provided baseline data including year of residency training, anticipated career path, and the number of prior procedures performed. On a 5-point Likert scale, participants rated their confidence in performing each procedure (with and without supervision) and in supervising each procedure. Residents were invited to complete a second test before the end of their rotation (2-4 weeks after the initial test) to assess test-retest reliability. Answers were made available only after the conclusion of the study.

Reliability of the 32-item instrument was measured by Cronbach's α; a value of 0.6 is considered adequate, and values of 0.7 or higher indicate good reliability. Pearson's correlation (Pearson's r) was used to compute test-retest reliability. Univariate analyses were used to assess the association of demographic variables with test scores. Test scores were compared between groups using the t test or Wilcoxon rank sum test (2 groups) and analysis of variance (ANOVA) or the Kruskal-Wallis test (3 or more groups). The associations of the number of prior procedures attempted and of self-reported confidence with test scores were explored using Spearman's correlation. Inferences were made at the 0.05 level of significance using 2-tailed tests. Statistical analyses were performed using SPSS 15.0 (SPSS, Inc., Chicago, IL).
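Cronbach's α has a simple closed form: α = k/(k−1) · (1 − Σ item variances / variance of total scores). The study computed it in SPSS; the sketch below, with made-up responses, shows the same computation from first principles.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (one list per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]    # per-examinee totals
    item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 0/1 responses for 2 items across 4 examinees.
# Perfectly consistent items yield alpha = 1; unrelated items push it to 0.
consistent = [[0, 1, 0, 1], [0, 1, 0, 1]]
unrelated = [[1, 1, 0, 0], [1, 0, 1, 0]]
print(cronbach_alpha(consistent))   # 1.0
print(cronbach_alpha(unrelated))    # 0.0
```

Intuitively, when items move together, the variance of the totals dwarfs the summed item variances and α approaches 1, which is why α is read as internal consistency.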

Results

Of the 192 internal medicine residents who consented to participate in the study between February and June 2006, 188 completed the initial and repeat test. Subject characteristics are detailed in Table 1.

Subject Characteristics
                                   Number (%)
Total residents                    192
Males                              113 (59)
Year of residency training
  First                            101 (52)
  Second                           64 (33)
  Third/fourth                     27 (14)
Anticipated career path
  General medicine/primary care    26 (14)
  Critical care                    47 (24)
  Medical subspecialties           54 (28)
  Undecided/other                  65 (34)

Reliability of the 32-item instrument, measured by Cronbach's α, was 0.79, and its test-retest reliability was 0.82. The mean item difficulty was 0.52, with a mean corrected point-biserial correlation of 0.26. The test was of high difficulty, with a mean overall score of 50% (median 53%, interquartile range 44%-59%). Baseline scores differed significantly by residency program (P = 0.03). Residents with anticipated careers in critical care had significantly higher scores than those with anticipated careers in primary care (median scores: critical care 56%; primary care and other nonprocedural medical subspecialties 50%; P = 0.01).

Residents in their final year reported performing a median of 13 arterial lines, 14 central venous lines, and 3 thoracenteses over the course of their residency training (Table 2). An increase in the number of performed procedures (central lines, arterial lines, and thoracenteses) was associated with an increase in test score (Spearman's correlation coefficient 0.35, P < 0.001). Residents in the highest and lowest deciles of procedures performed had median scores of 56% and 43%, respectively (P < 0.001). Increasing seniority was associated with higher overall test scores (median score by program year: 49%, 54%, 50%, and 64%; P = 0.02).

Number of Procedures Performed by Year of Internal Medicine Residency Training
                              Median Number of Procedures (Interquartile Range)
Year of Residency Training    Arterial Line Insertion    Central Venous Line Insertion    Thoracentesis
First                         1 (0-3)                    1 (0-4)                          0 (0-1)
Second                        8.5 (6-18)                 10 (5-18)                        2 (0-4)
Third/fourth                  13 (8-20)                  14 (10-27)                       3 (2-6)

An increase in self-reported confidence was significantly associated with an increase in the number of procedures performed (Spearman's correlation coefficients: central lines 0.83, arterial lines 0.76, thoracentesis 0.78; all P < 0.001) and with increasing seniority (0.66, 0.59, and 0.52, respectively; all P < 0.001).

Discussion

The determination of procedural competence has long been a challenge for trainers and internal medicine programs, and methods for measuring procedural skills have not been rigorously studied. Procedural competence requires a combination of theoretical knowledge and practical skill. However, given the declining number of procedures performed by internists,4 the new ABIM guidelines mandate cognitive competence rather than demonstration of hands-on procedural proficiency.

We therefore sought to develop, and validate the results of, an examination of the theoretical knowledge necessary to perform 3 procedures associated with potentially serious complications. After establishing content evidence, we used item performance characteristics and postadministration interviews to develop a 32-item test. We confirmed the test's internal structure by assessing reliability, and we examined the association of test scores with other variables with which correlation would be expected.

We found that residents performed poorly on test content considered important by procedure specialists. These findings highlight the limitations of current procedure training, which is frequently sporadic and variable. The numbers of procedures reported over the duration of residency by residents at these centers were low. It is unclear whether the low number of procedures performed was due to limitations in resident content knowledge or reflects the increasing use of interventional services, with fewer opportunities for experiential learning. Nevertheless, an increasing number of prior procedures was associated with higher self-reported confidence for all procedures and translated to higher test scores.

This study was limited to 4 teaching hospitals, and further studies may be needed to investigate the wider generalizability of the study instrument. However, participants came from 3 distinct internal medicine residency programs that included both community and university hospitals. We relied on resident self-reports and did not independently verify the number of prior procedures performed. However, prior studies have made the similar assumption that physicians who rarely perform procedures can provide reasonable estimates of the total number performed.3

The reliability of the 32-item test (Cronbach's α = 0.79) is in the expected range for a test of this length and indicates good reliability.9, 10 Given the potential complications associated with advanced medical procedures, there is an increasing need to establish criteria for competence. Although we have not established a score threshold, the development of this validated tool to assess procedural knowledge is an important step toward that goal.

This test may facilitate efforts by hospitalists and others to evaluate the efficacy of, and to refine, existing methods of procedure training. Feedback to educators using this assessment tool may help improve teaching strategies. In addition, assessing cognitive competence in procedure-related knowledge with a rigorous and reliable instrument such as the one outlined in this study may help identify residents who need further training. Recognition of the need for additional training and oversight is likely to be especially important if residents are expected to perform procedures safely yet have fewer opportunities for practice.

Acknowledgements

The authors thank Dr. Stephen Wright, Haley Hamlin, and Matt Johnston for their contributions to the data collection and analysis.


References
  1. Nelson RL, McCaffrey LA, Nobrega FT, et al. Altering residency curriculum in response to a changing practice environment: use of the Mayo internal medicine residency alumni survey. Mayo Clin Proc. 1990;65(6):809-817.
  2. Mandel JH, Rich EC, Luxenberg MG, Spilane MT, Kern DC, Parrino TA. Preparation for practice in internal medicine. A study of ten years of residency graduates. Arch Intern Med. 1988;148(4):853-856.
  3. Hicks CM, Gonzalez R, Morton MT, Gibbon RV, Wigton RS, Anderson RJ. Procedural experience and comfort level in internal medicine trainees. J Gen Intern Med. 2000;15(10):716-722.
  4. Wigton RS. Training internists in procedural skills. Ann Intern Med. 1992;116(12 Pt 2):1091-1093.
  5. ABIM. Policies and Procedures for Certification in Internal Medicine. 2008. Available at: http://www.abim.org/certification/policies/imss/im.aspx. Accessed August 2009.
  6. Wickstrom GC, Kolar MM, Keyserling TC, et al. Confidence of graduating internal medicine residents to perform ambulatory procedures. J Gen Intern Med. 2000;15(6):361-365.
  7. Kern DC, Parrino TA, Korst DR. The lasting value of clinical skills. JAMA. 1985;254(1):70-76.
  8. Durning SJ, Cation LJ, Jackson JL. Are commonly used resident measurements associated with procedural skills in internal medicine residency training? J Gen Intern Med. 2007;22(3):357-361.
  9. Nunnally JC. Psychometric Theory. New York: McGraw Hill; 1978.
  10. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297-334.
Display Headline
Development of a test to evaluate residents' knowledge of medical procedures
Article Source
Copyright © 2009 Society of Hospital Medicine
Correspondence Location
Division of Endocrinology, Diabetes and Hypertension, Brigham and Women's Hospital, 221 Longwood Avenue, Boston, MA 02114