Jeanne M. Farnan, MD, MHPE
Department of Medicine, University of Chicago Medicine
jfarnan@medicine.bsd.uchicago.edu

Use of simulation to assess incoming interns’ recognition of opportunities to choose wisely

In recent years, the American Board of Internal Medicine (ABIM) Foundation’s Choosing Wisely™ campaign has advanced the dialogue on cost-consciousness by identifying potential examples of overuse in clinical practice.1 Eliminating low-value care can decrease costs, improve quality, and potentially decrease patient harm.2 In fact, there is growing consensus among health leaders and educators on the need for a physician workforce that is conscious of high-value care.3,4 The Institute of Medicine has issued a call to action for graduate medical education (GME) to emphasize value-based care,5 and the Accreditation Council for Graduate Medical Education has outlined expectations that residents receive formal and experiential training on overuse as part of its Clinical Learning Environment Review.6

However, recent reports highlight a lack of emphasis on value-based care in medical education.7 For example, few residency program directors believe that residents are prepared to incorporate value and cost into their medical decisions.8 In 2012, only 15% of medicine residencies reported having formal curricula addressing value, although many were developing one.8 Of the curricula reported, most were didactic in nature and did not include an assessment component.8

Experiential learning through simulation is one promising method to teach clinicians-in-training to practice value-based care. Simulation-based training promotes situational awareness (defined as being cognizant of one’s working environment), a concept that is crucial for recognizing both low-value and unsafe care.9,10 Simulated training exercises are often included in GME orientation “boot-camps,” which have typically addressed safety.11 The incorporation of value into existing GME boot-camp exercises could provide a promising model for the addition of value-based training to GME.

At the University of Chicago, we had successfully implemented the “Room of Horrors,” a simulation for entering interns to promote the detection of patient safety hazards.11 Here, we describe a modification to this simulation that embeds low-value hazards alongside traditional patient safety hazards. The aim of this study was to assess entering interns’ recognition of both low-value and unsafe care in a simulation designed to promote situational awareness.

METHODS

Setting and Participants

The simulation was conducted during GME orientation at a large, urban academic medical institution. One hundred twenty-five entering postgraduate year 1 (PGY1) interns participated in the simulation, which was a required component of a multiday orientation “boot-camp” experience. All eligible interns participated in the simulation, representing 13 specialty programs and 60 medical schools. Interns entering pathology were excluded because of infrequent patient contact. Participating interns were divided into 7 specialty groups for analysis in order to preserve the anonymity of interns in smaller residency programs (surgical subspecialties were combined with general surgery, and medicine-pediatrics was grouped with internal medicine). The University of Chicago Institutional Review Board deemed this study exempt from review.

Program Description

A simulation of an inpatient hospital room, known as the “Room of Horrors,” was constructed in collaboration with the University of Chicago Simulation Center and adapted from a previous version of the exercise.11 The clinical scenario was built around a patient mannequin and a mock door chart indicating that the patient had been admitted for diarrhea (Clostridium difficile positive) following a recent hospitalization for pneumonia; the chart also listed information on the patient’s hospital course, allergies, and medications. In addition to the 8 patient safety hazards utilized in the prior version, our team selected 4 low-value hazards to include in the simulation.

Table 1. Safety and Low-Value Hazards Simulated in the “Room of Horrors”

The 8 safety hazards have been detailed in a prior study and were previously selected from Medicare’s Hospital-Acquired Conditions (HAC) Reduction Program and the Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicators.11-13 Each hazard was represented physically in the simulation room, indicated on the patient’s chart, or both. For example, the latex allergy hazard was represented by latex gloves at the bedside despite an allergy indicated on the patient’s chart and wristband. A complete list of the 8 safety hazards and their representations in the simulation is shown in Table 1.

The Choosing Wisely™ lists were reviewed to identify low-value hazards for addition to the simulation.14 Our team selected 3 low-value hazards from the Society of Hospital Medicine (SHM) list,15 including (1) arbitrary blood transfusion despite the patient’s stable hemoglobin level of 8.0 g/dL and absence of cardiac symptoms,16 (2) addition of a proton pump inhibitor (PPI) for stress ulcer prophylaxis in a patient without high risk for gastrointestinal (GI) complications who was not on a PPI prior to admission, and (3) placement of a urinary catheter without medical indication. We had originally selected continuous telemetry monitoring as a fourth hazard from the SHM list but were unable to operationalize it, as simulating continuous telemetry on a mannequin proved difficult. Because many inpatients are older than 65 years, we reviewed the American Geriatrics Society list17 and selected our fourth low-value hazard: (4) unnecessary use of physical restraints to manage behavioral symptoms in a hospitalized patient with delirium. Several of these hazards were also quality and safety priorities at our institution, including the overuse of urinary catheters, physical restraints, and blood transfusions. All 4 low-value hazards were referenced in the patient’s door chart, and 3 were also physically represented in the room via a hanging unit of blood, a Foley catheter, and upper-arm restraints (Table 1). See the Appendix for a photograph of the simulation setup.

Each intern was allowed 10 minutes inside the simulation room. During this time, they were instructed to read the 1-page door chart, inspect the simulation room, and write down as many potential low-value and safety hazards as they could identify on a free-response form (see Appendix). Upon exiting the room, they were allotted 5 additional minutes to complete their free-response answers and provide written feedback on the simulation. The simulation was conducted in 3 simulated hospital rooms over the course of 2 days, and the correct answers were provided via e-mail after all interns had completed the exercise.

To assess prior training and safety knowledge, interns were asked to complete a 3-question preassessment on a Scantron™ (Tustin, CA) form. The preassessment asked interns whether they had received training on hospital safety during medical school (yes, no, or unsure), whether they were satisfied with the hospital safety training they received during medical school (strongly disagree to strongly agree on a Likert scale), and whether they were confident in their ability to identify potential hazards in a hospital setting (strongly disagree to strongly agree). Interns were also given the opportunity to provide feedback on the simulation experience on the same form.

One month after participating in the simulation, interns were asked to complete an online follow-up survey on MedHub™ (Ann Arbor, MI), which included 2 Likert-scale questions (strongly disagree to strongly agree) assessing the simulation’s impact on their experience mitigating hospital hazards during the first month of internship.

Data Analysis

Interns’ free-response answers were manually coded, and descriptive statistics were used to summarize the mean percent correct for each hazard. A paired t test was used to compare intern identification of low-value vs safety hazards. Unpaired t tests were used to compare hazard identification between interns entering highly procedural-intensive specialties (ie, surgical specialties, emergency medicine, anesthesia, obstetrics/gynecology) and those entering less procedural-intensive specialties (ie, internal medicine, pediatrics, psychiatry), as well as for graduates of “Top 30” medical schools (based on US News & World Report Medical School Rankings18) and graduates of our own institution. One-way analysis of variance (ANOVA) was used to test for differences in hazard identification based on interns’ prior hospital safety training, with interns who rated their satisfaction with prior training or confidence in identifying hazards as a “4” or a “5” considered “satisfied” and “confident,” respectively. Responses to the MedHub™ survey were dichotomized, with “strongly agree” and “agree” considered positive responses. Statistical significance was defined as P < .05. All data analysis was conducted using Stata 14 (StataCorp, College Station, TX).
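
For illustration, the comparisons described above can be expressed in a few lines of Python with SciPy; the analysis itself was run in Stata 14, and the sketch below uses simulated stand-in data, so all variable names and values are hypothetical.

```python
# Minimal sketch of the statistical comparisons described above.
# Not the study's actual Stata code; the data below are simulated stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 125  # number of participating interns

# Hypothetical per-intern identification rates (fraction of hazards found).
low_value = rng.uniform(0.0, 0.5, n)  # share of the 4 low-value hazards
safety = rng.uniform(0.3, 1.0, n)     # share of the 8 safety hazards
overall = (4 * low_value + 8 * safety) / 12

# Paired t test: low-value vs safety identification within each intern.
t_paired, p_paired = stats.ttest_rel(low_value, safety)

# Unpaired t test: highly vs less procedural-intensive specialties.
procedural = rng.random(n) < 0.4      # hypothetical group labels
t_ind, p_ind = stats.ttest_ind(overall[procedural], overall[~procedural])

# One-way ANOVA across the four prior-training groups
# (satisfied / not satisfied / no prior training / unsure).
group = rng.integers(0, 4, n)
f_stat, p_anova = stats.f_oneway(*(overall[group == g] for g in range(4)))

print(f"paired t = {t_paired:.2f} (P = {p_paired:.3f})")
print(f"unpaired t = {t_ind:.2f} (P = {p_ind:.3f})")
print(f"ANOVA F = {f_stat:.2f} (P = {p_anova:.3f})")
```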

RESULTS

Intern Characteristics

Table 2. Characteristics of Interns Participating in the “Room of Horrors” Simulation

One hundred twenty-five entering PGY1 interns participated in the simulation, representing 60 medical schools and 7 different specialty groups (Table 2). Thirty-five percent (44/125) were graduates of “Top 30” medical schools, and 8.8% (11/125) graduated from our own institution. Seventy-four percent (89/121) had received prior hospital safety training during medical school, and 62.9% (56/89) were satisfied with their training. A majority of interns (64.2%, 79/123) felt confident in their ability to identify potential hazards in a hospital setting; confidence was higher among those with prior safety training (71.9%, 64/89) than among those without prior training or unsure about their training (40.6%, 13/32), although this difference did not reach statistical significance (P = .09, t test).

Figure 1. Distribution of interns’ performance in the “Room of Horrors” simulation, based on the percentage of hazards correctly identified (N = 125).

Identification of Hazards

The mean percentage of hazards correctly identified by interns during the simulation was 50.4% (standard deviation [SD] 11.8%), with an approximately normal distribution (Figure 1). Interns identified a significantly lower percentage of low-value hazards than safety hazards in the simulation (mean 19.2% [SD 18.6%] vs 66.0% [SD 16.0%], respectively; P < .001, paired t test). Interns also identified significantly more room-based errors than chart-based errors (mean 58.6% [SD 13.4%] vs 9.6% [SD 19.8%], respectively; P < .001, paired t test). The 3 most commonly identified hazards were unavailability of hand hygiene (120/125, 96.0%), presence of latex gloves despite the patient’s allergy (111/125, 88.8%), and fall risk due to the lowered bed rail (107/125, 85.6%). More than half of interns identified the incorrect name on the patient’s wristband and IV bag (91/125, 72.8%), a lack of isolation precautions (90/125, 72.0%), administration of penicillin despite the patient’s allergy (67/125, 53.6%), and unnecessary restraints (64/125, 51.2%). Fewer than half of interns identified the wrong medication being administered (50/125, 40.0%), the unnecessary Foley catheter (25/125, 20.0%), and the absence of venous thromboembolism (VTE) prophylaxis (24/125, 19.2%). Few interns identified the unnecessary blood transfusion (7/125, 5.6%), and no one identified the unnecessary stress ulcer prophylaxis (0/125, 0.0%; Figure 2).

Figure 2. Percentage of interns who correctly identified each hazard, with low-value hazards indicated by an asterisk (*) (N = 125).

Predictors of Hazard Identification

Interns who self-reported as confident in their ability to identify hazards were no more likely to correctly identify hazards than those who were not confident (50.9% vs 49.6% overall hazard identification, respectively; P = .56, t test). Interns entering less procedural-intensive specialties identified significantly more safety hazards than those entering highly procedural-intensive specialties (mean 69.1% [SD 16.9%] vs 61.8% [SD 13.7%], respectively; P = .01, t test). However, there was no statistically significant difference in their identification of low-value hazards (mean 19.8% [SD 18.3%] for less procedural-intensive vs 18.4% [SD 19.1%] for highly procedural-intensive; P = .68, t test). There was no statistically significant difference in hazard identification among graduates of “Top 30” medical schools or graduates of our own institution. Prior hospital safety training had no significant impact on interns’ ability to identify safety or low-value hazards: interns who were satisfied with their prior training identified a mean of 51.8% of hazards present (SD 11.8%), interns who were not satisfied identified 51.5% (SD 12.7%), interns with no prior training identified 48.7% (SD 11.7%), and interns who were unsure about their prior training identified 47.4% (SD 11.5%) (F(3,117) = 0.79; P = .51, ANOVA). There was also no significant association between prior training and identification of any one of the 12 specific hazards (chi-square tests, all P values > .1).

Intern Feedback and Follow-Up Survey

Debriefing revealed that most interns passively assumed the patient’s chart was correct and did not think they should question the patient’s current care regimen. For example, many interns commented that they did not think to question whether the patient’s blood transfusion was necessary, even though they were aware of the recommended hemoglobin cutoffs for stable patients.

Interns also provided formal feedback on the simulation through open-ended comments on their Scantron™ form. For example, one intern wrote that they would “inherently approach every patient room ‘looking’ for safety issues, probably directly because of this exercise.” Another commented that the simulation was “more difficult than I expected, but very necessary to facilitate discussion and learning.” A third wrote, “I wish I had done this earlier in my career.”

Ninety-six percent of participating interns (120/125) completed an online follow-up survey 1 month after beginning internship. In the survey, 68.9% (82/119) of interns indicated they were more aware of how to identify potential hazards facing hospitalized patients as a result of the simulation. Furthermore, 52.1% (62/119) of interns had taken action during internship to reduce a potential hazard that was present in the simulation.

DISCUSSION

While many GME orientations include simulation and safety training, this study is the first of its kind to incorporate low-value care from Choosing Wisely™ recommendations into simulated training. It is concerning that interns identified significantly fewer low-value hazards than safety hazards in the simulation; in some cases, no intern identified a given low-value hazard. For example, while almost all interns identified the hand hygiene hazard, not one identified the unnecessary stress ulcer prophylaxis. Furthermore, interns who self-reported as confident in their ability to identify hazards did not perform any better in the simulation, and interns entering less procedural-intensive specialties identified more safety hazards overall.

The simulation was well received by interns. Many commented that the experience was engaging, challenging, and effective in cultivating situational awareness of low-value care. Our follow-up survey demonstrated that the majority of interns reported taking action during their first month of internship to reduce a hazard included in the simulation. Most interns also reported a greater awareness of how to identify hospital hazards as a result of the simulation. These findings suggest that a brief simulation-based experience has the potential to produce lasting situational awareness and behavior change.

It is worth exploring why interns identified significantly fewer low-value hazards than safety hazards in the simulation. One hypothesis is that interns were less attuned to low-value hazards, which may reflect a lack of emphasis on value-based care in undergraduate medical education (UME). It is especially concerning that so few interns identified the catheter-associated urinary tract infection (CAUTI) risk, as interns are primarily responsible for recognizing and removing an unnecessary catheter. Although the risks of low-value care should be apparent to most trainees, the process of recognizing and deliberately stopping or avoiding low-value care can be challenging for young clinicians.19 To promote value-based thinking among entering residents, UME programs should teach students to question the utility of the interventions their patients are receiving. One promising framework for doing so is the Subjective, Objective, Assessment, Plan-Value (SOAP-V) note, in which a V for “Value” is added to the traditional SOAP note.20 SOAP-V notes serve as a cognitive forcing function, requiring students to pause and assess the value and cost of their patients’ care.20

The results from the “Room of Horrors” simulation can also guide health leaders and educators in identifying institutional areas of focus for providing high-value and safe care. For example, at the University of Chicago we launched an initiative to reduce the inappropriate use of urinary catheters after learning that few of our incoming interns recognized this hazard during the simulation. Institutions could use this model to raise awareness of initiatives and redirect resources from areas in which trainees perform well (eg, hand hygiene) to areas that need improvement (eg, recognition of low-value care). Given the simulation’s low cost and minimal material requirements, it could be easily integrated into existing training programs with the support of an institution’s simulation center.

This study’s limitations include its conduct at a single institution, although participants represented graduates of 60 different institutions. Furthermore, while the 12 hazards included in the simulation represent patient safety and value initiatives from a wide array of medical societies, they were not intended to be comprehensive and were not tailored to specific specialties. The simulation included only 4 low-value hazards, and future iterations of this exercise should aim to include an equal number of safety and low-value hazards. In addition, the evaluation of interns’ prior hospital safety training relied on self-report, and the specific context and content of each intern’s training was not examined. Finally, at this point we are unable to provide objective longitudinal data assessing the simulation’s impact on clinical practice and patient outcomes. Subsequent work will assess the sustained impact of the simulation by correlating it with institutional data on measurable occurrences of low-value care.

In conclusion, interns identified significantly fewer low-value hazards than safety hazards in an inpatient simulation designed to promote situational awareness. Our results suggest that interns are attuned to errors of omission (eg, absence of hand hygiene, absence of isolation precautions) but are often blind to errors of commission: when patients have been started on therapies, interns tend to assume those therapies are correct and necessary (eg, blood transfusions, stress ulcer prophylaxis). These findings suggest poor awareness of low-value care among incoming interns and highlight the need for additional training in both UME and GME to place a greater emphasis on preventing low-value care.

Disclosure

Dr. Arora is a member of the American Board of Internal Medicine Board of Directors and has received grant funding from the ABIM Foundation via Costs of Care for the Teaching Value Choosing Wisely™ Challenge. Dr. Farnan, Dr. Arora, and Ms. Hirsch receive grant funds from the Accreditation Council for Graduate Medical Education as part of the Pursuing Excellence Initiative. Dr. Arora and Dr. Farnan also receive grant funds from the American Medical Association Accelerating Change in Medical Education initiative. Kathleen Wiest and Lukas Matern were funded through matching funds from the Pritzker Summer Research Program under NIA grant T35AG029795.

References

1. Colla CH, Morden NE, Sequist TD, Schpero WL, Rosenthal MB. Choosing wisely: prevalence and correlates of low-value health care services in the United States. J Gen Intern Med. 2015;30(2):221-228. doi:10.1007/s11606-014-3070-z.
2. Elshaug AG, McWilliams JM, Landon BE. The value of low-value lists. JAMA. 2013;309(8):775-776. doi:10.1001/jama.2013.828.
3. Cooke M. Cost consciousness in patient care--what is medical education’s responsibility? N Engl J Med. 2010;362(14):1253-1255. doi:10.1056/NEJMp0911502.
4. Weinberger SE. Providing high-value, cost-conscious care: a critical seventh general competency for physicians. Ann Intern Med. 2011;155(6):386-388. doi:10.7326/0003-4819-155-6-201109200-00007.
5. Institute of Medicine. Graduate Medical Education That Meets the Nation’s Health Needs. http://www.nationalacademies.org/hmd/Reports/2014/Graduate-Medical-Education-That-Meets-the-Nations-Health-Needs.aspx. Accessed May 25, 2016.
6. Accreditation Council for Graduate Medical Education. CLER Pathways to Excellence. https://www.acgme.org/acgmeweb/Portals/0/PDFs/CLER/CLER_Brochure.pdf. Accessed July 15, 2015.
7. Varkey P, Murad MH, Braun C, Grall KJH, Saoji V. A review of cost-effectiveness, cost-containment and economics curricula in graduate medical education. J Eval Clin Pract. 2010;16(6):1055-1062. doi:10.1111/j.1365-2753.2009.01249.x.
8. Patel MS, Reed DA, Loertscher L, McDonald FS, Arora VM. Teaching residents to provide cost-conscious care: a national survey of residency program directors. JAMA Intern Med. 2014;174(3):470-472. doi:10.1001/jamainternmed.2013.13222.
9. Cohen NL. Using the ABCs of situational awareness for patient safety. Nursing. 2013;43(4):64-65. doi:10.1097/01.NURSE.0000428332.23978.82.
10. Varkey P, Karlapudi S, Rose S, Swensen S. A patient safety curriculum for graduate medical education: results from a needs assessment of educators and patient safety experts. Am J Med Qual. 2009;24(3):214-221. doi:10.1177/1062860609332905.
11. Farnan JM, Gaffney S, Poston JT, et al. Patient safety room of horrors: a novel method to assess medical students and entering residents’ ability to identify hazards of hospitalisation. BMJ Qual Saf. 2016;25(3):153-158. doi:10.1136/bmjqs-2015-004621.
12. Centers for Medicare and Medicaid Services. Hospital-Acquired Condition Reduction Program. https://www.medicare.gov/hospitalcompare/HAC-reduction-program.html. Accessed August 1, 2015.
13. Agency for Healthcare Research and Quality. Patient Safety Indicators Overview. http://www.qualityindicators.ahrq.gov/modules/psi_overview.aspx. Accessed August 20, 2015.
14. ABIM Foundation. Choosing Wisely. http://www.choosingwisely.org. Accessed August 21, 2015.
15. ABIM Foundation. Society of Hospital Medicine – Adult Hospital Medicine List. Choosing Wisely. http://www.choosingwisely.org/societies/society-of-hospital-medicine-adult/. Accessed August 21, 2015.
16. Carson JL, Grossman BJ, Kleinman S, et al. Red blood cell transfusion: a clinical practice guideline from the AABB. Ann Intern Med. 2012;157(1):49-58.
17. ABIM Foundation. American Geriatrics Society List. Choosing Wisely. http://www.choosingwisely.org/societies/american-geriatrics-society/. Accessed August 21, 2015.
18. US News & World Report. The Best Medical Schools for Research, Ranked. http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-medical-schools/research-rankings. Accessed June 7, 2016.
19. Roman BR, Asch DA. Faded promises: the challenge of deadopting low-value care. Ann Intern Med. 2014;161(2):149-150. doi:10.7326/M14-0212.
20. Moser EM, Huang GC, Packer CD, et al. SOAP-V: introducing a method to empower medical students to be change agents in bending the cost curve. J Hosp Med. 2016;11(3):217-220. doi:10.1002/jhm.2489.

Journal of Hospital Medicine. 2017;12(7):493-497.

In recent years, the American Board of Internal Medicine (ABIM) Foundation’s Choosing Wisely™ campaign has advanced the dialogue on cost-consciousness by identifying potential examples of overuse in clinical practice.1 Eliminating low-value care can decrease costs, improve quality, and potentially decrease patient harm.2 In fact, there is growing consensus among health leaders and educators on the need for a physician workforce that is conscious of high-value care.3,4 The Institute of Medicine has issued a call-to-action for graduate medical education (GME) to emphasize value-based care,5 and the Accreditation Council for Graduate Medical Education has outlined expectations that residents receive formal and experiential training on overuse as a part of its Clinical Learning Environment Review.6

However, recent reports highlight a lack of emphasis on value-based care in medical education.7 For example, few residency program directors believe that residents are prepared to incorporate value and cost into their medical decisions.8 In 2012, only 15% of medicine residencies reported having formal curricula addressing value, although many were developing one.8 Of the curricula reported, most were didactic in nature and did not include an assessment component.8

Experiential learning through simulation is one promising method to teach clinicians-in-training to practice value-based care. Simulation-based training promotes situational awareness (defined as being cognizant of one’s working environment), a concept that is crucial for recognizing both low-value and unsafe care.9,10 Simulated training exercises are often included in GME orientation “boot-camps,” which have typically addressed safety.11 The incorporation of value into existing GME boot-camp exercises could provide a promising model for the addition of value-based training to GME.

At the University of Chicago, we had successfully implemented the “Room of Horrors,” a simulation for entering interns to promote the detection of patient safety hazards.11 Here, we describe a modification to this simulation to embed low-value hazards in addition to traditional patient safety hazards. The aim of this study is to assess the entering interns’ recognition of low-value care and their ability to recognize unsafe care in a simulation designed to promote situational awareness.

METHODS

Setting and Participants

The simulation was conducted during GME orientation at a large, urban academic medical institution. One hundred and twenty-five entering postgraduate year one (PGY1) interns participated in the simulation, which was a required component of a multiday orientation “boot-camp” experience. All eligible interns participated in the simulation, representing 13 specialty programs and 60 medical schools. Interns entering into pathology were excluded because of infrequent patient contact. Participating interns were divided into 7 specialty groups for analysis in order to preserve the anonymity of interns in smaller residency programs (surgical subspecialties combined with general surgery, medicine-pediatrics grouped with internal medicine). The University of Chicago Institutional Review Board deemed this study exempt from review.

 

 

Program Description

A simulation of an inpatient hospital room, known as the “Room of Horrors,” was constructed in collaboration with the University of Chicago Simulation Center and adapted from a previous version of the exercise.11 The simulation consisted of a mock door chart highlighting the patient had been admitted for diarrhea (Clostridium difficile positive) following a recent hospitalization for pneumonia. A clinical scenario was constructed by using a patient mannequin and an accompanying door chart that listed information on the patient’s hospital course, allergies, and medications. In addition to the 8 patient safety hazards utilized in the prior version, our team selected 4 low-value hazards to be included in the simulation.

Safety and Low-Value Hazards Simulated in the “Room of Horrors”
Table 1

The 8 safety hazards have been detailed in a prior study and were previously selected from Medicare’s Hospital-Acquired Conditions (HAC) Reduction Program and Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicators.11-13 Each of the hazards was represented either physically in the simulation room and/or was indicated on the patient’s chart. For example, the latex allergy hazard was represented by latex gloves at the bedside despite an allergy indicated on the patient’s chart and wristband. A complete list of the 8 safety hazards and their representations in the simulation is shown in Table 1.

The Choosing Wisely™ lists were reviewed to identify low-value hazards for addition to the simulation.14 Our team selected 3 low-value hazards from the Society of Hospital Medicine (SHM) list,15 including (1) arbitrary blood transfusion despite the patient’s stable hemoglobin level of 8.0 g/dL and absence of cardiac symptoms,16 (2) addition of a proton pump inhibitor (PPI) for stress ulcer prophylaxis in a patient without high risk for gastrointestinal (GI) complications who was not on a PPI prior to admission, and (3) placement of a urinary catheter without medical indication. We had originally selected continuous telemetry monitoring as a fourth hazard from the SHM list, but were unable to operationalize, as it was difficult to simulate continuous telemetry on a mannequin. Because many inpatients are older than 65 years, we reviewed the American Geriatrics Society list17 and selected our fourth low-value hazard: (4) unnecessary use of physical restraints to manage behavioral symptoms in a hospitalized patient with delirium. Several of these hazards were also quality and safety priorities at our institution, including the overuse of urinary catheters, physical restraints, and blood transfusions. All 4 low-value hazards were referenced in the patient’s door chart, and 3 were also physically represented in the room via presence of a hanging unit of blood, Foley catheter, and upper-arm restraints (Table 1). See Appendix for a photograph of the simulation setup.

Each intern was allowed 10 minutes inside the simulation room. During this time, they were instructed to read the 1-page door chart, inspect the simulation room, and write down as many potential low-value and safety hazards as they could identify on a free-response form (see Appendix). Upon exiting the room, they were allotted 5 additional minutes to complete their free-response answers and provide written feedback on the simulation. The simulation was conducted in 3 simulated hospital rooms over the course of 2 days, and the correct answers were provided via e-mail after all interns had completed the exercise.

To assess prior training and safety knowledge, interns were asked to complete a 3-question preassessment on a ScanTronTM (Tustin, CA) form. The preassessment asked interns whether they had received training on hospital safety during medical school (yes, no, or unsure), if they were satisfied with the hospital safety training they received during medical school (strongly disagree to strongly agree on a Likert scale), and if they were confident in their ability to identify potential hazards in a hospital setting (strongly disagree to strongly agree). Interns were also given the opportunity to provide feedback on the simulation experience on the ScanTronTM (Tustin, CA) form.

One month after participating in the simulation, interns were asked to complete an online follow-up survey on MedHubTM (Ann Arbor, MI), which included 2 Likert-scale questions (strongly disagree to strongly agree) assessing the simulation’s impact on their experience mitigating hospital hazards during the first month of internship.

Data Analysis

Interns’ free-response answers were manually coded, and descriptive statistics were used to summarize the mean percent correct for each hazard. A paired t test was used to compare intern identification of low-value vs safety hazards. T tests were used to compare hazard identification for interns entering highly procedural-intensive specialties (ie, surgical specialties, emergency medicine, anesthesia, obstetrics/gynecology) and those entering less procedural-intensive specialties (ie, internal medicine, pediatrics, psychiatry), as well as among those graduating from “Top 30” medical schools (based on US News & World Report Medical School Rankings18) and our own institution. One-way analysis of variance (ANOVA) calculations were used to test for differences in hazard identification based on interns’ prior hospital safety training, with interns who rated their satisfaction with prior training or confidence in identifying hazards as a “4” or a “5” considered “satisfied” and “confident,” respectively. Responses to the MedHubTM (Ann Arbor, MI) survey were dichotomized with “strongly agree” and “agree” considered positive responses. Statistical significance was defined at P < .05. All data analysis was conducted using Stata 14TM software (College Station, TX).

 

 

RESULTS

Intern Characteristics

Characteristics of Interns Participating in the “Room of Horrors” Simulation
Table 2

One hundred twenty-five entering PGY1 interns participated in the simulation, representing 60 medical schools and 7 different specialty groups (Table 2). Thirty-five percent (44/125) were graduates from “Top 30” medical schools, and 8.8% (11/125) graduated from our own institution. Seventy-four percent (89/121) had received prior hospital safety training during medical school, and 62.9% (56/89) were satisfied with their training. A majority of interns (64.2%, 79/123) felt confident in their ability to identify potential hazards in a hospital setting, although confidence was much higher among those with prior safety training (71.9%, 64/89) compared to those without prior training or who were unsure about their training (40.6%, 13/32; P = .09, t test).

Distribution of interns’ performance in the “Room of Horrors” simulation, based on the percentage of hazards correctly identified. N = 125.
Figure 1

Identification of Hazards

The mean percentage of hazards correctly identified by interns during the simulation was 50.4% (standard deviation [SD] 11.8%), with a normal distribution (Figure 1). Interns identified a significantly lower percentage of low-value hazards than safety hazards in the simulation (mean 19.2% [SD 18.6%] vs 66.0% [SD 16.0%], respectively; P < .001, paired t test). Interns also identified significantly more room-based errors than chart-based errors (mean 58.6% [SD 13.4%] vs 9.6% [SD 19.8%], respectively; P < .001, paired t test). The 3 most commonly identified hazards were unavailability of hand hygiene (120/125, 96.0%), presence of latex gloves despite the patient’s allergy (111/125, 88.8%), and fall risk due to the lowered bed rail (107/125, 85.6%). More than half of interns identified the incorrect name on the patient’s wristband and IV bag (91/125, 72.8%), a lack of isolation precautions (90/125, 72.0%), administration of penicillin despite the patient’s allergy (67/125, 53.6%), and unnecessary restraints (64/125, 51.2%). Less than half of interns identified the wrong medication being administered (50/125, 40.0%), unnecessary Foley catheter (25/125, 20.0%), and absence of venous thromboembolism (VTE) prophylaxis (24/125, 19.2%). Few interns identified the unnecessary blood transfusion (7/125, 5.6%), and no one identified the unnecessary stress ulcer prophylaxis (0/125, 0.0%; Figure 2).

Percentage of interns who correctly identified each hazard, with low-value hazards indicated by an asterisk (*). N = 125.
Figure 2

Predictors of Hazard Identification

Interns who self-reported as confident in their ability to identify hazards were not any more likely to correctly identify hazards than those who were not confident (50.9% overall hazard identification vs 49.6%, respectively; P = .56, t test). Interns entering into less procedural-intensive specialties identified significantly more safety hazards than those entering highly procedural-intensive specialties (mean 69.1% [SD 16.9%] vs 61.8% [SD 13.7%], respectively; P = .01, t test). However, there was no statistically significant difference in their identification of low-value hazards (mean 19.8% [SD 18.3%] for less procedural-intensive vs 18.4% [SD 19.1%] for highly procedural-intensive; P = .68, t test). There was no statistically significant difference in hazard identification among graduates of “Top 30” medical schools or graduates of our own institution. Prior hospital safety training had no significant impact on interns’ ability to identify safety or low-value hazards. Overall, interns who were satisfied with their prior training identified a mean of 51.8% of hazards present (SD 11.8%), interns who were not satisfied with their prior training identified 51.5% (SD 12.7%), interns with no prior training identified 48.7% (SD 11.7%), and interns who were unsure about their prior training identified 47.4% (SD 11.5%) [F(3,117) = .79; P = .51, ANOVA]. There was also no significant association between prior training and the identification of any one of the 12 specific hazards (chi-square tests, all P values > .1).

Intern Feedback and Follow-Up Survey

Debriefing revealed that most interns passively assumed the patient’s chart was correct and did not think they should question the patient’s current care regimen. For example, many interns commented that they did not think to consider the patient’s blood transfusion as unnecessary, even though they were aware of the recommended hemoglobin cutoffs for stable patients.

Interns also provided formal feedback on the simulation through open-ended comments on their ScanTronTM (Tustin, CA) form. For example, one intern wrote that they would “inherently approach every patient room ‘looking’ for safety issues, probably directly because of this exercise.” Another commented that the simulation was “more difficult than I expected, but very necessary to facilitate discussion and learning.” One intern wrote that “I wish I had done this earlier in my career.”

Ninety-six percent of participating interns (120/125) completed an online follow-up survey 1 month after beginning internship. In the survey, 68.9% (82/119) of interns indicated they were more aware of how to identify potential hazards facing hospitalized patients as a result of the simulation. Furthermore, 52.1% (62/119) of interns had taken action during internship to reduce a potential hazard that was present in the simulation.

DISCUSSION

While many GME orientations include simulation and safety training, this study is the first of its kind to incorporate low-value care from Choosing Wisely™ recommendations into simulated training. It is concerning that interns identified significantly fewer low-value hazards than safety hazards in the simulation. In some cases, no interns identified the low-value hazard. For example, while almost all interns identified the hand hygiene hazard, not one could identify the unnecessary stress ulcer prophylaxis. Furthermore, interns who self-reported as confident in their ability to identify hazards did not perform any better in the simulation. Interns entering less procedural-intensive specialties identified more safety hazards overall.

 

 

The simulation was well received by interns. Many commented that the experience was engaging, challenging, and effective in cultivating situational awareness towards low-value care. Our follow-up survey demonstrated the majority of interns reported taking action during their first month of internship to reduce a hazard included in the simulation. Most interns also reported a greater awareness of how to identify hospital hazards as a result of the simulation. These findings suggest that a brief simulation-based experience has the potential to create a lasting retention of situational awareness and behavior change.

It is worth exploring why interns identified significantly fewer low-value hazards than safety hazards in the simulation. One hypothesis is that interns were less attuned to low-value hazards, which may reflect a lacking emphasis on value-based care in undergraduate medical education (UME). It is especially concerning that so few interns identified the catheter-associated urinary tract infection (CAUTI) risk, as interns are primarily responsible for recognizing and removing an unnecessary catheter. Although the risks of low-value care should be apparent to most trainees, the process of recognizing and deliberately stopping or avoiding low-value care can be challenging for young clinicians.19 To promote value-based thinking among entering residents, UME programs should teach students to question the utility of the interventions their patients are receiving. One promising framework for doing so is the Subjective, Objective, Assessment, Plan- (SOAP)-V, in which a V for “Value” is added to the traditional SOAP note.20 SOAP-V notes serve as a cognitive forcing function that requires students to pause and assess the value and cost-consciousness of their patients’ care.20

The results from the “Room of Horrors” simulation can also guide health leaders and educators in identifying institutional areas of focus towards providing high-value and safe care. For example, at the University of Chicago we launched an initiative to improve the inappropriate use of urinary catheters after learning that few of our incoming interns recognized this during the simulation. Institutions could use this model to raise awareness of initiatives and redirect resources from areas that trainees perform well in (eg, hand hygiene) to areas that need improvement (eg, recognition of low-value care). Given the simulation’s low cost and minimal material requirements, it could be easily integrated into existing training programs with the support of an institution’s simulation center.

This study’s limitations include its conduction at single-institution, although the participants represented graduates of 60 different institutions. Furthermore, while the 12 hazards included in the simulation represent patient safety and value initiatives from a wide array of medical societies, they were not intended to be comprehensive and were not tailored to specific specialties. The simulation included only 4 low-value hazards, and future iterations of this exercise should aim to include an equal number of safety and low-value hazards. Furthermore, the evaluation of interns’ prior hospital safety training relied on self-reporting, and the specific context and content of each interns’ training was not examined. Finally, at this point we are unable to provide objective longitudinal data assessing the simulation’s impact on clinical practice and patient outcomes. Subsequent work will assess the sustained impact of the simulation by correlating with institutional data on measurable occurrences of low-value care.

In conclusion, interns identified significantly fewer low-value hazards than safety hazards in an inpatient simulation designed to promote situational awareness. Our results suggest that interns are on the lookout for errors of omission (eg, absence of hand hygiene, absence of isolation precautions) but are often blinded to errors of commission, such that when patients are started on therapies there is an assumption that the therapies are correct and necessary (eg, blood transfusions, stress ulcer prophylaxis). These findings suggest poor awareness of low-value care among incoming interns and highlight the need for additional training in both UME and GME to place a greater emphasis on preventing low-value care.

Disclosure

Dr. Arora is a member of the American Board of Medicine Board of Directors and has received grant funding from ABIM Foundation via Costs of Care for the Teaching Value Choosing Wisely™ Challenge. Dr. Farnan, Dr. Arora, and Ms. Hirsch receive grant funds from Accreditation Council of Graduate Medical Education as part of the Pursuing Excellence Initiative. Dr. Arora and Dr. Farnan also receive grant funds from the American Medical Association Accelerating Change in Medical Education initiative. Kathleen Wiest and Lukas Matern were funded through matching funds of the Pritzker Summer Research Program for NIA T35AG029795.

In recent years, the American Board of Internal Medicine (ABIM) Foundation’s Choosing Wisely™ campaign has advanced the dialogue on cost-consciousness by identifying potential examples of overuse in clinical practice.1 Eliminating low-value care can decrease costs, improve quality, and potentially decrease patient harm.2 In fact, there is growing consensus among health leaders and educators on the need for a physician workforce that is conscious of high-value care.3,4 The Institute of Medicine has issued a call-to-action for graduate medical education (GME) to emphasize value-based care,5 and the Accreditation Council for Graduate Medical Education has outlined expectations that residents receive formal and experiential training on overuse as a part of its Clinical Learning Environment Review.6

However, recent reports highlight a lack of emphasis on value-based care in medical education.7 For example, few residency program directors believe that residents are prepared to incorporate value and cost into their medical decisions.8 In 2012, only 15% of medicine residencies reported having formal curricula addressing value, although many were developing one.8 Of the curricula reported, most were didactic in nature and did not include an assessment component.8

Experiential learning through simulation is one promising method to teach clinicians-in-training to practice value-based care. Simulation-based training promotes situational awareness (defined as being cognizant of one’s working environment), a concept that is crucial for recognizing both low-value and unsafe care.9,10 Simulated training exercises are often included in GME orientation “boot-camps,” which have typically addressed safety.11 The incorporation of value into existing GME boot-camp exercises could provide a promising model for the addition of value-based training to GME.

At the University of Chicago, we had successfully implemented the “Room of Horrors,” a simulation for entering interns to promote the detection of patient safety hazards.11 Here, we describe a modification to this simulation to embed low-value hazards in addition to traditional patient safety hazards. The aim of this study is to assess the entering interns’ recognition of low-value care and their ability to recognize unsafe care in a simulation designed to promote situational awareness.

METHODS

Setting and Participants

The simulation was conducted during GME orientation at a large, urban academic medical institution. One hundred and twenty-five entering postgraduate year one (PGY1) interns participated in the simulation, which was a required component of a multiday orientation “boot-camp” experience. All eligible interns participated in the simulation, representing 13 specialty programs and 60 medical schools. Interns entering into pathology were excluded because of infrequent patient contact. Participating interns were divided into 7 specialty groups for analysis in order to preserve the anonymity of interns in smaller residency programs (surgical subspecialties combined with general surgery, medicine-pediatrics grouped with internal medicine). The University of Chicago Institutional Review Board deemed this study exempt from review.

 

 

Program Description

A simulation of an inpatient hospital room, known as the “Room of Horrors,” was constructed in collaboration with the University of Chicago Simulation Center and adapted from a previous version of the exercise.11 The simulation consisted of a mock door chart highlighting the patient had been admitted for diarrhea (Clostridium difficile positive) following a recent hospitalization for pneumonia. A clinical scenario was constructed by using a patient mannequin and an accompanying door chart that listed information on the patient’s hospital course, allergies, and medications. In addition to the 8 patient safety hazards utilized in the prior version, our team selected 4 low-value hazards to be included in the simulation.

Safety and Low-Value Hazards Simulated in the “Room of Horrors”
Table 1

The 8 safety hazards have been detailed in a prior study and were previously selected from Medicare’s Hospital-Acquired Conditions (HAC) Reduction Program and Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicators.11-13 Each of the hazards was represented either physically in the simulation room and/or was indicated on the patient’s chart. For example, the latex allergy hazard was represented by latex gloves at the bedside despite an allergy indicated on the patient’s chart and wristband. A complete list of the 8 safety hazards and their representations in the simulation is shown in Table 1.

The Choosing Wisely™ lists were reviewed to identify low-value hazards for addition to the simulation.14 Our team selected 3 low-value hazards from the Society of Hospital Medicine (SHM) list,15 including (1) arbitrary blood transfusion despite the patient’s stable hemoglobin level of 8.0 g/dL and absence of cardiac symptoms,16 (2) addition of a proton pump inhibitor (PPI) for stress ulcer prophylaxis in a patient without high risk for gastrointestinal (GI) complications who was not on a PPI prior to admission, and (3) placement of a urinary catheter without medical indication. We had originally selected continuous telemetry monitoring as a fourth hazard from the SHM list, but were unable to operationalize, as it was difficult to simulate continuous telemetry on a mannequin. Because many inpatients are older than 65 years, we reviewed the American Geriatrics Society list17 and selected our fourth low-value hazard: (4) unnecessary use of physical restraints to manage behavioral symptoms in a hospitalized patient with delirium. Several of these hazards were also quality and safety priorities at our institution, including the overuse of urinary catheters, physical restraints, and blood transfusions. All 4 low-value hazards were referenced in the patient’s door chart, and 3 were also physically represented in the room via presence of a hanging unit of blood, Foley catheter, and upper-arm restraints (Table 1). See Appendix for a photograph of the simulation setup.

Each intern was allowed 10 minutes inside the simulation room. During this time, they were instructed to read the 1-page door chart, inspect the simulation room, and write down as many potential low-value and safety hazards as they could identify on a free-response form (see Appendix). Upon exiting the room, they were allotted 5 additional minutes to complete their free-response answers and provide written feedback on the simulation. The simulation was conducted in 3 simulated hospital rooms over the course of 2 days, and the correct answers were provided via e-mail after all interns had completed the exercise.

To assess prior training and safety knowledge, interns were asked to complete a 3-question preassessment on a ScanTronTM (Tustin, CA) form. The preassessment asked interns whether they had received training on hospital safety during medical school (yes, no, or unsure), if they were satisfied with the hospital safety training they received during medical school (strongly disagree to strongly agree on a Likert scale), and if they were confident in their ability to identify potential hazards in a hospital setting (strongly disagree to strongly agree). Interns were also given the opportunity to provide feedback on the simulation experience on the ScanTronTM (Tustin, CA) form.

One month after participating in the simulation, interns were asked to complete an online follow-up survey on MedHubTM (Ann Arbor, MI), which included 2 Likert-scale questions (strongly disagree to strongly agree) assessing the simulation’s impact on their experience mitigating hospital hazards during the first month of internship.

Data Analysis

Interns’ free-response answers were manually coded, and descriptive statistics were used to summarize the mean percent correct for each hazard. A paired t test was used to compare intern identification of low-value vs safety hazards. T tests were used to compare hazard identification for interns entering highly procedural-intensive specialties (ie, surgical specialties, emergency medicine, anesthesia, obstetrics/gynecology) and those entering less procedural-intensive specialties (ie, internal medicine, pediatrics, psychiatry), as well as among those graduating from “Top 30” medical schools (based on US News & World Report Medical School Rankings18) and our own institution. One-way analysis of variance (ANOVA) calculations were used to test for differences in hazard identification based on interns’ prior hospital safety training, with interns who rated their satisfaction with prior training or confidence in identifying hazards as a “4” or a “5” considered “satisfied” and “confident,” respectively. Responses to the MedHubTM (Ann Arbor, MI) survey were dichotomized with “strongly agree” and “agree” considered positive responses. Statistical significance was defined at P < .05. All data analysis was conducted using Stata 14TM software (College Station, TX).

 

 

RESULTS

Intern Characteristics

Characteristics of Interns Participating in the “Room of Horrors” Simulation
Table 2

One hundred twenty-five entering PGY1 interns participated in the simulation, representing 60 medical schools and 7 different specialty groups (Table 2). Thirty-five percent (44/125) were graduates from “Top 30” medical schools, and 8.8% (11/125) graduated from our own institution. Seventy-four percent (89/121) had received prior hospital safety training during medical school, and 62.9% (56/89) were satisfied with their training. A majority of interns (64.2%, 79/123) felt confident in their ability to identify potential hazards in a hospital setting, although confidence was much higher among those with prior safety training (71.9%, 64/89) compared to those without prior training or who were unsure about their training (40.6%, 13/32; P = .09, t test).

Distribution of interns’ performance in the “Room of Horrors” simulation, based on the percentage of hazards correctly identified. N = 125.
Figure 1

Identification of Hazards

The mean percentage of hazards correctly identified by interns during the simulation was 50.4% (standard deviation [SD] 11.8%), with a normal distribution (Figure 1). Interns identified a significantly lower percentage of low-value hazards than safety hazards in the simulation (mean 19.2% [SD 18.6%] vs 66.0% [SD 16.0%], respectively; P < .001, paired t test). Interns also identified significantly more room-based errors than chart-based errors (mean 58.6% [SD 13.4%] vs 9.6% [SD 19.8%], respectively; P < .001, paired t test). The 3 most commonly identified hazards were unavailability of hand hygiene (120/125, 96.0%), presence of latex gloves despite the patient’s allergy (111/125, 88.8%), and fall risk due to the lowered bed rail (107/125, 85.6%). More than half of interns identified the incorrect name on the patient’s wristband and IV bag (91/125, 72.8%), a lack of isolation precautions (90/125, 72.0%), administration of penicillin despite the patient’s allergy (67/125, 53.6%), and unnecessary restraints (64/125, 51.2%). Less than half of interns identified the wrong medication being administered (50/125, 40.0%), unnecessary Foley catheter (25/125, 20.0%), and absence of venous thromboembolism (VTE) prophylaxis (24/125, 19.2%). Few interns identified the unnecessary blood transfusion (7/125, 5.6%), and no one identified the unnecessary stress ulcer prophylaxis (0/125, 0.0%; Figure 2).

Percentage of interns who correctly identified each hazard, with low-value hazards indicated by an asterisk (*). N = 125.
Figure 2

Predictors of Hazard Identification

Interns who self-reported as confident in their ability to identify hazards were no more likely to correctly identify hazards than those who were not confident (50.9% vs 49.6% overall hazard identification, respectively; P = .56, t test). Interns entering less procedural-intensive specialties identified significantly more safety hazards than those entering highly procedural-intensive specialties (mean 69.1% [SD 16.9%] vs 61.8% [SD 13.7%], respectively; P = .01, t test). However, there was no statistically significant difference in their identification of low-value hazards (mean 19.8% [SD 18.3%] for less procedural-intensive vs 18.4% [SD 19.1%] for highly procedural-intensive; P = .68, t test). Hazard identification did not differ significantly for graduates of “Top 30” medical schools or for graduates of our own institution compared with other interns. Prior hospital safety training had no significant impact on interns’ ability to identify safety or low-value hazards. Overall, interns who were satisfied with their prior training identified a mean of 51.8% of hazards present (SD 11.8%), interns who were not satisfied with their prior training identified 51.5% (SD 12.7%), interns with no prior training identified 48.7% (SD 11.7%), and interns who were unsure about their prior training identified 47.4% (SD 11.5%) [F(3,117) = 0.79; P = .51, ANOVA]. There was also no significant association between prior training and identification of any one of the 12 specific hazards (chi-square tests, all P values > .1).
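
The per-hazard association tests reported above are simple 2 × 2 chi-square tests. A minimal sketch for a single hazard follows, again with hypothetical counts; the text does not break out identification counts by training status, so these numbers are stand-ins only.

```python
# Chi-square test of association between prior safety training and
# identification of one specific hazard. Counts are hypothetical.
from scipy.stats import chi2_contingency

#                identified  missed
contingency = [[45, 44],   # interns with prior training
               [15, 17]]   # interns without prior training / unsure
chi2, p, dof, expected = chi2_contingency(contingency)
print(f"chi2({dof}) = {chi2:.2f}, P = {p:.3f}")
```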

Intern Feedback and Follow-Up Survey

Debriefing revealed that most interns passively assumed the patient’s chart was correct and did not think they should question the patient’s current care regimen. For example, many interns commented that they did not think to consider the patient’s blood transfusion as unnecessary, even though they were aware of the recommended hemoglobin cutoffs for stable patients.

Interns also provided formal feedback on the simulation through open-ended comments on their Scantron (Tustin, CA) form. For example, one intern wrote that they would “inherently approach every patient room ‘looking’ for safety issues, probably directly because of this exercise.” Another commented that the simulation was “more difficult than I expected, but very necessary to facilitate discussion and learning.” A third wrote, “I wish I had done this earlier in my career.”

Ninety-six percent of participating interns (120/125) completed an online follow-up survey 1 month after beginning internship. In the survey, 68.9% (82/119) of interns indicated they were more aware of how to identify potential hazards facing hospitalized patients as a result of the simulation. Furthermore, 52.1% (62/119) of interns had taken action during internship to reduce a potential hazard that was present in the simulation.

DISCUSSION

While many GME orientations include simulation and safety training, this study is the first of its kind to incorporate low-value care from Choosing Wisely™ recommendations into simulated training. It is concerning that interns identified significantly fewer low-value hazards than safety hazards in the simulation. In some cases, no interns identified the low-value hazard. For example, while almost all interns identified the hand hygiene hazard, not one could identify the unnecessary stress ulcer prophylaxis. Furthermore, interns who self-reported as confident in their ability to identify hazards did not perform any better in the simulation. Interns entering less procedural-intensive specialties identified more safety hazards overall.

The simulation was well received by interns. Many commented that the experience was engaging, challenging, and effective in cultivating situational awareness of low-value care. In our follow-up survey, the majority of interns reported taking action during their first month of internship to reduce a hazard included in the simulation. Most interns also reported a greater awareness of how to identify hospital hazards as a result of the simulation. These findings suggest that a brief simulation-based experience has the potential to produce durable situational awareness and behavior change.

It is worth exploring why interns identified significantly fewer low-value hazards than safety hazards in the simulation. One hypothesis is that interns were less attuned to low-value hazards, which may reflect the lack of emphasis on value-based care in undergraduate medical education (UME). It is especially concerning that so few interns identified the catheter-associated urinary tract infection (CAUTI) risk, as interns are primarily responsible for recognizing and removing an unnecessary catheter. Although the risks of low-value care should be apparent to most trainees, the process of recognizing and deliberately stopping or avoiding low-value care can be challenging for young clinicians.19 To promote value-based thinking among entering residents, UME programs should teach students to question the utility of the interventions their patients are receiving. One promising framework for doing so is the Subjective, Objective, Assessment, Plan-Value (SOAP-V) note, in which a V for “value” is added to the traditional SOAP note.20 SOAP-V notes serve as a cognitive forcing function that requires students to pause and assess the value and cost-consciousness of their patients’ care.20

The results from the “Room of Horrors” simulation can also guide health leaders and educators in identifying institutional areas of focus for providing high-value and safe care. For example, at the University of Chicago we launched an initiative to reduce inappropriate use of urinary catheters after learning that few of our incoming interns recognized this hazard during the simulation. Institutions could use this model to raise awareness of initiatives and redirect resources from areas in which trainees perform well (eg, hand hygiene) to areas that need improvement (eg, recognition of low-value care). Given the simulation’s low cost and minimal material requirements, it could be easily integrated into existing training programs with the support of an institution’s simulation center.

This study’s limitations include its conduct at a single institution, although the participants represented graduates of 60 different institutions. Furthermore, while the 12 hazards included in the simulation represent patient safety and value initiatives from a wide array of medical societies, they were not intended to be comprehensive and were not tailored to specific specialties. The simulation included only 4 low-value hazards, and future iterations of this exercise should aim to include an equal number of safety and low-value hazards. In addition, the evaluation of interns’ prior hospital safety training relied on self-reporting, and the specific context and content of each intern’s training was not examined. Finally, at this point we are unable to provide objective longitudinal data assessing the simulation’s impact on clinical practice and patient outcomes. Subsequent work will assess the sustained impact of the simulation by correlating it with institutional data on measurable occurrences of low-value care.

In conclusion, interns identified significantly fewer low-value hazards than safety hazards in an inpatient simulation designed to promote situational awareness. Our results suggest that interns are on the lookout for errors of omission (eg, absence of hand hygiene, absence of isolation precautions) but are often blinded to errors of commission, such that when patients are started on therapies there is an assumption that the therapies are correct and necessary (eg, blood transfusions, stress ulcer prophylaxis). These findings suggest poor awareness of low-value care among incoming interns and highlight the need for additional training in both UME and GME to place a greater emphasis on preventing low-value care.

Disclosure

Dr. Arora is a member of the American Board of Internal Medicine Board of Directors and has received grant funding from the ABIM Foundation via Costs of Care for the Teaching Value Choosing Wisely™ Challenge. Dr. Farnan, Dr. Arora, and Ms. Hirsch receive grant funds from the Accreditation Council for Graduate Medical Education as part of the Pursuing Excellence Initiative. Dr. Arora and Dr. Farnan also receive grant funds from the American Medical Association Accelerating Change in Medical Education initiative. Kathleen Wiest and Lukas Matern were funded through matching funds of the Pritzker Summer Research Program for NIA T35AG029795.

References

1. Colla CH, Morden NE, Sequist TD, Schpero WL, Rosenthal MB. Choosing wisely: prevalence and correlates of low-value health care services in the United States. J Gen Intern Med. 2015;30(2):221-228. doi:10.1007/s11606-014-3070-z. PubMed
2. Elshaug AG, McWilliams JM, Landon BE. The value of low-value lists. JAMA. 2013;309(8):775-776. doi:10.1001/jama.2013.828. PubMed
3. Cooke M. Cost consciousness in patient care--what is medical education’s responsibility? N Engl J Med. 2010;362(14):1253-1255. doi:10.1056/NEJMp0911502. PubMed
4. Weinberger SE. Providing high-value, cost-conscious care: a critical seventh general competency for physicians. Ann Intern Med. 2011;155(6):386-388. doi:10.7326/0003-4819-155-6-201109200-00007. PubMed
5. Graduate Medical Education That Meets the Nation’s Health Needs. Institute of Medicine. http://www.nationalacademies.org/hmd/Reports/2014/Graduate-Medical-Education-That-Meets-the-Nations-Health-Needs.aspx. Accessed May 25, 2016.
6. Accreditation Council for Graduate Medical Education. CLER Pathways to Excellence. https://www.acgme.org/acgmeweb/Portals/0/PDFs/CLER/CLER_Brochure.pdf. Accessed July 15, 2015.
7. Varkey P, Murad MH, Braun C, Grall KJH, Saoji V. A review of cost-effectiveness, cost-containment and economics curricula in graduate medical education. J Eval Clin Pract. 2010;16(6):1055-1062. doi:10.1111/j.1365-2753.2009.01249.x. PubMed
8. Patel MS, Reed DA, Loertscher L, McDonald FS, Arora VM. Teaching residents to provide cost-conscious care: a national survey of residency program directors. JAMA Intern Med. 2014;174(3):470-472. doi:10.1001/jamainternmed.2013.13222. PubMed
9. Cohen NL. Using the ABCs of situational awareness for patient safety. Nursing. 2013;43(4):64-65. doi:10.1097/01.NURSE.0000428332.23978.82. PubMed
10. Varkey P, Karlapudi S, Rose S, Swensen S. A patient safety curriculum for graduate medical education: results from a needs assessment of educators and patient safety experts. Am J Med Qual. 2009;24(3):214-221. doi:10.1177/1062860609332905. PubMed
11. Farnan JM, Gaffney S, Poston JT, et al. Patient safety room of horrors: a novel method to assess medical students and entering residents’ ability to identify hazards of hospitalisation. BMJ Qual Saf. 2016;25(3):153-158. doi:10.1136/bmjqs-2015-004621. PubMed
12. Centers for Medicare and Medicaid Services Hospital-acquired condition reduction program. Medicare.gov. https://www.medicare.gov/hospitalcompare/HAC-reduction-program.html. Accessed August 1, 2015.
13. Agency for Healthcare Research and Quality. Patient Safety Indicators Overview. http://www.qualityindicators.ahrq.gov/modules/psi_overview.aspx. Accessed August 20, 2015.
14. ABIM Foundation. Choosing Wisely. http://www.choosingwisely.org. Accessed August 21, 2015.
15. ABIM Foundation. Society of Hospital Medicine – Adult Hospital Medicine List. Choosing Wisely. http://www.choosingwisely.org/societies/society-of-hospital-medicine-adult/. Accessed August 21, 2015.
16. Carson JL, Grossman BJ, Kleinman S, et al. Red blood cell transfusion: A clinical practice guideline from the AABB*. Ann Intern Med. 2012;157(1):49-58. PubMed
17. ABIM Foundation. American Geriatrics Society List. Choosing Wisely. http://www.choosingwisely.org/societies/american-geriatrics-society/. Accessed August 21, 2015.
18. The Best Medical Schools for Research, Ranked. http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-medical-schools/research-rankings?int=af3309&int=b3b50a&int=b14409. Accessed June 7, 2016.
19. Roman BR, Asch DA. Faded promises: The challenge of deadopting low-value care. Ann Intern Med. 2014;161(2):149-150. doi:10.7326/M14-0212. PubMed
20. Moser EM, Huang GC, Packer CD, et al. SOAP-V: Introducing a method to empower medical students to be change agents in bending the cost curve. J Hosp Med. 2016;11(3):217-220. doi:10.1002/jhm.2489. PubMed


Journal of Hospital Medicine 12(7):493-497

© 2017 Society of Hospital Medicine

Address for correspondence and reprint requests: Vineet Arora, The University of Chicago Medicine, 5841 S Maryland Ave, MC 2007, Chicago, IL 60637; Telephone: 773-702-8157; Fax: 773-834-2238; E-mail: varora@medicine.bsd.uchicago.edu

A qualitative analysis of patients' experience with hospitalist service handovers

Studies examining the importance of continuity of care have shown that patients who maintain a continuous relationship with a single physician have improved outcomes.[1, 2] However, most of these studies were performed in the outpatient, rather than the inpatient setting. With over 35 million patients admitted to hospitals in 2013, along with the significant increase in hospital discontinuity over recent years, the impact of inpatient continuity of care on quality outcomes and patient satisfaction is becoming increasingly relevant.[3, 4]

Service handoffs, when a physician hands over treatment responsibility for a panel of patients and is not expected to return, are a type of handoff that contributes to inpatient discontinuity. In particular, service handoffs between hospitalists are an especially common and inherently risky type of transition, as there is a severing of an established relationship during a patient's hospitalization. Unfortunately, due to the lack of evidence on the effects of service handoffs, current guidelines are limited in their recommendations.[5] Whereas several recent studies have begun to explore the effects of these handoffs, no prior study has examined this issue from a patient's perspective.[6, 7, 8]

Patients are uniquely positioned to inform us about their experiences in care transitions. Furthermore, with patient satisfaction now affecting Medicare reimbursement rates, patient experiences while in the hospital are becoming even more significant.[9] Despite this emphasis on more patient-centered care, no study has explored the hospitalized patient's experience with hospitalist service handoffs. Our goal was to qualitatively assess hospitalized patients' experiences with transitions between hospitalists in order to develop a conceptual model to inform future work on improving inpatient transitions of care.

METHODS

Sampling and Recruitment

We conducted bedside interviews of hospitalized patients at an urban academic medical center from October 2014 through December 2014. The hospitalist service consists of a physician and an advanced nurse practitioner (ANP) who divide a panel of general medicine and subspecialty patients, the latter often comanaged with hepatology, oncology, and nephrology subspecialists. We performed a purposive selection of patients who could potentially comment on their experience with a hospitalist service transition using the following method: 48 hours after a service handoff (ie, after an outgoing physician completed 1 week on service and transferred the care of the patient to a new oncoming hospitalist), oncoming hospitalists were approached and asked if any patient on their service had experienced a service handoff and still remained in the hospital. A 48-hour window was chosen to give patients time to familiarize themselves with their new hospitalist, allowing them to comment meaningfully on the handoff. Patients who were managed by the ANP, who were non-English speaking, or who were deemed to have an altered mental status based on the clinical suspicion of the interviewing physician (C.M.W.) were excluded from participation. Following each weekly service transition, a list of patients who met the above criteria was collected from 4 nonteaching hospitalist services; these patients were then approached by the primary investigator (C.M.W.) and asked whether they would be willing to participate. All patients were general medicine patients, and no exclusions were made based on physical location within the hospital. Those who agreed provided signed written consent prior to participation to allow access to their electronic health records (EHRs) by study personnel.

Data Collection

Patients were administered a 9-question, semistructured interview developed to elicit their perspective on the transition between hospitalists, informed by expert opinion and existing literature.[10, 11] No formal changes were made to the interview guide during the study period, and all patients were asked the same questions. Outcomes from interim analysis guided further probing in subsequent interviews so as to increase the depth of patient responses (eg, “Can you explain your response in greater depth?”). Prior to the interview, patients were read a description of a hospitalist and were reminded which hospitalists had cared for them during their stay (see Supporting Information, Appendix 1, in the online version of this article). If family members or a caregiver were present at the time of interview, they were asked not to comment. No repeat interviews were carried out.

All interviews were performed privately in single-occupancy rooms, digitally recorded using an iPad (Apple, Cupertino, CA), and professionally transcribed verbatim (Rev, San Francisco, CA). All analysis was performed using MAXQDA software (VERBI Software GmbH, Berlin, Germany). We obtained demographic information about each patient through chart review.

Data Analysis

We used grounded theory with an inductive approach and no a priori hypothesis.[12] The constant comparative method was used to generate emerging and reoccurring themes.[13] Units of analysis were sentences and phrases. Our research team consisted of 4 academic hospitalists: 2 with backgrounds in clinical medicine, medical education, and qualitative analysis (J.M.F., V.M.A.), 1 practicing clinician (C.M.W.), and 1 with a background in health economics (D.O.M.). Interim analysis was performed on a weekly basis (C.M.W.), during which time a coding template was created and refined through an iterative process (C.M.W., J.M.F.). All disagreements in coded themes were resolved through group discussion until full consensus was reached. Each week, responses were assessed for thematic saturation.[14] Interviews were continued if new themes arose during this analysis. Data collection was ended once we ceased to extract new topics from participants. A summary of all themes was then presented to a group of 10 patients who met the same inclusion criteria for respondent validation and member checking. All reporting followed the Standards for Reporting Qualitative Research, with additional guidance derived from the Consolidated Criteria for Reporting Qualitative Research.[15, 16] The University of Chicago Institutional Review Board approved this protocol.
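
The saturation check described above, stopping data collection once new interviews stop producing new themes, can be made concrete with a simple tally of first appearances. The sketch below uses hypothetical theme codes and weekly batches; it illustrates the stopping logic only, not the authors' MAXQDA workflow.

```python
# Track how many previously unseen theme codes each weekly interview batch
# contributes; a batch with nothing new suggests thematic saturation.
weekly_batches = [
    {"communication", "transparency"},      # hypothetical codes from week 1
    {"communication", "bedside_manner"},    # week 2
    {"indifference", "transparency"},       # week 3
    {"communication", "indifference"},      # week 4: no new codes
]

seen: set = set()
for week, codes in enumerate(weekly_batches, start=1):
    new = codes - seen
    seen |= codes
    print(f"week {week}: {len(new)} new theme(s): {sorted(new)}")
    if not new:
        print("No new themes this week; consider data collection complete.")
        break
```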

RESULTS

In total, 43 eligible patients were recruited, and 40 (93%) agreed to participate. The largest share of interviewed patients (39%) were between 51 and 65 years old; overall, patients had a mean age of 54.5 (SD 15) years, were predominantly female (65%) and African American (58%), had a median length of stay at the time of interview of 6.5 days (interquartile range [IQR]: 4-8), and had a median of 2.0 (IQR: 1-3) hospitalists oversee their care at the time of interview (Table 1). Interviews lasted from 10 minutes, 25 seconds to 25 minutes, 48 seconds, with a mean length of 15 minutes, 32 seconds.

Table 1. Respondent Characteristics
NOTE: Abbreviations: IQR, interquartile range; LOS, length of stay; SD, standard deviation.

Response rate, n (%) 40/43 (93)
Age, mean (SD) 54.5 (15)
Sex, n (%)
Female 26 (65)
Male 14 (35)
Race, n (%)
African American 23 (58)
White 16 (40)
Hispanic 1 (2)
Median LOS at time of interview, d (IQR) 6.5 (4-8)
Median no. of hospitalists at time of interview, n (IQR) 2.0 (1-3)

We identified 6 major themes on patient perceptions of hospitalist service handoffs including (1) physician‐patient communication, (2) transparency in the hospitalist transition process, (3) indifference toward the hospitalist transition, (4) hospitalist‐subspecialist communication, (5) recognition of new opportunities due to a transition, and (6) hospitalists' bedside manner (Table 2).

Table 2. Key Themes and Subthemes on Hospitalist Service Changeovers
Themes, Subthemes, and Representative Quotes
Physician-patient communication Patients dislike redundant communication with oncoming hospitalist. “I mean it's just you always have to explain your situation over and over and over again.” (patient 14)
“When I said it once already, then you're repeating it to another doctor. I feel as if that hospitalist didn't talk to the other hospitalist.” (patient 7)
Poor communication can negatively affect the doctor-patient relationship. “They don't really want to explain things. They don't think I'll understand. I think … yeah, I'm okay. You don't even have to put it in layman's terms. I know medical. I'm in nursing school. I have a year left. But even if you didn't know that, I would still hope you would try to tell me what was going on instead of just doing it in your head, and treating it.” (patient 2)
“I mean it's just you always have to explain your situation over and over and over again. After a while you just stop trusting them.” (patient 20)
Good communication can positively affect the doctor-patient relationship. “Just continue with the communication, the open communication, and always stress to me that I have a voice and just going out of their way to do whatever they can to help me through whatever I'm going through.” (patient 1)
Transparency in transition Patients want to be informed prior to a service changeover. “I think they should be told immediately, even maybe given prior notice, like this may happen, just so you're not surprised when it happens.” (patient 15)
“When the doctor approached me, he let me know that he wasn't going to be here the next day and there was going to be another doctor coming in. That made me feel comfortable.” (patient 9)
Patients desire a more formalized process in the service changeover. “People want things to be consistent. People don't like change. They like routine. So, if he's leaving, you're coming on, I'd like for him to bring you in, introduce you to me, and for you just assure me that I'll take care of you.” (patient 4)
“Just like when you get a new medication, you're given all this information on it. So when you get a new hospitalist, shouldn't I get all the information on them? Like where they went to school, what they look like.” (patient 23)
Patients want clearer definition of the roles the physicians will play in their care. “The first time I was hospitalized for the first time I had all these different doctors coming in, and I had the residency, and the specialists, and the department, and I don't know who's who. What I asked them to do is when they come in the room, which they did, but introduce it a little more for me. Write it down like these are the special team and these are the doctors because even though they come in and give me their name, I have no idea what they're doing.” (patient 5)
“Someone should explain the setup and who people are. Someone would say, ‘Okay when you're in a hospital this is your [doctor's] role.’ Like they should have booklets and everything.” (patient 19)
Indifference toward transition Many patients have trust in service changeovers. “[S]o as long as everybody's on board and communicates well and efficiently, I don't have a problem with it.” (patient 6)
“To me, it really wasn't no preference, as long as I was getting the care that I needed.” (patient 21)
“It's not a concern as long as they're on the same page.” (patient 17)
Hospitalist-specialist communication Patients are concerned about communication between their hospitalist and their subspecialists. “The more cooks you get in the kitchen, the more things get to get lost, so I'm always concerned that they're not sharing the same information, especially when you're getting asked the same questions that you might have just answered the last hour ago.” (patient 9)
“I don't know if the hospitalist are talking to them [subspecialist]. They haven't got time.” (patient 35)
Patients place trust in the communication between hospitalist and subspecialist. “I think among the teams themselves, which is my pain doctor, Dr. K's group, the oncology group itself, they switch off and trade with each other and they all speak the same language so that works out good.” (patient 3)
Lack of interprofessional communication can lead to patient concern. “I was afraid that one was going to drop the ball on something and not pass something on, or you know.” (patient 11)
“I had numerous doctors who all seemed to not communicate with each other at all or did so by email or whatever. They didn't just sit down together and say we feel this way and we feel that way. I didn't like that at all.” (patient 10)
New opportunities due to transition Patients see new doctor as opportunity for medical reevaluation. “I see it as two heads are better than one, three heads are better than one, four heads are better than one. When people put their heads together to work towards a common goal, especially when they're, you know, people working their craft, it can't be bad.” (patient 9)
“I finally got my ears looked at, because I've asked to have my ears looked at since Monday, and the new doc is trying to make an effort to look at them.” (patient 39)
Patients see service changeover as an opportunity to form a better personal relationship. “Having a new hospitalist it gives you opportunity for a new beginning.” (patient 11)
Bedside manner Good bedside manner can assist in a service changeover. “Some of them are all business-like but some of them are, ‘Well how do you feel today? Hi, how are you?’ So this made a little difference. You feel more comfortable. You're going to be more comfortable with them. Their bedside manner helps.” (patient 16)
“It's just like when a doctor sits down and talks to you, they just seem more relaxed and more .... I know they're very busy and they have lots of things to do and other patients to see, but while they're in there with you, you know, you don't get too much time with them. So bedside manner is just so important.” (patient 24)
Poor bedside manner can be detrimental in transition. “[B]ecause they be so busy they claim they don't have time just to sit and talk to a patient, and make sure they all right.” (patient 17)

Physician‐Patient Communication

Communication between the physician and the patient was an important element in patients' assessment of their experience. Patients tended to divide physician-patient communication into 2 categories: good communication, which consisted of “open communication” (patient 1) and patient engagement, and bad communication, described as physicians not sharing information or not taking the time to explain the course of care “in words that I'll understand” (patient 2). Patients also described dissatisfaction with redundant communication between multiple hospitalists and the frustration of often having to describe their clinical course to multiple providers.

Transparency in Transition

The desire to have greater transparency in the handoff process was another common theme. This was likely due to the fact that 34/40 (85%) of surveyed patients were unaware that a service changeover had ever taken place. This lack of transparency was viewed as having further downstream consequences; as one patient stated, “there should be a level of transparency, and when it's not, then there is always trust issues” (patient 1). Upon further questioning as to how to make the process more transparent, many patients recommended a formalized, face-to-face introduction involving the patient and both hospitalists, in which the outgoing hospitalist would “bring you [oncoming hospitalist] in, and introduce you to me” (patient 4).

Patients often stated that given the large spectrum of physicians they might encounter during their stay (ie, medical student, resident, hospitalist attending, subspecialty fellow, subspecialist attending), clearer definitions of physicians' roles are needed.

Hospitalist‐Specialist Communication

Concern about the communication between their hospitalist and subspecialists was another predominant theme. Conflicting and unclear directions from multiple services were especially frustrating; as one patient stated, “One guy took me off this pill, the other guy wants me on that pill, I'm like okay, I can't do both” (patient 8). Furthermore, a subset of patients referred to their subspecialist as their primary care provider and preferred their subspecialist, rather than their hospitalist, for guidance during their hospital course. This often occurred in cases where the patient had an established relationship with the subspecialist prior to hospitalization.

New Opportunities Due to Transition

Patients expressed positive feelings toward service handoffs by viewing the transition as an opportunity for medical reevaluation by a new physician. Patients told of instances in which a specific complaint was not addressed by the first physician but was addressed by the second (oncoming) physician. A commonly expressed idea was that the oncoming physician “might know something that he [Dr. B] didn't know, and since Dr. B was only here for a week, why not give him [oncoming hospitalist] a chance” (patient 10). Patients also described the transition as an opportunity to form, and possibly improve, therapeutic alliances with a new hospitalist.

Bedside Manner

Bedside manner was another commonly mentioned thematic element. Patients were often quick to forget prior problems or issues they may have suffered because of the transition if the oncoming physician was perceived to have a good bedside manner, often described as someone who formally introduced themselves, was considered relaxed, and would take the time to sit and talk with the patient. As one patient put it, “[S]he sat down and got to know me, and asked me what I wanted to do” (patient 12). Conversely, patients described instances in which a perceived bad bedside manner led to a poor relationship between the physician and the patient, in which “trust and comfort” (patient 11) were sacrificed.

Indifference Toward Transition

In contrast to some of the previous findings, which called for improved interactions between physicians and patients, we also discovered a theme of indifference toward the transition. Several patients expressed trust in the medical system and were content with the service changeover as long as they felt that their medical needs were being met. Patients also tended to express a level of acceptance of the transition, believing that this was “the price we pay for being here [in the hospital]” (patient 7).

Conceptual Model

Following the collection and analysis of all patient responses, the identified themes were used to construct a model of the ideal patient-centered service handoff. The ideal transition features open lines of communication among all involved parties, is facilitated by multiple modalities, such as the EHR and nursing staff, and recognizes the patient as the primary stakeholder (Figure 1).

Figure 1. Conceptual model of the ideal patient experience with a service handoff. Abbreviations: EHR, electronic health record.

DISCUSSION

To our knowledge, this is the first qualitative investigation of the hospitalized patient's experience with service handoffs between hospitalists. The patient perspective adds a personal and first‐hand description of how fragmented care may impact the hospitalized patient experience.

Of the 6 themes, communication was found to be the most pertinent to our respondents. Because much of patient care is an inherently communicative activity, it is not surprising that patients, as well as patient safety experts, have focused on communication as an area in need of improvement in transition processes.[17, 18] Moreover, multiple medical societies have directly called for improvements within this area, and have specifically recommended clear and direct communication of treatment plans between the patient and physician, timely exchange of information, and knowledge of who is primarily in charge of the patient's care.[11] Not surprisingly, each of these recommendations was echoed by our participants. This theme is especially important given that good physician-patient communication has been noted to be a major goal in achieving patient-centered care and has been positively correlated with medication adherence, patient satisfaction, and physical health outcomes.[19, 20, 21, 22, 23]

Although not a substitute for face-to-face interactions, other communication interventions between physicians and patients should be considered. For example, “get to know me” posters placed in patient rooms have been shown to encourage communication between patients and physicians.[24] Additionally, physician face cards have been used to improve patients' ability to identify and clarify physicians' roles in patient care.[25] As one patient put it, “If they got a new one [hospitalist], just as if I got a new medication, print out information on them, like where they went to med school, and stuff” (patient 13). These modalities may represent highly implementable, cost-effective adjuncts to current handoff methods that may improve lines of communication between physicians and patients.

In addition to the importance placed on physician-patient communication, interprofessional communication between hospitalists and subspecialists was also highly regarded. Studies have shown that practice-based interprofessional communication, such as daily interdisciplinary rounds and the use of external facilitators, can improve healthcare processes and outcomes.[26] However, these interventions must be weighed against the many conflicting factors that both hospitalists and subspecialists face on a daily basis, including high patient volumes, time limitations, patient availability, and scheduling conflicts.[27] Nonetheless, the strong emphasis patients placed on this line of communication highlights it as a domain in which hospitalists and subspecialists can work together for systematic improvement.

Patients also recognized the complexity of the transfer process between hospitalists and called for improved transparency. For example, patients repeatedly requested to be informed prior to any change in their hospitalist, a request consistent with current guidelines.[11] There was also a strong desire for a more formalized process of transitioning between hospitalists, often described as a handoff procedure occurring at the patient's bedside. This desire is mirrored in data showing that patients prefer to interact with their care team at the bedside and report higher satisfaction when they are involved with their care.[28, 29] Unfortunately, this desire for more direct interaction with physicians runs counter to the current paradigm of patient care, in which most activities on rounds do not take place at the bedside.[30]

In contrast to patients' calls for improved transparency, an equally large portion of patients expressed relative indifference to the transition. Although on the surface this may seem salutary, some studies suggest that a lack of patient activation and engagement may adversely affect patients' overall care.[31] Furthermore, others have shown evidence of better healthcare experiences, improved health outcomes, and lower costs in patients who are more active in their care.[30, 31] Altogether, this suggests that despite some patients' indifference, physicians should continue to engage patients in their hospital care.[32]

Although prevailing sentiments among patient safety advocates are that patient handoffs are inherently dangerous and place patients at increased risk of adverse events, patients did not always share this concern. A frequently occurring theme was that the transition is an opportunity for medical reevaluation or the establishment of a new, possibly improved therapeutic alliance. Recognizing this viewpoint offers oncoming hospitalists the opportunity to focus on issues that the patient may have felt were not being properly addressed with their prior physician.

Finally, although our conceptual model is not a strict guideline, we believe that future studies should consider this framework when constructing interventions to improve service-level handoffs. Several interventions already exist. For instance, educational interventions, such as patient-centered interviewing, have been shown to improve patient satisfaction and medication compliance, lead to fewer lawsuits, and improve health outcomes.[33, 34, 35] Additional methods of keeping the patient informed include physician face cards and performance of the handoff at the patient's bedside. Although well known in the nursing literature, the practice of physicians performing handoffs at the patient's bedside is a particularly patient-centric process.[36] This type of intervention could transform the handoff from its current state as a 2-way street, in which information is passed between 2 hospitalists, into a 3-way stop, in which both hospitalists and the patient are able to communicate at this critical junction of care.

Although our study does offer new insight into the effects of discontinuous care, its exploratory nature has limitations. First, being performed at a single academic center limits the generalizability of our findings. Second, the perspectives of those who did not wish to participate, of patients' family members or caregivers, and of those who were not queried could differ substantially from those we interviewed. Additionally, we did not collect data on patients' diagnoses or reasons for admission, limiting our ability to assess whether certain diagnoses or subpopulations predispose patients to experiencing a service handoff. Third, although our study was restricted to English-speaking patients, we must consider that non-English speakers would likely face even greater communication barriers than those who took part in our study. Finally, our interviews and data analysis were conducted by hospitalists, which could have subconsciously influenced the interview process and the interpretation of patient responses. However, we tried to mitigate these issues by having the same individual interview all participants, by using an interview guide to ensure cross-cohort consistency, by using open-ended questions, and by attempting to give patients every opportunity to express themselves.

CONCLUSIONS

From the patient's perspective, inpatient service handoffs are often opaque experiences highlighted by poor communication between physicians and patients. Although deficits in communication and transparency acted as barriers to a patient-centered handoff, physicians should recognize that service handoffs may also represent opportunities for improvement and should focus on these domains when they start on a new service.

Disclosures

All funding for this project was provided by the Section of Hospital Medicine at The University of Chicago Medical Center. The data from this article were presented at the Society of Hospital Medicine Annual Conference, National Harbor, March 31, 2015, and at the Society of General Internal Medicine National Meeting in Toronto, Canada, April 23, 2015. The authors report that no conflicts of interest, financial or otherwise, exist.

References
  1. Sharma G, Fletcher KE, Zhang D, Kuo Y-F, Freeman JL, Goodwin JS. Continuity of outpatient and inpatient care by primary care physicians for hospitalized older adults. JAMA. 2009;301(16):1671-1680.
  2. Nyweide DJ, Anthony DL, Bynum JPW, et al. Continuity of care and the risk of preventable hospitalization in older adults. JAMA Intern Med. 2013;173(20):1879-1885.
  3. Agency for Healthcare Research and Quality. HCUPnet: a tool for identifying, tracking, and analyzing national hospital statistics. Available at: http://hcupnet.ahrq.gov/HCUPnet.jsp.
  4. Arora VM, Manjarrez E, Dressler DD, Basaviah P, Halasyamani L, Kripalani S. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433-440.
  5. Epstein K, Juarez E, Epstein A, Loya K, Singer A. The impact of fragmentation of hospitalist care on length of stay. J Hosp Med. 2010;5(6):335-338.
  6. Turner J, Hansen L, Hinami K, et al. The impact of hospitalist discontinuity on hospital cost, readmissions, and patient satisfaction. J Gen Intern Med. 2014;29(7):1004-1008.
  7. O'Leary KJ, Turner J, Christensen N, et al. The effect of hospitalist discontinuity on adverse events. J Hosp Med. 2015;10(3):147-151.
  8. Agency for Healthcare Research and Quality. HCAHPS fact sheet. CAHPS Hospital Survey, August 2013. Available at: http://www.hcahpsonline.org/files/August_2013_HCAHPS_Fact_Sheet3.pdf. Accessed February 2, 2015.
  9. Behara R, Wears RL, Perry SJ, et al. A conceptual framework for studying the safety of transitions in emergency care. In: Henriksen K, Battles JB, Marks ES, eds. Advances in Patient Safety: From Research to Implementation. Vol 2. Concepts and Methodology. Rockville, MD: Agency for Healthcare Research and Quality; 2005:309-321. Available at: http://www.ncbi.nlm.nih.gov/books/NBK20522. Accessed January 15, 2015.
  10. Snow V, Beck D, Budnitz T, et al. Transitions of care consensus policy statement: American College of Physicians-Society of General Internal Medicine-Society of Hospital Medicine-American Geriatrics Society-American College of Emergency Physicians-Society of Academic Emergency Medicine. J Gen Intern Med. 2009;24(8):971-976.
  11. Watling CJ, Lingard L. Grounded theory in medical education research: AMEE guide no. 70. Med Teach. 2012;34(10):850-861.
  12. Boeije H. A purposeful approach to the constant comparative method in the analysis of qualitative interviews. Qual Quant. 2002;36(4):391-409.
  13. Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147-149.
  14. O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245-1251.
  15. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349-357.
  16. Kripalani S, Jackson AT, Schnipper JL, Coleman EA. Promoting effective transitions of care at hospital discharge: a review of key issues for hospitalists. J Hosp Med. 2007;2(5):314-323.
  17. The Joint Commission. Hot Topics in Healthcare, Issue 2. Transitions of care: the need for collaboration across entire care continuum. Available at: http://www.jointcommission.org/assets/1/6/TOC_Hot_Topics.pdf. Accessed April 9, 2015.
  18. Zolnierek KBH, Dimatteo MR. Physician communication and patient adherence to treatment: a meta-analysis. Med Care. 2009;47(8):826-834.
  19. Desai NR, Choudhry NK. Impediments to adherence to post myocardial infarction medications. Curr Cardiol Rep. 2013;15(1):322.
  20. Zandbelt LC, Smets EMA, Oort FJ, Godfried MH, Haes HCJM. Medical specialists' patient-centered communication and patient-reported outcomes. Med Care. 2007;45(4):330-339.
  21. Clever SL, Jin L, Levinson W, Meltzer DO. Does doctor-patient communication affect patient satisfaction with hospital care? Results of an analysis with a novel instrumental variable. Health Serv Res. 2008;43(5 pt 1):1505-1519.
  22. Michie S, Miles J, Weinman J. Patient-centredness in chronic illness: what is it and does it matter? Patient Educ Couns. 2003;51(3):197-206.
  23. Billings JA, Keeley A, Bauman J, et al. Merging cultures: palliative care specialists in the medical intensive care unit. Crit Care Med. 2006;34(11 suppl):S388-S393.
  24. Arora VM, Schaninger C, D'Arcy M, et al. Improving inpatients' identification of their doctors: use of FACE cards. Jt Comm J Qual Patient Saf. 2009;35(12):613-619.
  25. Zwarenstein M, Goldman J, Reeves S. Interprofessional collaboration: effects of practice-based interventions on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2009;(3):CD000072.
  26. Gonzalo JD, Heist BS, Duffy BL, et al. Identifying and overcoming the barriers to bedside rounds: a multicenter qualitative study. Acad Med. 2014;89(2):326-334.
  27. Lehmann LS, Brancati FL, Chen MC, Roter D, Dobs AS. The effect of bedside case presentations on patients' perceptions of their medical care. N Engl J Med. 1997;336(16):1150-1155.
  28. Gonzalo JD, Wolpaw DR, Lehman E, Chuang CH. Patient-centered interprofessional collaborative care: factors associated with bedside interprofessional rounds. J Gen Intern Med. 2014;29(7):1040-1047.
  29. Stickrath C, Noble M, Prochazka A, et al. Attending rounds in the current era: what is and is not happening. JAMA Intern Med. 2013;173(12):1084-1089.
  30. Hibbard JH, Greene J. What the evidence shows about patient activation: better health outcomes and care experiences; fewer data on costs. Health Aff (Millwood). 2013;32(2):207-214.
  31. Greene J, Hibbard JH, Sacks R, Overton V, Parrotta CD. When patient activation levels change, health outcomes and costs change, too. Health Aff (Millwood). 2015;34(3):431-437.
  32. Smith RC, Marshall-Dorsey AA, Osborn GG, et al. Evidence-based guidelines for teaching patient-centered interviewing. Patient Educ Couns. 2000;39(1):27-36.
  33. Hall JA, Roter DL, Katz NR. Meta-analysis of correlates of provider behavior in medical encounters. Med Care. 1988;26(7):657-675.
  34. Huycke LI, Huycke MM. Characteristics of potential plaintiffs in malpractice litigation. Ann Intern Med. 1994;120(9):792-798.
  35. Gregory S, Tan D, Tilrico M, Edwardson N, Gamm L. Bedside shift reports: what does the evidence say? J Nurs Adm. 2014;44(10):541-545.
Journal of Hospital Medicine 11(10):675-681

Patients want clearer definition of the roles the physicians will play in their care. The first time I was hospitalized for the first time I had all these different doctors coming in, and I had the residency, and the specialists, and the department, and I don't know who's who. What I asked them to do is when they come in the room, which they did, but introduce it a little more for me. Write it down like these are the special team and these are the doctors because even though they come in and give me their name, I have no idea what they're doing. (patient 5)
Someone should explain the setup and who people are. Someone would say, Okay when you're in a hospital this is your [doctor's] role. Like they should have booklets and everything. (patient 19)
Indifference toward transition Many patients have trust in service changeovers. [S]o as long as everybody's on board and communicates well and efficiently, I don't have a problem with it. (patient 6)
To me, it really wasn't no preference, as long as I was getting the care that I needed. (patient 21)
It's not a concern as long as they're on the same page. (patient 17)
Hospitalist‐specialist communication Patients are concerned about communication between their hospitalist and their subspecialists. The more cooks you get in the kitchen, the more things get to get lost, so I'm always concerned that they're not sharing the same information, especially when you're getting asked the same questions that you might have just answered the last hour ago. (patient 9)
I don't know if the hospitalist are talking to them [subspecialist]. They haven't got time. (patient 35)
Patients place trust in the communication between hospitalist and subspecialist. I think among the teams themselveswhich is my pain doctor, Dr. K's group, the oncology group itself, they switch off and trade with each other and they all speak the same language so that works out good. (patient 3)
Lack of interprofessional communication can lead to patient concern. I was afraid that one was going to drop the ball on something and not pass something on, or you know. (patient 11)
I had numerous doctors who all seemed to not communicate with each other at all or did so by email or whatever. They didn't just sit down together and say we feel this way and we feel that way. I didn't like that at all. (patient 10)
New opportunities due to transition Patients see new doctor as opportunity for medical reevaluation. I see it as two heads are better than one, three heads are better than one, four heads are better than one. When people put their heads together to work towards a common goal, especially when they're, you know, people working their craft, it can't be bad. (patient 9)
I finally got my ears looked atbecause I've asked to have my ears looked at since Mondayand the new doc is trying to make an effort to look at them. (patient 39)
Patients see service changeover as an opportunity to form a better personal relationship. Having a new hospitalist it gives you opportunity for a new beginning. (patient 11)
Bedside manner Good bedside manner can assist in a service changeover. Some of them are all business‐like but some of them are, Well how do you feel today? Hi, how are you? So this made a little difference. You feel more comfortable. You're going to be more comfortable with them. Their bedside manner helps. (patient 16)
It's just like when a doctor sits down and talks to you, they just seem more relaxed and more .... I know they're very busy and they have lots of things to do and other patients to see, but while they're in there with you, you know, you don't get too much time with them. So bedside manner is just so important. (patient 24)
Poor bedside manner can be detrimental in transition. [B]ecause they be so busy they claim they don't have time just to sit and talk to a patient, and make sure they all right. (patient 17)

Physician‐Patient Communication

Communication between the physician and the patient was an important element in patients' assessment of their experience. Patient's tended to divide physician‐patient communication into 2 categories: good communication, which consisted of open communication (patient 1) and patient engagement, and bad communication, which was described as physicians not sharing information or taking the time to explain the course of care in words that I'll understand (patient 2). Patients also described dissatisfaction with redundant communication between multiple hospitalists and the frustration of often having to describe their clinical course to multiple providers.

Transparency in Communication

The desire to have greater transparency in the handoff process was another common theme. This was likely due to the fact that 34/40 (85%) of surveyed patients were unaware that a service changeover had ever taken place. This lack of transparency was viewed to have further downstream consequences as patients stated that there should be a level of transparency, and when it's not, then there is always trust issues (patient 1). Upon further questioning as to how to make the process more transparent, many patients recommended a formalized, face‐to‐face introduction involving the patient and both hospitalists, in which the outgoing hospitalist would, bring you [oncoming hospitalist] in, and introduce you to me (patient 4).

Patients often stated that given the large spectrum of physicians they might encounter during their stay (ie, medical student, resident, hospitalist attending, subspecialty fellow, subspecialist attending), clearer definitions of physicians' roles are needed.

Hospitalist‐Specialist Communication

Concern about the communication between their hospitalist and subspecialist was another predominant theme. Conflicting and unclear directions from multiple services were especially frustrating, as a patient stated, One guy took me off this pill, the other guy wants me on that pill, I'm like okay, I can't do both (patient 8). Furthermore, a subset of patients referenced their subspecialist as their primary care provider and preferred their subspecialist for guidance in their hospital course, rather than their hospitalist. This often appeared in cases where the patient had an established relationship with the subspecialist prior to their hospitalization.

New Opportunities Due to Transition

Patients expressed positive feelings toward service handoffs by viewing the transition as an opportunity for medical reevaluation by a new physician. Patients told of instances in which a specific complaint was not being addressed by the first physician, but would be addressed by the second (oncoming) physician. A commonly expressed idea was that the oncoming physician might know something that he [Dr. B] didn't know, and since Dr. B was only here for a week, why not give him [oncoming hospitalist] a chance (patient 10). Patients would also describe the transition as an opportunity to form, and possibly improve, therapeutic alliances with a new hospitalist.

Bedside Manner

Bedside manner was another commonly mentioned thematic element. Patients were often quick to forget prior problems or issues that they may have suffered because of the transition if the oncoming physician was perceived to have a good bedside manner, often described as someone who formally introduced themselves, was considered relaxed, and would take the time to sit and talk with the patient. As a patient put it, [S]he sat down and got to know meand asked me what I wanted to do (patient 12). Conversely, patients described instances in which a perceived bad bedside manner led to a poor relationship between the physician and the patient, in which trust and comfort (patient 11) were sacrificed.

Indifference Toward Transition

In contrast to some of the previous findings, which called for improved interactions between physicians and patients, we also discovered a theme of indifference toward the transition. Several patients stated feelings of trust with the medical system, and were content with the service changeover as long as they felt that their medical needs were being met. Patients also tended to express a level of acceptance with the transition, and tended to believe that this was the price we pay for being here [in the hospital] (patient 7).

Conceptual Model

Following the collection and analysis of all patient responses, all themes were utilized to construct the ideal patient‐centered service handoff. The ideal transition describes open lines of communication between all involved parties, is facilitated by multiple modalities, such as the EHRs and nursing staff, and recognizes the patient as the primary stakeholder (Figure 1).

Figure 1
Conceptual model of the ideal patient experience with a service handoff. Abbreviations: EHR, electronic health record.

DISCUSSION

To our knowledge, this is the first qualitative investigation of the hospitalized patient's experience with service handoffs between hospitalists. The patient perspective adds a personal and first‐hand description of how fragmented care may impact the hospitalized patient experience.

Of the 6 themes, communication was found to be the most pertinent to our respondents. Because much of patient care is an inherently communicative activity, it is not surprising that patients, as well as patient safety experts, have focused on communication as an area in need of improvement in transition processes.[17, 18] Moreover, multiple medical societies have directly called for improvements within this area, and have specifically recommended clear and direct communication of treatment plans between the patient and physician, timely exchange of information, and knowledge of who is primarily in charge of the patients care.[11] Not surprisingly, each of these recommendations appears to be echoed by our participants. This theme is especially important given that good physician‐patient communication has been noted to be a major goal in achieving patient‐centered care, and has been positively correlated to medication adherence, patient satisfaction, and physical health outcomes.[19, 20, 21, 22, 23]

Although not a substitute for face‐to‐face interactions, other communication interventions between physicians and patients should be considered. For example, get to know me posters placed in patient rooms have been shown to encourage communication between patients and physicians.[24] Additionally, physician face cards have been used to improve patients' abilities to identify and clarify physicians' roles in patient care.[25] As a patient put it, If they got a new one [hospitalist], just as if I got a new medicationprint out information on themlike where they went to med school, and stuff(patient 13). These modalities may represent highly implementable, cost‐effective adjuncts to current handoff methods that may improve lines of communication between physicians and patients.

In addition to the importance placed on physician‐patient communication, interprofessional communication between hospitalists and subspecialists was also highly regarded. Studies have shown that practice‐based interprofessional communication, such as daily interdisciplinary rounds and the use of external facilitators, can improve healthcare processes and outcomes.[26] However, these interventions must be weighed with the many conflicting factors that both hospitalists and subspecialists face on daily basis, including high patient volumes, time limitations, patient availability, and scheduling conflicts.[27] None the less, the strong emphasis patients placed on this line of communication highlights this domain as an area in which hospitalist and subspecialist can work together for systematic improvement.

Patients also recognized the complexity of the transfer process between hospitalists and called for improved transparency. For example, patients repeatedly requested to be informed prior to any changes in their hospitalists, a request that remains consistent with current guidelines.[11] There also existed a strong desire for a more formalized process of transitioning between hospitalists, which often described a handoff procedure that would occur at the patient's bedside. This desire seems to be mirrored in the data that show that patients prefer to interact with their care team at the bedside and report higher satisfaction when they are involved with their care.[28, 29] Unfortunately, this desire for more direct interaction with physicians runs counter to the current paradigm of patient care, where most activities on rounds do not take place at the bedside.[30]

In contrast to patient's calls for improved transparency, an equally large portion of patients expressed relative indifference to the transition. Whereas on the surface this may seem salutary, some studies suggest that a lack of patient activation and engagement may have adverse effects toward patients' overall care.[31] Furthermore, others have shown evidence of better healthcare experiences, improved health outcomes, and lower costs in patients who are more active in their care.[30, 31] Altogether, this suggests that despite some patients' indifference, physicians should continue to engage patients in their hospital care.[32]

Although prevailing sentiments among patient safety advocates are that patient handoffs are inherently dangerous and place patients at increased risk of adverse events, patients did not always share this concern. A frequently occurring theme was that the transition is an opportunity for medical reevaluation or the establishment of a new, possibly improved therapeutic alliance. Recognizing this viewpoint offers oncoming hospitalists the opportunity to focus on issues that the patient may have felt were not being properly addressed with their prior physician.

Finally, although our conceptual model is not a strict guideline, we believe that any future studies should consider this framework when constructing interventions to improve service‐level handoffs. Several interventions already exist. For instance, educational interventions, such as patient‐centered interviewing, have been shown to improve patient satisfaction, compliance with medications, lead to fewer lawsuits, and improve health outcomes.[33, 34, 35] Additional methods of keeping the patient more informed include physician face sheets and performance of the handoff at the patient's bedside. Although well known in nursing literature, the idea of physicians performing handoffs at the patient's bedside is a particularly patient‐centric process.[36] This type of intervention may have the ability to transform the handoff from the current state of a 2‐way street, in which information is passed between 2 hospitalists, to a 3‐way stop, in which both hospitalists and the patient are able to communicate at this critical junction of care.

Although our study does offer new insight into the effects of discontinuous care, its exploratory nature does have limitations. First, being performed at a single academic center limits our ability to generalize our findings. Second, perspectives of those who did not wish to participate, patients' family members or caregivers, and those who were not queried, could highly differ from those we interviewed. Additionally, we did not collect data on patients' diagnoses or reason for admission, thus limiting our ability to assess if certain diagnosis or subpopulations predispose patients to experiencing a service handoff. Third, although our study was restricted to English‐speaking patients only, we must consider that non‐English speakers would likely suffer from even greater communication barriers than those who took part in our study. Finally, our interviews and data analysis were conducted by hospitalists, which could have subconsciously influenced the interview process, and the interpretation of patient responses. However, we tried to mitigate these issues by having the same individual interview all participants, by using an interview guide to ensure cross‐cohort consistency, by using open‐ended questions, and by attempting to give patients every opportunity to express themselves.

CONCLUSIONS

From a patients' perspective, inpatient service handoffs are often opaque experiences that are highlighted by poor communication between physicians and patients. Although deficits in communication and transparency acted as barriers to a patient‐centered handoff, physicians should recognize that service handoffs may also represent opportunities for improvement, and should focus on these domains when they start on a new service.

Disclosures

All funding for this project was provided by the Section of Hospital Medicine at The University of Chicago Medical Center. The data from this article were presented at the Society of Hospital Medicine Annual Conference, National Harbor, March 31, 2015, and at the Society of General Internal Medicine National Meeting in Toronto, Canada, April 23, 2015. The authors report that no conflicts of interest, financial or otherwise, exist.

Studies examining the importance of continuity of care have shown that patients who maintain a continuous relationship with a single physician have improved outcomes.[1, 2] However, most of these studies were performed in the outpatient rather than the inpatient setting. With over 35 million patients admitted to hospitals in 2013 and a significant increase in hospital discontinuity in recent years, the impact of inpatient continuity of care on quality outcomes and patient satisfaction is becoming increasingly relevant.[3, 4]

Service handoffs, in which a physician hands over treatment responsibility for a panel of patients and is not expected to return, are one contributor to inpatient discontinuity. Service handoffs between hospitalists are especially common and inherently risky, because they sever an established relationship during a patient's hospitalization. Unfortunately, given the lack of evidence on the effects of service handoffs, current guidelines offer only limited recommendations.[5] Whereas several recent studies have begun to explore the effects of these handoffs, no prior study has examined the issue from the patient's perspective.[6, 7, 8]

Patients are uniquely positioned to inform us about their experiences in care transitions. Furthermore, with patient satisfaction now affecting Medicare reimbursement rates, patients' experiences while in the hospital have become even more significant.[9] Despite this emphasis on patient-centered care, no study has explored the hospitalized patient's experience with hospitalist service handoffs. Our goal was to qualitatively assess hospitalized patients' experiences with transitions between hospitalists and to develop a conceptual model to inform future work on improving inpatient transitions of care.

METHODS

Sampling and Recruitment

We conducted bedside interviews of hospitalized patients at an urban academic medical center from October 2014 through December 2014. The hospitalist service consists of a physician and an advanced nurse practitioner (ANP) who divide a panel of general medicine and subspecialty patients, the latter often comanaged with hepatology, oncology, and nephrology subspecialists. We purposively selected patients who could comment on their experience with a hospitalist service transition as follows: 48 hours after a service handoff (ie, an outgoing physician completing 1 week on service and transferring the care of the patient to an oncoming hospitalist), oncoming hospitalists were asked whether any patient on their service had experienced a service handoff and still remained in the hospital. The 48-hour window was chosen to give patients time to familiarize themselves with their new hospitalist so that they could properly comment on the handoff. Patients who were managed by the ANP, were non-English speaking, or were deemed to have an altered mental status based on the clinical suspicion of the interviewing physician (C.M.W.) were excluded from participation. Following each weekly service transition, a list of patients who met these criteria was compiled from 4 nonteaching hospitalist services, and the primary investigator (C.M.W.) approached each patient and asked whether they would be willing to participate. All patients were general medicine patients, and no exclusions were made based on physical location within the hospital. Those who agreed provided signed written consent prior to participation, which allowed study personnel access to their electronic health records (EHRs).
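The screening procedure above is, in effect, a weekly eligibility filter applied to each service's panel. The following is a minimal sketch of that filter, not anything the authors describe implementing; the record field names (handoff_time, managed_by_anp, and so on) are hypothetical stand-ins for the stated criteria.

```python
from datetime import datetime, timedelta

# Patients are interviewed no sooner than 48 hours after the service handoff.
ELIGIBILITY_WINDOW = timedelta(hours=48)

def is_eligible(patient, now):
    """Apply the study's stated inclusion/exclusion criteria to one patient record."""
    if patient["handoff_time"] is None:                     # must have experienced a service handoff
        return False
    if now - patient["handoff_time"] < ELIGIBILITY_WINDOW:  # at least 48 h must have elapsed
        return False
    if not patient["still_admitted"]:                       # must still remain in the hospital
        return False
    if patient["managed_by_anp"]:                           # ANP-managed patients excluded
        return False
    if not patient["english_speaking"]:                     # non-English speakers excluded
        return False
    if patient["altered_mental_status"]:                    # excluded on clinical suspicion
        return False
    return True

# Example: screen one service's panel after a weekly transition.
panel = [{"id": 1, "handoff_time": datetime(2014, 10, 6, 8, 0), "still_admitted": True,
          "managed_by_anp": False, "english_speaking": True, "altered_mental_status": False}]
print([p["id"] for p in panel if is_eligible(p, datetime(2014, 10, 9, 9, 0))])  # -> [1]
```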

Data Collection

Patients were administered a 9-question, semistructured interview developed to elicit their perspective on the transition between hospitalists; the interview guide was informed by expert opinion and the existing literature.[10, 11] No formal changes were made to the interview guide during the study period, and all patients were asked the same questions. Findings from interim analysis guided probing in subsequent interviews to increase the depth of patient responses (eg, "Can you explain your response in greater depth?"). Prior to the interview, patients were read a description of a hospitalist and were reminded which hospitalists had cared for them during their stay (see Supporting Information, Appendix 1, in the online version of this article). If family members or a caregiver were present at the time of the interview, they were asked not to comment. No repeat interviews were carried out.

All interviews were conducted privately in single-occupancy rooms, digitally recorded using an iPad (Apple, Cupertino, CA), and professionally transcribed verbatim (Rev, San Francisco, CA). All analysis was performed using MAXQDA software (VERBI Software GmbH, Berlin, Germany). We obtained demographic information about each patient through chart review.

Data Analysis

We used grounded theory with an inductive approach and no a priori hypothesis.[12] The constant comparative method was used to generate emerging and recurring themes,[13] with sentences and phrases as the units of analysis. Our research team consisted of 4 academic hospitalists: 2 with backgrounds in clinical medicine, medical education, and qualitative analysis (J.M.F., V.M.A.), 1 practicing clinician (C.M.W.), and 1 health economist (D.O.M.). Interim analysis was performed weekly (C.M.W.), during which a coding template was created and refined through an iterative process (C.M.W., J.M.F.). Disagreements in coded themes were resolved through group discussion until full consensus was reached. Each week, responses were assessed for thematic saturation,[14] and interviews continued as long as new themes arose. Data collection ended once no new topics emerged from participants. A summary of all themes was then presented to a group of 10 patients who met the same inclusion criteria for respondent validation and member checking. Reporting followed the Standards for Reporting Qualitative Research, with additional guidance from the Consolidated Criteria for Reporting Qualitative Research.[15, 16] The University of Chicago Institutional Review Board approved this protocol.
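The stopping rule described above, in which interviewing continues until no new themes emerge, can be pictured as tracking how many previously unseen codes each successive interview contributes. The sketch below is purely illustrative: the theme labels are invented, and the authors used MAXQDA rather than custom code.

```python
def weekly_new_codes(coded_interviews):
    """Yield (interview_index, set_of_new_codes) in chronological order."""
    seen = set()
    for i, codes in enumerate(coded_interviews, start=1):
        new = set(codes) - seen   # codes not applied in any earlier interview
        seen |= set(codes)
        yield i, new

# Hypothetical coded interviews, in the order they were collected.
coded = [
    {"communication", "transparency"},
    {"communication", "indifference"},
    {"bedside_manner"},
    {"communication", "transparency"},  # contributes nothing new: a candidate stopping point
]
for idx, new in weekly_new_codes(coded):
    print(f"interview {idx}: {len(new)} new code(s) {sorted(new)}")
# Data collection stops once successive interviews contribute no new codes.
```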

RESULTS

In total, 43 eligible patients were recruited, and 40 (93%) agreed to participate. Interviewed patients had a mean age of 54.5 (SD 15) years, with the largest group between 51 and 65 years old (39%); they were predominantly female (65%) and African American (58%). At the time of interview, the median length of stay was 6.5 days (interquartile range [IQR]: 4-8), and a median of 2.0 (IQR: 1-3) hospitalists had overseen each patient's care (Table 1). Interviews lasted from 10 minutes, 25 seconds, to 25 minutes, 48 seconds, with a mean of 15 minutes, 32 seconds.

Respondent Characteristics

Characteristic Value
Response rate, n (%) 40/43 (93)
Age, y, mean (SD) 54.5 (15)
Sex, n (%)
  Female 26 (65)
  Male 14 (35)
Race, n (%)
  African American 23 (58)
  White 16 (40)
  Hispanic 1 (2)
Median LOS at time of interview, d (IQR) 6.5 (4-8)
Median no. of hospitalists at time of interview, n (IQR) 2.0 (1-3)

NOTE: Abbreviations: IQR, interquartile range; LOS, length of stay; SD, standard deviation.
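For readers less familiar with the summary measures in Table 1, mean (SD) and median (IQR) values can be computed with the Python standard library as below. The numbers here are illustrative placeholders, not the study's raw data.

```python
import statistics

ages = [38, 47, 51, 54, 58, 61, 65, 72]      # hypothetical ages, years
los_days = [3, 4, 5, 6, 7, 8, 9, 12]         # hypothetical lengths of stay, days

mean_age = statistics.mean(ages)
sd_age = statistics.stdev(ages)              # sample standard deviation
median_los = statistics.median(los_days)
q1, q2, q3 = statistics.quantiles(los_days, n=4)  # quartile cut points; IQR is q1-q3

print(f"Age, mean (SD): {mean_age:.1f} ({sd_age:.1f})")
print(f"Median LOS, d (IQR): {median_los} ({q1}-{q3})")
```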

We identified 6 major themes in patients' perceptions of hospitalist service handoffs: (1) physician-patient communication, (2) transparency in the hospitalist transition process, (3) indifference toward the hospitalist transition, (4) hospitalist-subspecialist communication, (5) recognition of new opportunities due to a transition, and (6) hospitalists' bedside manner (Table 2).

Key Themes and Subthemes on Hospitalist Service Changeovers

Theme: Physician-patient communication
  Subtheme: Patients dislike redundant communication with the oncoming hospitalist.
    "I mean it's just you always have to explain your situation over and over and over again." (patient 14)
    "When I said it once already, then you're repeating it to another doctor. I feel as if that hospitalist didn't talk to the other hospitalist." (patient 7)
  Subtheme: Poor communication can negatively affect the doctor-patient relationship.
    "They don't really want to explain things. They don't think I'll understand. I think ... yeah, I'm okay. You don't even have to put it in layman's terms. I know medical. I'm in nursing school. I have a year left. But even if you didn't know that, I would still hope you would try to tell me what was going on instead of just doing it in your head, and treating it." (patient 2)
    "I mean it's just you always have to explain your situation over and over and over again. After a while you just stop trusting them." (patient 20)
  Subtheme: Good communication can positively affect the doctor-patient relationship.
    "Just continue with the communication, the open communication, and always stress to me that I have a voice and just going out of their way to do whatever they can to help me through whatever I'm going through." (patient 1)

Theme: Transparency in transition
  Subtheme: Patients want to be informed prior to a service changeover.
    "I think they should be told immediately, even maybe given prior notice, like this may happen, just so you're not surprised when it happens." (patient 15)
    "When the doctor approached me, he let me know that he wasn't going to be here the next day and there was going to be another doctor coming in. That made me feel comfortable." (patient 9)
  Subtheme: Patients desire a more formalized process in the service changeover.
    "People want things to be consistent. People don't like change. They like routine. So, if he's leaving, you're coming on, I'd like for him to bring you in, introduce you to me, and for you just assure me that I'll take care of you." (patient 4)
    "Just like when you get a new medication, you're given all this information on it. So when you get a new hospitalist, shouldn't I get all the information on them? Like where they went to school, what they look like." (patient 23)
  Subtheme: Patients want a clearer definition of the roles the physicians will play in their care.
    "The first time I was hospitalized for the first time I had all these different doctors coming in, and I had the residency, and the specialists, and the department, and I don't know who's who. What I asked them to do is when they come in the room, which they did, but introduce it a little more for me. Write it down like these are the special team and these are the doctors because even though they come in and give me their name, I have no idea what they're doing." (patient 5)
    "Someone should explain the setup and who people are. Someone would say, 'Okay when you're in a hospital this is your [doctor's] role.' Like they should have booklets and everything." (patient 19)

Theme: Indifference toward transition
  Subtheme: Many patients have trust in service changeovers.
    "[S]o as long as everybody's on board and communicates well and efficiently, I don't have a problem with it." (patient 6)
    "To me, it really wasn't no preference, as long as I was getting the care that I needed." (patient 21)
    "It's not a concern as long as they're on the same page." (patient 17)

Theme: Hospitalist-specialist communication
  Subtheme: Patients are concerned about communication between their hospitalist and their subspecialists.
    "The more cooks you get in the kitchen, the more things get to get lost, so I'm always concerned that they're not sharing the same information, especially when you're getting asked the same questions that you might have just answered the last hour ago." (patient 9)
    "I don't know if the hospitalist are talking to them [subspecialist]. They haven't got time." (patient 35)
  Subtheme: Patients place trust in the communication between hospitalist and subspecialist.
    "I think among the teams themselves, which is my pain doctor, Dr. K's group, the oncology group itself, they switch off and trade with each other and they all speak the same language so that works out good." (patient 3)
  Subtheme: Lack of interprofessional communication can lead to patient concern.
    "I was afraid that one was going to drop the ball on something and not pass something on, or you know." (patient 11)
    "I had numerous doctors who all seemed to not communicate with each other at all or did so by email or whatever. They didn't just sit down together and say we feel this way and we feel that way. I didn't like that at all." (patient 10)

Theme: New opportunities due to transition
  Subtheme: Patients see a new doctor as an opportunity for medical reevaluation.
    "I see it as two heads are better than one, three heads are better than one, four heads are better than one. When people put their heads together to work towards a common goal, especially when they're, you know, people working their craft, it can't be bad." (patient 9)
    "I finally got my ears looked at, because I've asked to have my ears looked at since Monday, and the new doc is trying to make an effort to look at them." (patient 39)
  Subtheme: Patients see a service changeover as an opportunity to form a better personal relationship.
    "Having a new hospitalist it gives you opportunity for a new beginning." (patient 11)

Theme: Bedside manner
  Subtheme: Good bedside manner can assist in a service changeover.
    "Some of them are all business-like but some of them are, 'Well how do you feel today? Hi, how are you?' So this made a little difference. You feel more comfortable. You're going to be more comfortable with them. Their bedside manner helps." (patient 16)
    "It's just like when a doctor sits down and talks to you, they just seem more relaxed and more .... I know they're very busy and they have lots of things to do and other patients to see, but while they're in there with you, you know, you don't get too much time with them. So bedside manner is just so important." (patient 24)
  Subtheme: Poor bedside manner can be detrimental in transition.
    "[B]ecause they be so busy they claim they don't have time just to sit and talk to a patient, and make sure they all right." (patient 17)

Physician‐Patient Communication

Communication between the physician and the patient was an important element in patients' assessments of their experience. Patients tended to divide physician-patient communication into 2 categories: good communication, which consisted of "open communication" (patient 1) and patient engagement, and bad communication, which was described as physicians not sharing information or not taking the time to explain the course of care "in words that I'll understand" (patient 2). Patients also described dissatisfaction with redundant communication among multiple hospitalists and the frustration of having to describe their clinical course to multiple providers.

Transparency in Communication

The desire for greater transparency in the handoff process was another common theme, likely because 34 of 40 (85%) surveyed patients were unaware that a service changeover had ever taken place. This lack of transparency was seen to have downstream consequences; as one patient stated, "there should be a level of transparency, and when it's not, then there is always trust issues" (patient 1). When asked how to make the process more transparent, many patients recommended a formalized, face-to-face introduction involving the patient and both hospitalists, in which the outgoing hospitalist would "bring you [oncoming hospitalist] in, and introduce you to me" (patient 4).

Patients often stated that, given the large spectrum of physicians they might encounter during their stay (eg, medical student, resident, hospitalist attending, subspecialty fellow, subspecialist attending), clearer definitions of physicians' roles are needed.

Hospitalist‐Specialist Communication

Concern about communication between the hospitalist and subspecialists was another predominant theme. Conflicting and unclear directions from multiple services were especially frustrating; as one patient stated, "One guy took me off this pill, the other guy wants me on that pill, I'm like okay, I can't do both" (patient 8). Furthermore, a subset of patients regarded their subspecialist as their primary care provider and preferred their subspecialist, rather than their hospitalist, for guidance during their hospital course. This often occurred when the patient had an established relationship with the subspecialist prior to the hospitalization.

New Opportunities Due to Transition

Patients expressed positive feelings toward service handoffs by viewing the transition as an opportunity for medical reevaluation by a new physician. Patients told of instances in which a specific complaint was not addressed by the first physician but was addressed by the second (oncoming) physician. A commonly expressed idea was that the oncoming physician "might know something that he [Dr. B] didn't know, and since Dr. B was only here for a week, why not give him [oncoming hospitalist] a chance" (patient 10). Patients also described the transition as an opportunity to form, and possibly improve, therapeutic alliances with a new hospitalist.

Bedside Manner

Bedside manner was another commonly mentioned thematic element. Patients were often quick to forget problems they may have suffered because of the transition if the oncoming physician was perceived to have a good bedside manner, often described as someone who formally introduced themselves, seemed relaxed, and took the time to sit and talk with the patient. As one patient put it, "[S]he sat down and got to know me, and asked me what I wanted to do" (patient 12). Conversely, patients described instances in which a perceived bad bedside manner led to a poor physician-patient relationship, in which "trust and comfort" (patient 11) were sacrificed.

Indifference Toward Transition

In contrast to the previous findings, which called for improved interactions between physicians and patients, we also discovered a theme of indifference toward the transition. Several patients expressed trust in the medical system and were content with the service changeover as long as they felt their medical needs were being met. Patients also tended to express a level of acceptance of the transition, believing it was "the price we pay for being here [in the hospital]" (patient 7).

Conceptual Model

Following the collection and analysis of all patient responses, the themes were used to construct a model of the ideal patient-centered service handoff. The ideal transition features open lines of communication among all involved parties, is facilitated by multiple modalities, such as the EHR and nursing staff, and recognizes the patient as the primary stakeholder (Figure 1).

Figure 1
Conceptual model of the ideal patient experience with a service handoff. Abbreviations: EHR, electronic health record.

DISCUSSION

To our knowledge, this is the first qualitative investigation of the hospitalized patient's experience with service handoffs between hospitalists. The patient perspective adds a personal and first‐hand description of how fragmented care may impact the hospitalized patient experience.

Of the 6 themes, communication was the most pertinent to our respondents. Because much of patient care is an inherently communicative activity, it is not surprising that patients, as well as patient safety experts, have focused on communication as an area in need of improvement in transition processes.[17, 18] Moreover, multiple medical societies have directly called for improvements in this area, specifically recommending clear and direct communication of treatment plans between patient and physician, timely exchange of information, and knowledge of who is primarily in charge of the patient's care.[11] Each of these recommendations was echoed by our participants. This theme is especially important given that good physician-patient communication is a major goal of patient-centered care and has been positively correlated with medication adherence, patient satisfaction, and physical health outcomes.[19, 20, 21, 22, 23]

Although not a substitute for face-to-face interactions, other communication interventions between physicians and patients should be considered. For example, "get to know me" posters placed in patient rooms have been shown to encourage communication between patients and physicians.[24] Additionally, physician face cards have been used to improve patients' ability to identify physicians and clarify their roles in patient care.[25] As one patient put it, "If they got a new one [hospitalist], just as if I got a new medication, print out information on them, like where they went to med school, and stuff" (patient 13). These modalities may represent highly implementable, cost-effective adjuncts to current handoff methods that could improve lines of communication between physicians and patients.

In addition to the importance placed on physician-patient communication, interprofessional communication between hospitalists and subspecialists was also highly regarded. Studies have shown that practice-based interprofessional communication interventions, such as daily interdisciplinary rounds and the use of external facilitators, can improve healthcare processes and outcomes.[26] However, these interventions must be weighed against the many competing demands that both hospitalists and subspecialists face on a daily basis, including high patient volumes, time limitations, patient availability, and scheduling conflicts.[27] Nonetheless, the strong emphasis patients placed on this line of communication highlights it as an area in which hospitalists and subspecialists can work together toward systematic improvement.

Patients also recognized the complexity of the transfer process between hospitalists and called for improved transparency. For example, patients repeatedly requested to be informed prior to any change in their hospitalists, a request consistent with current guidelines.[11] Patients also expressed a strong desire for a more formalized transition process, often describing a handoff procedure that would occur at the patient's bedside. This desire is mirrored in data showing that patients prefer to interact with their care team at the bedside and report higher satisfaction when they are involved in their care.[28, 29] Unfortunately, this desire for more direct interaction with physicians runs counter to the current paradigm of patient care, in which most activities on rounds do not take place at the bedside.[30]

In contrast to patients' calls for improved transparency, an equally large portion of patients expressed relative indifference to the transition. Whereas on the surface this may seem salutary, some studies suggest that a lack of patient activation and engagement may adversely affect patients' overall care.[31] Furthermore, others have shown evidence of better healthcare experiences, improved health outcomes, and lower costs among patients who are more active in their care.[30, 31] Altogether, this suggests that despite some patients' indifference, physicians should continue to engage patients in their hospital care.[32]

Although the prevailing sentiment among patient safety advocates is that patient handoffs are inherently dangerous and place patients at increased risk of adverse events, patients did not always share this concern. A frequently occurring theme was that the transition is an opportunity for medical reevaluation or for the establishment of a new, possibly improved, therapeutic alliance. Recognizing this viewpoint offers oncoming hospitalists the opportunity to focus on issues that the patient may have felt were not properly addressed by the prior physician.

Finally, although our conceptual model is not a strict guideline, we believe that future studies should consider this framework when constructing interventions to improve service-level handoffs. Several candidate interventions already exist. For instance, educational interventions such as patient-centered interviewing have been shown to improve patient satisfaction, medication compliance, and health outcomes, and to lead to fewer lawsuits.[33, 34, 35] Additional methods of keeping the patient informed include physician face sheets and performing the handoff at the patient's bedside. Although well known in the nursing literature, a physician handoff performed at the patient's bedside would be a particularly patient-centric process.[36] Such an intervention could transform the handoff from its current state as a 2-way street, in which information is passed between 2 hospitalists, into a 3-way stop, in which both hospitalists and the patient communicate at this critical junction of care.

Although our study offers new insight into the effects of discontinuous care, its exploratory nature has limitations. First, because it was performed at a single academic center, our findings may not generalize. Second, the perspectives of those who declined to participate, of patients' family members or caregivers, and of those who were not queried could differ substantially from those we interviewed. Third, we did not collect data on patients' diagnoses or reasons for admission, limiting our ability to assess whether certain diagnoses or subpopulations predispose patients to experiencing a service handoff. Fourth, our study was restricted to English-speaking patients, and non-English speakers would likely face even greater communication barriers than those who took part in our study. Finally, our interviews and data analysis were conducted by hospitalists, which could have subconsciously influenced the interview process and the interpretation of patient responses. We tried to mitigate these issues by having the same individual interview all participants, using an interview guide to ensure cross-cohort consistency, asking open-ended questions, and attempting to give patients every opportunity to express themselves.

CONCLUSIONS

From the patient's perspective, inpatient service handoffs are often opaque experiences marked by poor communication between physicians and patients. Although deficits in communication and transparency acted as barriers to a patient-centered handoff, physicians should recognize that service handoffs may also represent opportunities for improvement and should focus on these domains when starting on a new service.

Disclosures

All funding for this project was provided by the Section of Hospital Medicine at The University of Chicago Medical Center. The data from this article were presented at the Society of Hospital Medicine Annual Conference, National Harbor, March 31, 2015, and at the Society of General Internal Medicine National Meeting in Toronto, Canada, April 23, 2015. The authors report that no conflicts of interest, financial or otherwise, exist.

References
  1. Sharma G, Fletcher KE, Zhang D, Kuo Y-F, Freeman JL, Goodwin JS. Continuity of outpatient and inpatient care by primary care physicians for hospitalized older adults. JAMA. 2009;301(16):1671-1680.
  2. Nyweide DJ, Anthony DL, Bynum JPW, et al. Continuity of care and the risk of preventable hospitalization in older adults. JAMA Intern Med. 2013;173(20):1879-1885.
  3. Agency for Healthcare Research and Quality. HCUPnet: a tool for identifying, tracking, and analyzing national hospital statistics. Available at: http://hcupnet.ahrq.gov/HCUPnet.jsp.
  4. Arora VM, Manjarrez E, Dressler DD, Basaviah P, Halasyamani L, Kripalani S. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433-440.
  5. Epstein K, Juarez E, Epstein A, Loya K, Singer A. The impact of fragmentation of hospitalist care on length of stay. J Hosp Med. 2010;5(6):335-338.
  6. Turner J, Hansen L, Hinami K, et al. The impact of hospitalist discontinuity on hospital cost, readmissions, and patient satisfaction. J Gen Intern Med. 2014;29(7):1004-1008.
  7. O'Leary KJ, Turner J, Christensen N, et al. The effect of hospitalist discontinuity on adverse events. J Hosp Med. 2015;10(3):147-151.
  8. Agency for Healthcare Research and Quality. HCAHPS fact sheet. CAHPS hospital survey, August 2013. Available at: http://www.hcahpsonline.org/files/August_2013_HCAHPS_Fact_Sheet3.pdf. Accessed February 2, 2015.
  9. Behara R, Wears RL, Perry SJ, et al. A conceptual framework for studying the safety of transitions in emergency care. In: Henriksen K, Battles JB, Marks ES, eds. Advances in Patient Safety: From Research to Implementation. Rockville, MD: Agency for Healthcare Research and Quality; 2005:309-321. Concepts and Methodology; vol 2. Available at: http://www.ncbi.nlm.nih.gov/books/NBK20522. Accessed January 15, 2015.
  10. Snow V, Beck D, Budnitz T, et al. Transitions of care consensus policy statement: American College of Physicians-Society of General Internal Medicine-Society of Hospital Medicine-American Geriatrics Society-American College of Emergency Physicians-Society of Academic Emergency Medicine. J Gen Intern Med. 2009;24(8):971-976.
  11. Watling CJ, Lingard L. Grounded theory in medical education research: AMEE guide no. 70. Med Teach. 2012;34(10):850-861.
  12. Boeije H. A purposeful approach to the constant comparative method in the analysis of qualitative interviews. Qual Quant. 2002;36(4):391-409.
  13. Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147-149.
  14. O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245-1251.
  15. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349-357.
  16. Kripalani S, Jackson AT, Schnipper JL, Coleman EA. Promoting effective transitions of care at hospital discharge: a review of key issues for hospitalists. J Hosp Med. 2007;2(5):314-323.
  17. The Joint Commission. Hot Topics in Healthcare, Issue 2. Transitions of care: the need for collaboration across entire care continuum. Available at: http://www.jointcommission.org/assets/1/6/TOC_Hot_Topics.pdf. Accessed April 9, 2015.
  18. Zolnierek KBH, DiMatteo MR. Physician communication and patient adherence to treatment: a meta-analysis. Med Care. 2009;47(8):826-834.
  19. Desai NR, Choudhry NK. Impediments to adherence to post myocardial infarction medications. Curr Cardiol Rep. 2013;15(1):322.
  20. Zandbelt LC, Smets EMA, Oort FJ, Godfried MH, Haes HCJM. Medical specialists' patient-centered communication and patient-reported outcomes. Med Care. 2007;45(4):330-339.
  21. Clever SL, Jin L, Levinson W, Meltzer DO. Does doctor-patient communication affect patient satisfaction with hospital care? Results of an analysis with a novel instrumental variable. Health Serv Res. 2008;43(5 pt 1):1505-1519.
  22. Michie S, Miles J, Weinman J. Patient-centredness in chronic illness: what is it and does it matter? Patient Educ Couns. 2003;51(3):197-206.
  23. Billings JA, Keeley A, Bauman J, et al. Merging cultures: palliative care specialists in the medical intensive care unit. Crit Care Med. 2006;34(11 suppl):S388-S393.
  24. Arora VM, Schaninger C, D'Arcy M, et al. Improving inpatients' identification of their doctors: use of FACE cards. Jt Comm J Qual Patient Saf. 2009;35(12):613-619.
  25. Zwarenstein M, Goldman J, Reeves S. Interprofessional collaboration: effects of practice-based interventions on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2009;(3):CD000072.
  26. Gonzalo JD, Heist BS, Duffy BL, et al. Identifying and overcoming the barriers to bedside rounds: a multicenter qualitative study. Acad Med. 2014;89(2):326-334.
  27. Lehmann LS, Brancati FL, Chen MC, Roter D, Dobs AS. The effect of bedside case presentations on patients' perceptions of their medical care. N Engl J Med. 1997;336(16):1150-1155.
  28. Gonzalo JD, Wolpaw DR, Lehman E, Chuang CH. Patient-centered interprofessional collaborative care: factors associated with bedside interprofessional rounds. J Gen Intern Med. 2014;29(7):1040-1047.
  29. Stickrath C, Noble M, Prochazka A, et al. Attending rounds in the current era: what is and is not happening. JAMA Intern Med. 2013;173(12):1084-1089.
  30. Hibbard JH, Greene J. What the evidence shows about patient activation: better health outcomes and care experiences; fewer data on costs. Health Aff (Millwood). 2013;32(2):207-214.
  31. Greene J, Hibbard JH, Sacks R, Overton V, Parrotta CD. When patient activation levels change, health outcomes and costs change, too. Health Aff (Millwood). 2015;34(3):431-437.
  32. Smith RC, Marshall-Dorsey AA, Osborn GG, et al. Evidence-based guidelines for teaching patient-centered interviewing. Patient Educ Couns. 2000;39(1):27-36.
  33. Hall JA, Roter DL, Katz NR. Meta-analysis of correlates of provider behavior in medical encounters. Med Care. 1988;26(7):657-675.
  34. Huycke LI, Huycke MM. Characteristics of potential plaintiffs in malpractice litigation. Ann Intern Med. 1994;120(9):792-798.
  35. Gregory S, Tan D, Tilrico M, Edwardson N, Gamm L. Bedside shift reports: what does the evidence say? J Nurs Adm. 2014;44(10):541-545.
Issue
Journal of Hospital Medicine - 11(10)
Page Number
675-681
Display Headline
A qualitative analysis of patients' experience with hospitalist service handovers
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Charlie M. Wray, DO, Hospitalist Research Scholar–Clinical Associate, Section of Hospital Medicine, University of Chicago Medical Center, 5841 S. Maryland Avenue, MC 5000, Chicago, IL 60637; Telephone: 415-595-9662; E-mail: cwray@medicine.bsd.uchicago.edu

Using Video to Validate Handoff Quality

Article Type
Changed
Sun, 05/21/2017 - 14:08
Display Headline
Using standardized videos to validate a measure of handoff quality: The handoff mini‐clinical examination exercise

Over the last decade, there has been an unprecedented focus on physician handoffs in US hospitals. One major reason for this is the reduction in residency duty hours mandated by the Accreditation Council for Graduate Medical Education (ACGME), first in 2003 and subsequently revised in 2011.[1, 2] As residents work fewer hours, experts believe that potential safety gains from reduced fatigue are countered by an increase in the number of handoffs, which represent a risk due to the potential for miscommunication. Prior studies show that critical patient information is often lost or altered during this transfer of clinical information and professional responsibility, which can result in patient harm.[3, 4] As a result of these concerns, the ACGME now requires residency programs to ensure and monitor effective, structured hand-over processes to facilitate both continuity of care and patient safety. Programs must ensure that residents are competent in communicating with team members in the hand-over process.[2] Moreover, handoffs have also been a major improvement focus for organizations with broader scope than teaching hospitals, including the World Health Organization, the Joint Commission, and the Society of Hospital Medicine (SHM).[5, 6, 7]

Despite this focus on handoffs, monitoring the quality of handoffs has proven challenging due to the lack of a reliable, validated tool to measure handoff quality. More recently, the Accreditation Council for Graduate Medical Education's introduction of the Next Accreditation System, with its focus on direct observation of clinical skills to achieve milestones, has made it crucial for residency educators to have valid tools to measure competence in handoffs. As a result, it is critical that instruments to measure handoff performance are not only created but also validated.[8]

To help fill this gap, we previously reported on the development of a 9‐item Handoff Clinical Examination Exercise (CEX) assessment tool. The Handoff CEX, designed for use by those participating in the handoff or by a third‐party observer, can be used to rate the quality of patient handoffs in domains such as professionalism and communication skills between the receiver and sender of patient information.[9, 10] Despite prior demonstration of feasibility of use, the initial tool was perceived as lengthy and redundant. In addition, although the tool has been shown to discriminate between performance of novice and expert nurses, the construct validity of this tool has not been established.[11] Establishing construct validity is important to ensuring that the tool can measure the construct in question, namely whether it detects those who are actually competent to perform handoffs safely and effectively. We present here the results of the development of a shorter Handoff Mini‐CEX, along with the formal establishment of its construct validity, namely its ability to distinguish between levels of performance in 3 domains of handoff quality.

METHODS

Adaptation of the Handoff CEX and Development of the Abbreviated Tool

The 9-item Handoff CEX is a paper-based instrument that was created by the investigators (L.I.H., J.M.F., V.M.A.) to evaluate either the sender or the receiver of handoff communications and has been used in prior studies (see Supporting Information, Appendix 1, in the online version of this article).[9, 10] The evaluation may be conducted by either an observer or by a handoff participant. The instrument includes 6 domains: (1) setting, (2) organization and efficiency, (3) communication skills, (4) content, (5) clinical judgment, and (6) humanistic skills/professionalism. Each domain is graded on a 9-point rating scale, modeled on the widely used Mini-CEX (Clinical Evaluation Exercise) for real-time observation of clinical history and exam skills in internal medicine clerkships and residencies (1-3 = unsatisfactory, 4-6 = marginal/satisfactory, 7-9 = superior).[12] This familiar 9-point scale is utilized in graduate medical education evaluation of the ACGME core competencies.

To standardize the evaluation, the instrument uses performance‐based anchors for evaluating both the sender and the receiver of the handoff information. The anchors are derived from functional evaluation of the roles of senders and receivers in our preliminary work at both the University of Chicago and Yale University, best practices in other high‐reliability industries, guidelines from the Joint Commission and the SHM, and prior studies of effective communication in clinical systems.[5, 6, 13]

After piloting the Handoff CEX with the University of Chicago's internal medicine residency program (n=280 handoff evaluations), a strong correlation was noted between the measures of content (medical knowledge), patient care, clinical judgment, organization/efficiency, and communication skills. Moreover, the Handoff CEX's Cronbach α, a measure of internal reliability and consistency, was very high (α=0.95). Given the potential for redundant items, and to increase ease of use of the instrument, factor analysis was used to reduce the instrument to a shorter 3-item tool, the Handoff Mini-CEX, that assessed 3 of the initial items: setting, communication skills, and professionalism. Overall, performance on these 3 items was responsible for 82% of the variance of overall sign-out quality (see Supporting Information, Appendix 2, in the online version of this article).
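For readers who want to see the arithmetic behind this style of item reduction, the sketch below shows the two quantities named above: Cronbach's α for internal consistency, and the share of variance carried by the leading components, with a principal-component decomposition standing in for the factor analysis actually used. The data are hypothetical (random ratings shaped like the pilot: 280 evaluations by 9 items), not the study dataset.

import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)
    total_var = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
evals = rng.integers(1, 10, size=(280, 9)).astype(float)  # hypothetical 9-point ratings
print(f"alpha = {cronbach_alpha(evals):.2f}")

# Proportion of total variance explained by each principal component, the kind of
# figure behind the report that 3 items carried 82% of the variance.
centered = evals - evals.mean(axis=0)
eigvals = np.linalg.svd(centered, compute_uv=False) ** 2
print("variance explained per component:", np.round(eigvals / eigvals.sum(), 2))

On real (correlated) ratings, a handful of components would dominate this list, and items loading on the same dominant components become candidates for merging or removal; that is how a 9-item form can shrink to 3 items without losing much information.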

Establishing Construct Validity of the Handoff Mini‐CEX

To establish the construct validity of the Handoff Mini-CEX, we adapted a protocol used by Holmboe and colleagues to establish the construct validity of the original mini-CEX, which is based on the development and use of video scenarios depicting varying levels of clinical performance.[14] A clinical scenario script, based on prior observational work, was developed, representing an internal medicine resident (the sender) signing out 3 different patients to colleagues (intern [postgraduate year 1] and resident). This scenario was developed to explicitly include observable components of professionalism, communication, and setting. Three levels of performance (superior, satisfactory, and unsatisfactory) were defined and described for the 3 domains. Separate scripts were then written demonstrating varying levels of performance in each of the domains of interest, using the descriptive anchors of the Handoff Mini-CEX.

After constructing the superior, or gold standard, script that showcases superior communication, professionalism, and setting, individual domains of performance were changed (eg, to satisfactory or unsatisfactory), while holding the other 2 constant at the superior level of performance. For example, superior communication requires that the sender provides anticipatory guidance and includes clinical rationale, whereas unsatisfactory communication includes vague language about overnight events and a disorganized presentation of patients. Superior professionalism requires no inappropriate comments by the sender about patients, family, and staff as well as a presentation focused on the most urgent patients. Unsatisfactory professionalism is shown by a hurried and inattentive sign‐out, with inappropriate comments about patients, family, and staff. Finally, a superior setting is one in which the receiver is listening attentively and discourages interruptions, whereas an unsatisfactory setting finds the sender or receiver answering pages during the handoff surrounded by background noise. We omitted the satisfactory level for setting due to the difficulties in creating subtleties in the environment.

Permutations of each of these domains resulted in 6 scripts depicting different levels of sender performance (see Supporting Information, Appendix 3, in the online version of this article). Only the performance level of the sender was changed; the receiver's performance remained consistent, using best practices for receivers, such as attentive listening, asking questions, reading back, and taking notes during the handoff. The scripts were developed by 2 investigators (V.M.A., S.B.), then reviewed and edited independently by other investigators (J.M.F., P.S.) to achieve consensus. Actors were recruited to perform the video scenarios and were trained by the physician investigators (J.M.F., V.M.A.). The part of the sender was played by a study investigator (P.S.) with prior acting experience who had accrued over 40 hours of experience observing handoffs, informing the depiction of varying levels of handoff performance. The digital video recordings ranged in length from 2.00 to 4.08 minutes. All digital videos were recorded using a Sony XDCAM PMW-EX3 HD camcorder (Sony Corp., Tokyo, Japan).
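The permutation scheme is compact enough to state in a few lines of code. The sketch below is illustrative only; the domain names and script numbering follow Table 1, and everything else is an assumption of the illustration.

GOLD = {"communication": "superior", "professionalism": "superior", "setting": "superior"}

# Vary one domain at a time, holding the other two at superior; the satisfactory
# level is omitted for setting, so the gold standard plus 5 variants = 6 scripts.
VARIATIONS = [
    ("communication", "satisfactory"),      # script 2
    ("communication", "unsatisfactory"),    # script 3
    ("professionalism", "satisfactory"),    # script 4
    ("professionalism", "unsatisfactory"),  # script 5
    ("setting", "unsatisfactory"),          # script 6
]

scripts = [dict(GOLD)]  # script 1: the gold standard
for domain, level in VARIATIONS:
    variant = dict(GOLD)
    variant[domain] = level
    scripts.append(variant)

for number, script in enumerate(scripts, start=1):
    print(f"Script {number}: {script}")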

Participants

Faculty from the University of Chicago Medical Center and Yale University were included. At the University of Chicago, faculty were recruited by the study investigators via email to the Research in Medical Education (RIME) listhost, which includes program directors, clerkship directors, and medical educators. Two sessions were offered and administered. Continuing medical education (CME) credit was provided for participation, as this workshop was given in conjunction with the RIME CME conference. Evaluations were deidentified using a unique identifier for each rater. At Yale University, the workshop on handoffs was offered as part of 2 seminars for program directors and chief residents from all specialties. During these seminars, program directors and chief residents used anonymous evaluation rating forms that did not capture rater identifiers. No other incentive was provided for participation. Although neither the University of Chicago nor Yale University faculty received any formal training on handoff evaluation, they did receive a short introduction to the importance of handoffs and the goals of the workshop. The protocol was deemed exempt by the institutional review board at the University of Chicago.

Workshop Protocol

After a brief introduction, faculty viewed the tapes in random order on a projected screen. Participants were instructed to use the Handoff Mini-CEX to rate whichever element(s) of handoff quality they believed they could suitably evaluate while watching the tapes. The videos were rated on the Handoff Mini-CEX form, and participants anonymously completed the forms independently, without any contact with other participants. The lead investigators proctored all sessions. At the University of Chicago, participants viewed and rated all 6 videos over the course of an hour. At Yale University, due to time constraints in the program director and chief resident seminars, participants reviewed 1 of the videos in seminar 1 (unsatisfactory professionalism) and 2 in the other seminar (unsatisfactory communication, unsatisfactory professionalism) (Table 1).

Table 1. Script Matrix

NOTE: Abbreviations: CBC, complete blood count; CCU, coronary care unit; ECG, electrocardiogram. An asterisk (*) denotes a video scenario seen by Yale University raters; all videos were seen by University of Chicago raters.

Communication
  Unsatisfactory: Script 3 (n=36)*. Uses vague language about overnight events, missing critical patient information, disorganized. Example: "Look in the record; I'm sure it's in there. And oh yeah, I need you to check enzymes and finish ruling her out."
  Satisfactory: Script 2 (n=13). Insufficient level of clinical detail; directions are not as thorough; handoff is generally on task and sufficient. Example: "So the only thing to do is to check labs; you know, check CBC and cardiac enzymes."
  Superior: Script 1 (n=13). Anticipatory guidance provided, rationale explained; important information is included; highlights sick patients. Example: "So for today, I need you to check post-transfusion hemoglobin to make sure it's back to the baseline of 10. If it's under 10, then transfuse her 2 units, but hopefully it will be bumped up. Also continue to check cardiac enzymes; the next set is coming at 2 pm, and we need to continue the rule out. If her enzymes are positive or she has other ECG changes, definitely call the cardio fellow, since they'll want to take her to the CCU."

Professionalism
  Unsatisfactory: Script 5 (n=39)*. Hurried, inattentive, rushing to leave, inappropriate comments (re: patients, family, staff). Example: "[D]efinitely call the cards fellow, since they'll want to take her to the CCU. And let me tell you, if you don't call her, she'll rip you a new one."
  Satisfactory: Script 4 (n=22)*. Some tangential comments (re: patients, family, staff). Example: "Let's breeze through them quickly so I can get out of here, I've had a rough day. I'll start with the sickest first, and oh my God she's a train wreck!"
  Superior: Script 1. Appropriate comments (re: patients, family, staff), focused on task.

Setting
  Unsatisfactory: Script 6 (n=13). Answering pages during handoff, interruptions (people entering room, phone ringing).
  Superior: Script 1. Attentive listening, no interruptions, pager silenced.

Data Collection and Statistical Analysis

Using combined data from the University of Chicago and Yale University, descriptive statistics were reported as raw scores on the Handoff Mini-CEX. To assess the internal consistency of the tool, Cronbach α was used. To assess inter-rater reliability of the attending physician ratings, we performed a Kendall coefficient of concordance analysis after collapsing the ratings into 3 categories (unsatisfactory, satisfactory, superior). In addition, we calculated intraclass correlation coefficients for each item using the raw data and used generalizability analysis to calculate the number of raters that would be needed to achieve a desired reliability of 0.95. To ascertain whether faculty were able to detect the varying levels of performance depicted in the videos, an ordinal test of trend on the communication, professionalism, and setting scores was performed.
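Because this analysis plan combines a banding step with a rank-based concordance statistic, a compact sketch may help. The data below are hypothetical, and this simple form of Kendall's W omits the tie correction, so it will understate agreement when many ratings tie; it is a sketch of the computation, not the study's code.

import numpy as np
from scipy.stats import rankdata

def collapse(score: int) -> int:
    # Map a 9-point rating onto the 3 bands: 1-3 -> 1, 4-6 -> 2, 7-9 -> 3.
    return (score - 1) // 3 + 1

def kendalls_w(ratings: np.ndarray) -> float:
    # ratings: raters (rows) x videos (columns); W in [0, 1], 1 = perfect concordance.
    m, n = ratings.shape
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank within each rater
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

raw = np.array([[8, 7, 2, 6, 2, 3],   # hypothetical scores: 3 raters x 6 videos
                [9, 7, 3, 5, 2, 4],
                [7, 8, 2, 6, 3, 3]])
banded = np.vectorize(collapse)(raw)
print(f"W = {kendalls_w(banded):.2f}")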

To assess for rater bias, we used the rater identifiers on the University of Chicago data to perform a 2-way analysis of variance (ANOVA) testing whether faculty scores were associated with performance level after controlling for rater. The faculty rater coefficients and P values in the 2-way ANOVA were also examined for any evidence of rater bias. All calculations were performed in Stata 11.0 (StataCorp, College Station, TX), with statistical significance defined as P<0.05.
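The rater-bias model is an ordinary 2-way ANOVA, which the authors ran in Stata; the hedged sketch below reproduces the same model form in Python with statsmodels, on a small hypothetical data frame (the variable names score, level, and rater are this sketch's own, not the study's).

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "score": [8, 7, 2, 9, 7, 3, 7, 8, 2],
    "level": ["superior", "satisfactory", "unsatisfactory"] * 3,
    "rater": ["r1"] * 3 + ["r2"] * 3 + ["r3"] * 3,
})

# score ~ performance level + rater; a non-significant rater term corresponds to
# the "no evidence of rater bias" result reported in the study.
model = ols("score ~ C(level) + C(rater)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))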

RESULTS

Forty-seven faculty members (14 at site 1; 33 at site 2) participated in the validation workshops (2 at the University of Chicago and 2 at Yale University), which were held in August 2011 and September 2011, providing a total of 172 observations of a possible 191 (90%).

The overall handoff quality ratings for the superior, gold standard video (superior communication, professionalism, and setting) ranged from 7 to 9 with a mean of 8.5 (standard deviation [SD] 0.7). The overall ratings for the video depicting satisfactory communication (satisfactory communication, superior professionalism and setting) ranged from 5 to 9 with a mean of 7.3 (SD 1.1). The overall ratings for the unsatisfactory communication (unsatisfactory communication, superior professionalism and setting) video ranged from 1 to 7 with a mean of 2.6 (SD 1.2). The overall ratings for the satisfactory professionalism video (satisfactory professionalism, superior communication and setting) ranged from 4 to 8 with a mean of 5.7 (SD 1.3). The overall ratings for the unsatisfactory professionalism (unsatisfactory professionalism, superior communication and setting) video ranged from 2 to 5 with a mean of 2.4 (SD 1.03). Finally, the overall ratings for the unsatisfactory setting (unsatisfactory setting, superior communication and professionalism) video ranged from 1 to 8 with a mean of 3.1 (SD 1.7).

Figure 1 demonstrates that for the domain of communication, the raters were able to discern the unsatisfactory performance but had difficulty reliably distinguishing between superior and satisfactory performance. Figure 2 illustrates that for the domain of professionalism, raters were able to detect the videos' changing levels of performance at the extremes of behavior, with unsatisfactory and superior displays more readily identified. Figure 3 shows that for the domain of setting, the raters were able to discern the unsatisfactory versus superior level of the changing setting. Of note, we also found a moderate significant correlation between ratings of professionalism and communication (r=0.47, P<0.001).

Figure 1
Faculty ratings of communication by performance. The handoff Clinical Examination Exercise ratings are a 9‐point scale: 1–3 = unsatisfactory, 4–6 = satisfactory, 7–9 = superior.
Figure 2
Faculty ratings of professionalism by performance. The handoff Clinical Examination Exercise ratings are a 9‐point scale: 1–3 = unsatisfactory, 4–6 = satisfactory, 7–9 = superior.
Figure 3
Faculty ratings of setting by performance. The handoff Clinical Examination Exercise ratings are a 9‐point scale: 1–3 = unsatisfactory, 4–6 = satisfactory, 7–9 = superior.

The Cronbach α for the Handoff Mini-CEX (3 items plus the overall rating) was 0.77, indicating high internal reliability and consistency. Using data from the University of Chicago, where raters were labeled with a unique identifier, the Kendall coefficient of concordance was calculated to be 0.79, demonstrating high inter-rater reliability among the faculty raters. High inter-rater reliability was also seen using intraclass correlation coefficients for each domain: communication (0.84), professionalism (0.68), setting (0.83), and overall (0.89). Using generalizability analysis, the average reliability was determined to be above 0.9 for all domains (0.99 for overall).
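As a back-of-the-envelope check on these figures, the Spearman-Brown prophecy formula (a standard approximation; the authors' generalizability analysis rests on a variance-components model that may differ in detail) relates single-rater reliability $r$ to the reliability $R_k$ of the mean of $k$ raters:

$$R_k = \frac{kr}{1 + (k - 1)r}, \qquad k^{*} = \frac{R^{*}(1 - r)}{r\,(1 - R^{*})}$$

Taking the overall intraclass correlation of 0.89 as $r$ and targeting $R^{*} = 0.95$ gives $k^{*} = (0.95 \times 0.11)/(0.89 \times 0.05) \approx 2.3$, so averaging the scores of 3 raters would already exceed the 0.95 target under this approximation, consistent with the high average reliabilities reported above.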

Last, the 2‐way ANOVA (n=75 observations from 13 raters) revealed no evidence of rater bias when examining the coefficient for attending rater (P=0.55 for professionalism, P=0.45 for communication, P=0.92 for setting). The range of scores for each video, however, was broad (Table 2).

Table 2. Faculty's Handoff Mini-Clinical Examination Exercise Ratings by Level of Performance Depicted in Video

                   Unsatisfactory            Satisfactory              Superior                 P*
                   Mean   Median   Range     Mean   Median   Range     Mean   Median   Range
Professionalism    2.3    2        1-4       4.4    4        3-8       7.0    7        3-9      0.026
Communication      2.8    3        1-6       7      8        5-9       6.6    7        1-9      0.005
Setting            3.1    3        1-8       --     --       --        7.5    8        2-9      0.005

NOTE: Clinical Examination Exercise ratings are on a 9-point scale: 1-3 = unsatisfactory, 4-6 = satisfactory, 7-9 = superior. No satisfactory-level video was filmed for setting.
* P value is from a 2-way analysis of variance examining the effect of the level of performance on the rating of that construct, controlling for rater.

DISCUSSION

This study demonstrates that valid conclusions on handoff performance can be drawn using the Handoff Mini-CEX to rate handoff quality. Using standardized videos depicting varying levels of performance in communication, professionalism, and setting, the Handoff Mini-CEX demonstrated the ability to discriminate between levels of performance, providing evidence for the construct validity of the instrument.

We observed that faculty could reliably detect unsatisfactory professionalism, and that there was a distinct correlation between faculty ratings and the internally set levels of performance displayed in the videos. This trend demonstrated that faculty were able to discern different levels of professionalism using the Handoff Mini-CEX. It became more difficult, however, for faculty to detect superior professionalism when the domain of communication was permuted. If the sender of the handoff was professional but the information delivered was disorganized, inaccurate, and missing crucial pieces of information, the faculty perceived this ineffective communication as unprofessional. Prior literature on professionalism has found that communication is a necessary component of professional behavior; consequently, being a competent communicator is necessary to fulfill one's duty as a professional physician.[15, 16]

This is of note because we did find a moderate significant correlation between ratings of professionalism and communication. It is possible that this distinction would be made clearer with formal rater training in the future prior to any evaluations. However, it is also possible that professionalism and communication, due to a synergistic role between the 2 domains, cannot be separated. If this is the case, it would be important to educate clinicians to present patients in a concise, clear, and accurate way with a professional demeanor. Acknowledging professional responsibility as an integral piece of patient care is also critical in effectively communicating patient information.[5]

We also noted that faculty could detect unsatisfactory communication consistently; however, they were unable to differentiate reliably between satisfactory and superior communication. Because the unsatisfactory professionalism, unsatisfactory setting, and satisfactory professionalism videos all demonstrated superior communication, we believe that the faculty penalized communication when distractions, in the form of interruptions and rude behavior by the resident giving the handoff, interrupted the flow of the handoff. Thus, the wide ranges in scores observed for some raters may be attributed to this interaction between the Handoff Mini-CEX domains. In the future, clearer definitions of the anchors, particularly at the middle of the performance spectrum, together with rater training, may improve the ability of raters to distinguish performance in each domain.

The overall value of the Handoff Mini-CEX lies in its ease of use, due in part to its brevity, and in the evidence for its validity in distinguishing between varying levels of performance. Given the emphasis on monitoring handoff quality and performance, the Handoff Mini-CEX provides a standard foundation from which baseline handoff performance can be easily measured and improved. It can also be used to give an individual practicing clinician feedback on his or her handoff practices and an opportunity to improve. This is particularly important given the Joint Commission's recommendation that handoffs be standardized and the ACGME's requirement that residents be competent in handoff skills. Moreover, given the SHM's handoff recommendations and the designation of handoffs as a core competency for hospitalists, the tool enables hospitalist programs to assess their handoff practices as baseline measurements for any quality improvement activities that may take place.

Faculty were able to discern the superior and unsatisfactory levels of setting with ease. After watching and rating the videos, participants said that the chaotic scene of the unsatisfactory setting video was highly authentic, and that they themselves were constantly interrupted during handoffs by pages, phone calls, and people entering the handoff space. System-level fixes, such as protected time and dedicated space for handoffs, and discouraging pages during the designated handoff time, could mitigate such unsatisfactory settings.[17, 18]

Our study has several limitations. First, although this study was held at 2 sites, it included a small number of faculty, which can limit the generalizability of our findings. Implementation varied between Yale University and the University of Chicago, preventing use of all data for all analyses. Furthermore, institutional culture may also affect faculty raters' perceptions, so future work aims to repeat our protocol at partner institutions, increasing both the number and diversity of participants. We were also unable to compare the new, shorter Handoff Mini-CEX to the larger 9-item Handoff CEX in this study.

Despite these limitations, we believe that the Handoff Mini-CEX has future potential as an instrument with which to draw valid and reliable conclusions about handoff quality, and that it could be used both to evaluate handoff quality and as an educational tool for trainees and faculty on effective handoff communication.

Disclosures

This work was supported by the National Institute on Aging Short‐Term Aging‐Related Research Program (5T35AG029795), Agency for Healthcare Research and Quality (1 R03HS018278‐01), and the University of Chicago Department of Medicine Excellence in Medical Education Award. Dr. Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Arora is funded by National Institute on Aging Career Development Award K23AG033763. Prior presentations of these data include the 2011 Association of American Medical Colleges meeting in Denver, Colorado, the 2012 Association of Program Directors of Internal Medicine meeting in Atlanta, Georgia, and the 2012 Society of General Internal Medicine Meeting in Orlando, Florida.

References
  1. Nasca TJ, Day SH, Amis ES. The new recommendations on duty hours from the ACGME task force. N Engl J Med. 2010;363(2):e3.
  2. ACGME common program requirements. Effective July 1, 2011. Available at: http://www.acgme.org/acgmeweb/Portals/0/PDFs/Common_Program_Requirements_07012011[2].pdf. Accessed February 8, 2014.
  3. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign-out for patient care. Arch Intern Med. 2008;168(16):1755-1760.
  4. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign-out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14(6):401-407.
  5. Arora VM, Manjarrez E, Dressler DD, Basaviah P, Halasyamani L, Kripalani S. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433-440.
  6. Arora V, Johnson J. A model for building a standardized hand-off protocol. Jt Comm J Qual Patient Saf. 2006;32(11):646-655.
  7. World Health Organization Collaborating Centre for Patient Safety. Solutions on communication during patient hand-overs. 2007; Volume 1, Solution 1. Available at: http://www.who.int/patientsafety/solutions/patientsafety/PS-Solution3.pdf. Accessed February 8, 2014.
  8. Patterson ES, Wears RL. Patient handoffs: standardized and reliable measurement tools remain elusive. Jt Comm J Qual Patient Saf. 2010;36(2):52-61.
  9. Horwitz L, Rand D, Staisiunas P, et al. Development of a handoff evaluation tool for shift-to-shift physician handoffs: the handoff CEX. J Hosp Med. 2013;8(4):191-200.
  10. Farnan JM, Paro JAM, Rodriguez RM, et al. Hand-off education and evaluation: piloting the observed simulated hand-off experience (OSHE). J Gen Intern Med. 2010;25(2):129-134.
  11. Horwitz LI, Dombroski J, Murphy TE, Farnan JM, Johnson JK, Arora VM. Validation of a handoff tool: the Handoff CEX. J Clin Nurs. 2013;22(9-10):1477-1486.
  12. Norcini JJ, Blank LL, Duffy FD, Fortna GS. The mini-CEX: a method for assessing clinical skills. Ann Intern Med. 2003;138(6):476-481.
  13. Patterson ES, Roth EM, Woods DD, Chow R, Gomes JO. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125-132.
  14. Holmboe ES, Huot S, Chung J, Norcini J, Hawkins RE. Construct validity of the mini-clinical evaluation exercise (mini-CEX). Acad Med. 2003;78(8):826-830.
  15. Reddy ST, Farnan JM, Yoon JD, et al. Third-year medical students' participation in and perceptions of unprofessional behaviors. Acad Med. 2007;82(10 suppl):S35-S39.
  16. Hafferty FW. Professionalism—the next wave. N Engl J Med. 2006;355(20):2151-2152.
  17. Chang VY, Arora VM, Lev-Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand-off communication. Pediatrics. 2010;125(3):491-496.
  18. Greenstein EA, Arora VM, Staisiunas PG, Banerjee SS, Farnan JM. Characterising physician listening behaviour during hospitalist handoffs using the HEAR checklist. BMJ Qual Saf. 2013;22(3):203-209.
Issue
Journal of Hospital Medicine - 9(7)
Page Number
441-446

Over the last decade, there has been an unprecedented focus on physician handoffs in US hospitals. One major reason for this are the reductions in residency duty hours that have been mandated by the American Council for Graduate Medical Education (ACGME), first in 2003 and subsequently revised in 2011.[1, 2] As residents work fewer hours, experts believe that potential safety gains from reduced fatigue are countered by an increase in the number of handoffs, which represent a risk due to the potential miscommunication. Prior studies show that critical patient information is often lost or altered during this transfer of clinical information and professional responsibility, which can result in patient harm.[3, 4] As a result of these concerns, the ACGME now requires residency programs to ensure and monitor effective, structured hand‐over processes to facilitate both continuity of care and patient safety. Programs must ensure that residents are competent in communicating with team members in the hand‐over process.[2] Moreover, handoffs have also been a major improvement focus for organizations with broader scope than teaching hospitals, including the World Health Organization, Joint Commission, and the Society for Hospital Medicine (SHM).[5, 6, 7]

Despite this focus on handoffs, monitoring quality of handoffs has proven challenging due to lack of a reliable and validated tool to measure handoff quality. More recently, the Accreditation Council of Graduate Medical Education's introduction of the Next Accreditation System, with its focus on direct observation of clinical skills to achieve milestones, makes it crucial for residency educators to have valid tools to measure competence in handoffs. As a result, it is critical that instruments to measure handoff performance are not only created but also validated.[8]

To help fill this gap, we previously reported on the development of a 9‐item Handoff Clinical Examination Exercise (CEX) assessment tool. The Handoff CEX, designed for use by those participating in the handoff or by a third‐party observer, can be used to rate the quality of patient handoffs in domains such as professionalism and communication skills between the receiver and sender of patient information.[9, 10] Despite prior demonstration of feasibility of use, the initial tool was perceived as lengthy and redundant. In addition, although the tool has been shown to discriminate between performance of novice and expert nurses, the construct validity of this tool has not been established.[11] Establishing construct validity is important to ensuring that the tool can measure the construct in question, namely whether it detects those who are actually competent to perform handoffs safely and effectively. We present here the results of the development of a shorter Handoff Mini‐CEX, along with the formal establishment of its construct validity, namely its ability to distinguish between levels of performance in 3 domains of handoff quality.

METHODS

Adaption of the Handoff CEX and Development of the Abbreviated Tool

The 9‐item Handoff CEX is a paper‐based instrument that was created by the investigators (L.I.H., J.M.F., V.M.A.) to evaluate either the sender or the receiver of handoff communications and has been used in prior studies (see Supporting Information, Appendix 1, in the online version of this article).[9, 10] The evaluation may be conducted by either an observer or by a handoff participant. The instrument includes 6 domains: (1) setting, (2) organization and efficiency, (3) communication skills, (4) content, (5) clinical judgment, and (6) humanistic skills/professionalism. Each domain is graded on a 9‐point rating scale, modeled on the widely used Mini‐CEX (Clinical Evaluation Exercise) for real‐time observation of clinical history and exam skills in internal medicine clerkships and residencies (13=unsatisfactory, 46=marginal/satisfactory, 79=superior).[12] This familiar 9‐point scale is utilized in graduate medical education evaluation of the ACGME core competencies.

To standardize the evaluation, the instrument uses performance‐based anchors for evaluating both the sender and the receiver of the handoff information. The anchors are derived from functional evaluation of the roles of senders and receivers in our preliminary work at both the University of Chicago and Yale University, best practices in other high‐reliability industries, guidelines from the Joint Commission and the SHM, and prior studies of effective communication in clinical systems.[5, 6, 13]

After piloting the Handoff CEX with the University of Chicago's internal medicine residency program (n=280 handoff evaluations), a strong correlation was noted between the measures of content (medical knowledge), patient care, clinical judgment, organization/efficiency, and communication skills. Moreover, the Handoff CEX's Cronbach , or measurement of internal reliability and consistency, was very high (=0.95). Given the potential of redundant items, and to increase ease of use of the instrument, factor analysis was used to reduce the instrument to yield a shorter 3‐item tool, the Handoff Mini‐CEX, that assessed 3 of the initial items: setting, communication skills, and professionalism. Overall, performance on these 3 items were responsible for 82% of the variance of overall sign‐out quality (see Supporting Information, Appendix 2, in the online version of this article).

Establishing Construct Validity of the Handoff Mini‐CEX

To establish construct validity of the Handoff Mini‐CEX, we adapted a protocol used by Holmboe and colleagues to report the construct validity of the Handoff Mini‐CEX, which is based on the development and use of video scenarios depicting varying levels of clinical performance.[14] A clinical scenario script, based on prior observational work, was developed, which represented an internal medicine resident (the sender) signing out 3 different patients to colleagues (intern [postgraduate year 1] and resident). This scenario was developed to explicitly include observable components of professionalism, communication, and setting. Three levels of performancesuperior, satisfactory, and unsatisfactorywere defined and described for the 3 domains. These levels were defined, and separate scripts were written using this information, demonstrating varying levels of performance in each of the domains of interest, using the descriptive anchors of the Handoff Mini‐CEX.

After constructing the superior, or gold standard, script that showcases superior communication, professionalism, and setting, individual domains of performance were changed (eg, to satisfactory or unsatisfactory), while holding the other 2 constant at the superior level of performance. For example, superior communication requires that the sender provides anticipatory guidance and includes clinical rationale, whereas unsatisfactory communication includes vague language about overnight events and a disorganized presentation of patients. Superior professionalism requires no inappropriate comments by the sender about patients, family, and staff as well as a presentation focused on the most urgent patients. Unsatisfactory professionalism is shown by a hurried and inattentive sign‐out, with inappropriate comments about patients, family, and staff. Finally, a superior setting is one in which the receiver is listening attentively and discourages interruptions, whereas an unsatisfactory setting finds the sender or receiver answering pages during the handoff surrounded by background noise. We omitted the satisfactory level for setting due to the difficulties in creating subtleties in the environment.

Permutations of each of these domains resulted in 6 scripts depicting different levels of sender performance (see Supporting Information, Appendix 3, in the online version of this article). Only the performance level of the sender was changed, and the receivers of the handoff performance remained consistent, using best practices for receivers, such as attentive listening, asking questions, reading back, and taking notes during the handoff. The scripts were developed by 2 investigators (V.M.A., S.B.), then reviewed and edited independently by other investigators (J.M.F., P.S.) to achieve consensus. Actors were recruited to perform the video scenarios and were trained by the physician investigators (J.M.F., V.M.A.). The part of the sender was played by a study investigator (P.S.) with prior acting experience, and who had accrued over 40 hours of experience observing handoffs to depict varying levels of handoff performance. The digital video recordings ranged in length from 2.00 minutes to 4.08 minutes. All digital videos were recorded using a Sony XDCAM PMW‐EX3 HD camcorder (Sony Corp., Tokyo, Japan.

Participants

Faculty from the University of Chicago Medical Center and Yale University were included. At the University of Chicago, faculty were recruited to participate via email by the study investigators to the Research in Medical Education (RIME) listhost, which includes program directors, clerkship directors, and medical educators. Two sessions were offered and administered. Continuing medical education (CME) credit was provided for participation, as this workshop was given in conjunction with the RIME CME conference. Evaluations were deidentified using a unique identifier for each rater. At Yale University, the workshop on handoffs was offered as part of 2 seminars for program directors and chief residents from all specialties. During these seminars, program directors and chief residents used anonymous evaluation rating forms that did not capture rater identifiers. No other incentive was provided for participation. Although neither faculty at the University of Chicago nor Yale University received any formal training on handoff evaluation, they did receive a short introduction to the importance of handoffs and the goals of the workshop. The protocol was deemed exempt by the institutional review board at the University of Chicago.

Workshop Protocol

After a brief introduction, faculty viewed the tapes in random order on a projected screen. Participants were instructed to use the Handoff Mini‐CEX to rate whichever element(s) of handoff quality they believed they could suitably evaluate while watching the tapes. The videos were rated on the Handoff Mini‐CEX form, and participants anonymously completed the forms independently without any contact with other participants. The lead investigators proctored all sessions. At University of Chicago, participants viewed and rated all 6 videos over the course of an hour. At Yale University, due to time constraints in the program director and chief resident seminars, participants reviewed 1 of the videos in seminar 1 (unsatisfactory professionalism) and 2 in the other seminar (unsatisfactory communication, unsatisfactory professionalism) (Table 1).

Script Matrix
 UnsatisfactorySatisfactorySuperior
  • NOTE: Abbreviations: CBC, complete blood count; CCU, coronary care unit; ECG, electrocardiogram.

  • Denotes video scenario seen by Yale University raters. All videos were seen by University of Chicago raters.

CommunicationScript 3 (n=36)aScript 2 (n=13)Script 1 (n=13)
Uses vague language about overnight events, missing critical patient information, disorganized.Insufficient level of clinical detail, directions are not as thorough, handoff is generally on task and sufficient.Anticipatory guidance provided, rationale explained; important information is included, highlights sick patients.
Look in the record; I'm sure it's in there. And oh yeah, I need you to check enzymes and finish ruling her out.So the only thing to do is to check labs; you know, check CBC and cardiac enzymes.So for today, I need you to check post‐transfusion hemoglobin to make sure it's back to the baseline of 10. If it's under 10, then transfuse her 2 units, but hopefully it will be bumped up. Also continue to check cardiac enzymes; the next set is coming at 2 pm, and we need to continue the rule out. If her enzymes are positive or she has other ECG changes, definitely call the cardio fellow, since they'll want to take her to the CCU.
ProfessionalismScript 5 (n=39)aScript 4 (n=22)aScript 1
Hurried, inattentive, rushing to leave, inappropriate comments (re: patients, family, staff).Some tangential comments (re: patients, family, staff).Appropriate comments (re: patients, family, staff), focused on task.
[D]efinitely call the cards fellow, since they'll want to take her to the CCU. And let me tell you, if you don't call her, she'll rip you a new one.Let's breeze through them quickly so I can get out of here, I've had a rough day. I'll start with the sickest first, and oh my God she's a train wreck! 
SettingScript 6 (n=13) Script 1
Answering pages during handoff, interruptions (people entering room, phone ringing). Attentive listening, no interruptions, pager silenced.

Data Collection and Statistical Analysis

Using combined data from University of Chicago and Yale University, descriptive statistics were reported as raw scores on the Handoff Mini‐CEX. To assess internal consistency of the tool, Cronbach was used. To assess inter‐rater reliability of these attending physician ratings on the tool, we performed a Kendall coefficient of concordance analysis after collapsing the ratings into 3 categories (unsatisfactory, satisfactory, superior). In addition, we also calculated intraclass correlation coefficients for each item using the raw data and generalizability analysis to calculate the number of raters that would be needed to achieve a desired reliability of 0.95. To ascertain if faculty were able to detect varying levels of performance depicted in the video, an ordinal test of trend on the communication, professionalism, and setting scores was performed.

To assess for rater bias, we were able to use the identifiers on the University of Chicago data to perform a 2‐way analysis of variance (ANOVA) to assess if faculty scores were associated with performance level after controlling for faculty. The results of the faculty rater coefficients and P values in the 2‐way ANOVA were also examined for any evidence of rater bias. All calculations were performed in Stata 11.0 (StataCorp, College Station, TX) with statistical significance defined as P<0.05.

RESULTS

Forty‐seven faculty members (14=site 1; 33=site 2) participated in the validation workshops (2 at the University of Chicago, and 2 at Yale University), which were held in August 2011 and September 2011, providing a total of 172 observations of a possible 191 (90%).

The overall handoff quality ratings for the superior, gold standard video (superior communication, professionalism, and communication) ranged from 7 to 9 with a mean of 8.5 (standard deviation [SD] 0.7). The overall ratings for the video depicting satisfactory communication (satisfactory communication, superior professionalism and setting) ranged from 5 to 9 with a mean of 7.3 (SD 1.1). The overall ratings for the unsatisfactory communication (unsatisfactory communication, superior professionalism and setting) video ranged from 1 to 7 with a mean of 2.6 (SD 1.2). The overall ratings for the satisfactory professionalism video (satisfactory professionalism, superior communication and setting) ranged from 4 to 8 with a mean of 5.7 (SD 1.3). The overall ratings for the unsatisfactory professionalism (unsatisfactory professionalism, superior communication and setting) video ranged from 2 to 5 with a mean of 2.4 (SD 1.03). Finally, the overall ratings for the unsatisfactory setting (unsatisfactory setting, superior communication and professionalism) video ranged from 1 to 8 with a mean of 3.1 (SD 1.7).

Figure 1 demonstrates that for the domain of communication, the raters were able to discern the unsatisfactory performance but had difficulty reliably distinguishing between superior and satisfactory performance. Figure 2 illustrates that for the domain of professionalism, raters were able to detect the videos' changing levels of performance at the extremes of behavior, with unsatisfactory and superior displays more readily identified. Figure 3 shows that for the domain of setting, the raters were able to discern the unsatisfactory versus superior level of the changing setting. Of note, we also found a moderate significant correlation between ratings of professionalism and communication (r=0.47, P<0.001).

Figure 1
Faculty ratings of communication by performance. The handoff Clinical Examination Exercise ratings are a 9‐point scale: 1–3 = unsatisfactory, 4–6 = satisfactory, 7–9 = superior.
Figure 2
Faculty ratings of professionalism by performance. The handoff Clinical Examination Exercise ratings are a 9‐point scale: 1–3 = unsatisfactory, 4–6 = satisfactory, 7–9 = superior.
Figure 3
Faculty ratings of setting by performance. The handoff Clinical Examination Exercise ratings are a 9‐point scale: 1–3 = unsatisfactory, 4–6 = satisfactory, 7–9 = superior.

The Cronbach , or measurement of internal reliability and consistency, for the Handoff Mini‐CEX (3 items plus overall) was 0.77, indicating high internal reliability and consistency. Using data from University of Chicago, where raters were labeled with a unique identifier, the Kendall coefficient of concordance was calculated to be 0.79, demonstrating high inter‐rater reliability of the faculty raters. High inter‐rater reliability was also seen using intraclass coefficients for each domain: communication (0.84), professionalism (0.68), setting (0.83), and overall (0.89). Using generalizability analysis, the average reliability was determined to be above 0.9 for all domains (0.99 for overall).

Last, the 2‐way ANOVA (n=75 observations from 13 raters) revealed no evidence of rater bias when examining the coefficient for attending rater (P=0.55 for professionalism, P=0.45 for communication, P=0.92 for setting). The range of scores for each video, however, was broad (Table 2).

Faculty's Mini‐Handoff Clinical Examination Exercise Ratings by Level of Performance Depicted in Video
 UnsatisfactorySatisfactorySuperior 
MeanMedianRangeMeanMedianRangeMeanMedianRangePb
  • NOTE: Clinical Examination Exercise ratings are on a 9‐point scale: 13=unsatisfactory, 46=satisfactory, 79=superior.

  • P value is from 2‐way analysis of variance examining the level of performance on rating of that construct controlling for rater.

Professionalism2.32144.44387.07390.026
Communication2.831678596.67190.005
Setting3.1318 7.58290.005

DISCUSSION

This study demonstrates that valid conclusions on handoff performance can be drawn using the Handoff CEX as the instrument to rate handoff quality. Utilizing standardized videos depicting varying levels of performance communication, professionalism, and setting, the Handoff Mini‐CEX has demonstrated potential to discern between increasing levels of performance, providing evidence for the construct validity of the instrument.

We observed that faculty could reliably detect unsatisfactory professionalism with ease, and that there was a distinct correlation between faculty ratings and the internally set levels of performance displayed in the videos. This trend demonstrated that faculty were able to discern different levels of professionalism using the Handoff Mini‐CEX. It became more difficult, however, for faculty to detect superior professionalism when the domain of communication was permuted. If the sender of the handoff was professional but the information delivered was disorganized, inaccurate, and missing crucial pieces of information, the faculty perceived this ineffective communication as unprofessional. Prior literature on professionalism has found that communication is a necessary component of professional behavior, and consequently, being a competent communicator is necessary to fulfill ones duty as a professional physician.[15, 16]

This is of note because we did find a moderate significant correlation between ratings of professionalism and communication. It is possible that this distinction would be made clearer with formal rater training in the future prior to any evaluations. However, it is also possible that professionalism and communication, due to a synergistic role between the 2 domains, cannot be separated. If this is the case, it would be important to educate clinicians to present patients in a concise, clear, and accurate way with a professional demeanor. Acknowledging professional responsibility as an integral piece of patient care is also critical in effectively communicating patient information.[5]

Over the last decade, there has been an unprecedented focus on physician handoffs in US hospitals. One major reason is the reduction in residency duty hours mandated by the Accreditation Council for Graduate Medical Education (ACGME), first in 2003 and subsequently revised in 2011.[1, 2] As residents work fewer hours, experts believe that potential safety gains from reduced fatigue are countered by an increase in the number of handoffs, which represent a risk due to the potential for miscommunication. Prior studies show that critical patient information is often lost or altered during this transfer of clinical information and professional responsibility, which can result in patient harm.[3, 4] As a result of these concerns, the ACGME now requires residency programs to ensure and monitor effective, structured hand-over processes to facilitate both continuity of care and patient safety. Programs must ensure that residents are competent in communicating with team members in the hand-over process.[2] Moreover, handoffs have also been a major improvement focus for organizations with broader scope than teaching hospitals, including the World Health Organization, the Joint Commission, and the Society of Hospital Medicine (SHM).[5, 6, 7]

Despite this focus on handoffs, monitoring handoff quality has proven challenging due to the lack of reliable and validated measurement tools. More recently, the Accreditation Council for Graduate Medical Education's introduction of the Next Accreditation System, with its focus on direct observation of clinical skills to achieve milestones, makes it crucial for residency educators to have valid tools to measure competence in handoffs. As a result, it is critical that instruments to measure handoff performance are not only created but also validated.[8]

To help fill this gap, we previously reported on the development of a 9-item Handoff Clinical Examination Exercise (CEX) assessment tool. The Handoff CEX, designed for use by those participating in the handoff or by a third-party observer, can be used to rate the quality of patient handoffs in domains such as professionalism and communication skills between the receiver and sender of patient information.[9, 10] Despite prior demonstration of its feasibility, the initial tool was perceived as lengthy and redundant. In addition, although the tool has been shown to discriminate between the performance of novice and expert nurses, its construct validity has not been established.[11] Establishing construct validity is important to ensure that the tool measures the construct in question, namely whether it detects those who are actually competent to perform handoffs safely and effectively. We present here the development of a shorter Handoff Mini-CEX, along with the formal establishment of its construct validity, namely its ability to distinguish between levels of performance in 3 domains of handoff quality.

METHODS

Adaption of the Handoff CEX and Development of the Abbreviated Tool

The 9-item Handoff CEX is a paper-based instrument that was created by the investigators (L.I.H., J.M.F., V.M.A.) to evaluate either the sender or the receiver of handoff communications and has been used in prior studies (see Supporting Information, Appendix 1, in the online version of this article).[9, 10] The evaluation may be conducted by either an observer or by a handoff participant. The instrument includes 6 domains: (1) setting, (2) organization and efficiency, (3) communication skills, (4) content, (5) clinical judgment, and (6) humanistic skills/professionalism. Each domain is graded on a 9-point rating scale (1–3=unsatisfactory, 4–6=marginal/satisfactory, 7–9=superior), modeled on the widely used Mini-CEX (Clinical Evaluation Exercise) for real-time observation of clinical history and exam skills in internal medicine clerkships and residencies.[12] This familiar 9-point scale is utilized in graduate medical education evaluation of the ACGME core competencies.

To standardize the evaluation, the instrument uses performance‐based anchors for evaluating both the sender and the receiver of the handoff information. The anchors are derived from functional evaluation of the roles of senders and receivers in our preliminary work at both the University of Chicago and Yale University, best practices in other high‐reliability industries, guidelines from the Joint Commission and the SHM, and prior studies of effective communication in clinical systems.[5, 6, 13]

After piloting the Handoff CEX with the University of Chicago's internal medicine residency program (n=280 handoff evaluations), strong correlations were noted among the measures of content (medical knowledge), patient care, clinical judgment, organization/efficiency, and communication skills. Moreover, the Handoff CEX's Cronbach α, a measure of internal reliability and consistency, was very high (α=0.95). Given the potential redundancy of items, and to increase the ease of use of the instrument, factor analysis was used to reduce the instrument to a shorter 3-item tool, the Handoff Mini-CEX, that assessed 3 of the initial items: setting, communication skills, and professionalism. Performance on these 3 items accounted for 82% of the variance in overall sign-out quality (see Supporting Information, Appendix 2, in the online version of this article).
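To make the internal-consistency check concrete, the following minimal Python sketch computes Cronbach's alpha for a matrix of item ratings. It is an illustration only: the data are simulated and the function and variable names are hypothetical, not the analysis code used in the pilot.

import numpy as np

def cronbach_alpha(ratings):
    # alpha = (k/(k-1)) * (1 - sum of per-item variances / variance of the total score)
    k = ratings.shape[1]                          # number of items
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each item across evaluations
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data shaped like the pilot: 280 evaluations x 9 items on a 1-9 scale
rng = np.random.default_rng(0)
base = rng.integers(1, 10, size=(280, 1))         # a shared "true quality" per evaluation
ratings = np.clip(base + rng.integers(-1, 2, size=(280, 9)), 1, 9).astype(float)
print(round(cronbach_alpha(ratings), 2))          # highly correlated items yield a high alpha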

Establishing Construct Validity of the Handoff Mini‐CEX

To establish construct validity of the Handoff Mini-CEX, we adapted a protocol used by Holmboe and colleagues, which is based on the development and use of video scenarios depicting varying levels of clinical performance.[14] A clinical scenario script, based on prior observational work, was developed to represent an internal medicine resident (the sender) signing out 3 different patients to colleagues (an intern [postgraduate year 1] and a resident). The scenario was developed to explicitly include observable components of professionalism, communication, and setting. Three levels of performance (superior, satisfactory, and unsatisfactory) were defined and described for the 3 domains, and separate scripts were written demonstrating each level of performance in each domain of interest, using the descriptive anchors of the Handoff Mini-CEX.

After constructing the superior, or gold standard, script showcasing superior communication, professionalism, and setting, we varied individual domains of performance (eg, to satisfactory or unsatisfactory) while holding the other 2 constant at the superior level. For example, superior communication requires that the sender provide anticipatory guidance and include clinical rationale, whereas unsatisfactory communication includes vague language about overnight events and a disorganized presentation of patients. Superior professionalism requires no inappropriate comments by the sender about patients, family, and staff, as well as a presentation focused on the most urgent patients. Unsatisfactory professionalism is shown by a hurried and inattentive sign-out with inappropriate comments about patients, family, and staff. Finally, a superior setting is one in which the receiver listens attentively and discourages interruptions, whereas an unsatisfactory setting finds the sender or receiver answering pages during the handoff, surrounded by background noise. We omitted the satisfactory level for setting because of the difficulty of creating such subtleties in the environment.
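A short Python sketch (ours, not study code) captures this permutation logic; the domain and level names mirror the description above, and the enumeration yields the 6 scripts described in the next paragraph and in the script matrix.

# Vary one domain at a time while holding the other 2 at "superior";
# the satisfactory level of setting is intentionally omitted, as noted above.
DOMAINS = ["communication", "professionalism", "setting"]
VARIED_LEVELS = {
    "communication": ["satisfactory", "unsatisfactory"],
    "professionalism": ["satisfactory", "unsatisfactory"],
    "setting": ["unsatisfactory"],                # no satisfactory-setting video
}

scripts = [{d: "superior" for d in DOMAINS}]      # script 1: the gold standard
for domain in DOMAINS:
    for level in VARIED_LEVELS[domain]:
        script = {d: "superior" for d in DOMAINS}
        script[domain] = level
        scripts.append(script)

for i, s in enumerate(scripts, start=1):
    print(f"Script {i}: {s}")                     # prints the 6 scripts of the matrix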

Permutations of each of these domains resulted in 6 scripts depicting different levels of sender performance (see Supporting Information, Appendix 3, in the online version of this article). Only the performance level of the sender was changed; the receivers of the handoff remained consistent, modeling best practices for receivers, such as attentive listening, asking questions, reading back, and taking notes during the handoff. The scripts were developed by 2 investigators (V.M.A., S.B.), then reviewed and edited independently by other investigators (J.M.F., P.S.) to achieve consensus. Actors were recruited to perform the video scenarios and were trained by the physician investigators (J.M.F., V.M.A.). The part of the sender was played by a study investigator (P.S.) with prior acting experience who had accrued over 40 hours observing handoffs, enabling the portrayal of varying levels of handoff performance. The digital video recordings ranged in length from 2.00 minutes to 4.08 minutes. All digital videos were recorded using a Sony XDCAM PMW-EX3 HD camcorder (Sony Corp., Tokyo, Japan).

Participants

Faculty from the University of Chicago Medical Center and Yale University were included. At the University of Chicago, faculty were recruited via an email from the study investigators to the Research in Medical Education (RIME) listhost, which includes program directors, clerkship directors, and medical educators. Two sessions were offered and administered. Continuing medical education (CME) credit was provided for participation, as this workshop was given in conjunction with the RIME CME conference. Evaluations were deidentified using a unique identifier for each rater. At Yale University, the workshop on handoffs was offered as part of 2 seminars for program directors and chief residents from all specialties. During these seminars, program directors and chief residents used anonymous evaluation rating forms that did not capture rater identifiers. No other incentive was provided for participation. Although faculty at neither institution received formal training in handoff evaluation, all received a short introduction to the importance of handoffs and the goals of the workshop. The protocol was deemed exempt by the institutional review board at the University of Chicago.

Workshop Protocol

After a brief introduction, faculty viewed the tapes in random order on a projected screen. Participants were instructed to use the Handoff Mini-CEX to rate whichever element(s) of handoff quality they believed they could suitably evaluate while watching the tapes. The videos were rated on the Handoff Mini-CEX form, which participants completed anonymously and independently, without any contact with other participants. The lead investigators proctored all sessions. At the University of Chicago, participants viewed and rated all 6 videos over the course of an hour. At Yale University, due to time constraints in the program director and chief resident seminars, participants reviewed 1 of the videos in seminar 1 (unsatisfactory professionalism) and 2 in the other seminar (unsatisfactory communication, unsatisfactory professionalism) (Table 1).

Script Matrix

NOTE: Abbreviations: CBC, complete blood count; CCU, coronary care unit; ECG, electrocardiogram. An asterisk (*) denotes a video scenario seen by Yale University raters; all videos were seen by University of Chicago raters.

Communication
- Unsatisfactory: Script 3 (n=36)*. Uses vague language about overnight events, missing critical patient information, disorganized. "Look in the record; I'm sure it's in there. And oh yeah, I need you to check enzymes and finish ruling her out."
- Satisfactory: Script 2 (n=13). Insufficient level of clinical detail, directions are not as thorough, handoff is generally on task and sufficient. "So the only thing to do is to check labs; you know, check CBC and cardiac enzymes."
- Superior: Script 1 (n=13). Anticipatory guidance provided, rationale explained; important information is included, highlights sick patients. "So for today, I need you to check post-transfusion hemoglobin to make sure it's back to the baseline of 10. If it's under 10, then transfuse her 2 units, but hopefully it will be bumped up. Also continue to check cardiac enzymes; the next set is coming at 2 pm, and we need to continue the rule out. If her enzymes are positive or she has other ECG changes, definitely call the cardio fellow, since they'll want to take her to the CCU."

Professionalism
- Unsatisfactory: Script 5 (n=39)*. Hurried, inattentive, rushing to leave, inappropriate comments (re: patients, family, staff). "[D]efinitely call the cards fellow, since they'll want to take her to the CCU. And let me tell you, if you don't call her, she'll rip you a new one."
- Satisfactory: Script 4 (n=22)*. Some tangential comments (re: patients, family, staff). "Let's breeze through them quickly so I can get out of here, I've had a rough day. I'll start with the sickest first, and oh my God she's a train wreck!"
- Superior: Script 1. Appropriate comments (re: patients, family, staff), focused on task.

Setting
- Unsatisfactory: Script 6 (n=13). Answering pages during handoff, interruptions (people entering room, phone ringing).
- Superior: Script 1. Attentive listening, no interruptions, pager silenced.

Data Collection and Statistical Analysis

Using combined data from the University of Chicago and Yale University, descriptive statistics were reported as raw scores on the Handoff Mini-CEX. To assess the internal consistency of the tool, Cronbach α was used. To assess inter-rater reliability of the attending physician ratings, we performed a Kendall coefficient of concordance analysis after collapsing the ratings into 3 categories (unsatisfactory, satisfactory, superior). In addition, we calculated intraclass correlation coefficients for each item using the raw data and used generalizability analysis to calculate the number of raters that would be needed to achieve a desired reliability of 0.95. To ascertain whether faculty were able to detect the varying levels of performance depicted in the videos, an ordinal test of trend on the communication, professionalism, and setting scores was performed.
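As one concrete illustration of the concordance analysis, the sketch below computes Kendall's W from a raters-by-videos score matrix after converting each rater's scores to ranks. The data are hypothetical, and this simple version omits the correction for tied ranks, so it is a sketch of the idea rather than a reproduction of the study's computation.

import numpy as np
from scipy.stats import rankdata

def kendalls_w(scores):
    # scores: rows = raters, columns = videos
    m, n = scores.shape
    ranks = np.apply_along_axis(rankdata, 1, scores)   # within-rater ranks (ties averaged)
    rank_sums = ranks.sum(axis=0)                      # column rank sums
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()    # spread of the rank sums
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical: 13 raters scoring 6 videos on the collapsed 3-point scale
scores = np.array([[1, 2, 3, 2, 1, 3]] * 13)
print(round(kendalls_w(scores), 2))   # 0.91; ties keep the uncorrected W below 1 even with identical raters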

To assess for rater bias, we used the identifiers on the University of Chicago data to perform a 2-way analysis of variance (ANOVA) examining whether faculty scores were associated with performance level after controlling for faculty rater. The faculty rater coefficients and P values in the 2-way ANOVA were also examined for any evidence of rater bias. All calculations were performed in Stata 11.0 (StataCorp, College Station, TX), with statistical significance defined as P<0.05.
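A minimal sketch of such a 2-way ANOVA in Python, using the statsmodels package, is shown below; the column names and toy data are hypothetical stand-ins for the deidentified rating data, and the study itself used Stata as noted above.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Toy data: each row is one rating, labeled with the depicted performance level and the rater
df = pd.DataFrame({
    "score": [2, 3, 5, 4, 8, 7, 2, 2, 4, 5, 7, 8],
    "level": ["unsat", "unsat", "sat", "sat", "sup", "sup"] * 2,
    "rater": ["r1"] * 6 + ["r2"] * 6,
})
model = smf.ols("score ~ C(level) + C(rater)", data=df).fit()
print(anova_lm(model, typ=2))   # a nonsignificant C(rater) term argues against rater bias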

RESULTS

Forty-seven faculty members (14 at site 1, 33 at site 2) participated in the validation workshops (2 at the University of Chicago and 2 at Yale University), which were held in August 2011 and September 2011, providing a total of 172 observations of a possible 191 (90%).

The overall handoff quality ratings for the superior, gold standard video (superior communication, professionalism, and setting) ranged from 7 to 9 with a mean of 8.5 (standard deviation [SD] 0.7). The overall ratings for the satisfactory communication video (satisfactory communication, superior professionalism and setting) ranged from 5 to 9 with a mean of 7.3 (SD 1.1). The overall ratings for the unsatisfactory communication video (unsatisfactory communication, superior professionalism and setting) ranged from 1 to 7 with a mean of 2.6 (SD 1.2). The overall ratings for the satisfactory professionalism video (satisfactory professionalism, superior communication and setting) ranged from 4 to 8 with a mean of 5.7 (SD 1.3). The overall ratings for the unsatisfactory professionalism video (unsatisfactory professionalism, superior communication and setting) ranged from 2 to 5 with a mean of 2.4 (SD 1.03). Finally, the overall ratings for the unsatisfactory setting video (unsatisfactory setting, superior communication and professionalism) ranged from 1 to 8 with a mean of 3.1 (SD 1.7).

Figure 1 demonstrates that for the domain of communication, the raters were able to discern the unsatisfactory performance but had difficulty reliably distinguishing between superior and satisfactory performance. Figure 2 illustrates that for the domain of professionalism, raters were able to detect the videos' changing levels of performance at the extremes of behavior, with unsatisfactory and superior displays more readily identified. Figure 3 shows that for the domain of setting, the raters were able to discern the unsatisfactory versus superior levels of setting. Of note, we also found a moderate, significant correlation between ratings of professionalism and communication (r=0.47, P<0.001).

Figure 1
Faculty ratings of communication by performance. The handoff Clinical Examination Exercise ratings are a 9‐point scale: 1–3 = unsatisfactory, 4–6 = satisfactory, 7–9 = superior.
Figure 2
Faculty ratings of professionalism by performance. The handoff Clinical Examination Exercise ratings are a 9‐point scale: 1–3 = unsatisfactory, 4–6 = satisfactory, 7–9 = superior.
Figure 3
Faculty ratings of setting by performance. The handoff Clinical Examination Exercise ratings are a 9‐point scale: 1–3 = unsatisfactory, 4–6 = satisfactory, 7–9 = superior.

The Cronbach α, a measure of internal reliability and consistency, for the Handoff Mini-CEX (3 items plus overall) was 0.77, indicating acceptable internal consistency. Using data from the University of Chicago, where raters were labeled with a unique identifier, the Kendall coefficient of concordance was 0.79, demonstrating high inter-rater reliability among the faculty raters. High inter-rater reliability was also seen in the intraclass correlation coefficients for each domain: communication (0.84), professionalism (0.68), setting (0.83), and overall (0.89). Using generalizability analysis, the average reliability was above 0.9 for all domains (0.99 for overall).
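For readers who want the arithmetic behind "how many raters for a target reliability," the Spearman-Brown prophecy formula gives a quick approximation; this is an assumption on our part, as the reported generalizability analysis may have used a fuller variance-components model.

def raters_needed(single_rater_reliability, target_reliability):
    # Solve R_m = m*r / (1 + (m-1)*r) for m, the number of raters
    r, R = single_rater_reliability, target_reliability
    return R * (1 - r) / (r * (1 - R))

# With the communication ICC of 0.84 and a target reliability of 0.95:
print(round(raters_needed(0.84, 0.95), 1))   # 3.6, ie, about 4 raters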

Last, the 2‐way ANOVA (n=75 observations from 13 raters) revealed no evidence of rater bias when examining the coefficient for attending rater (P=0.55 for professionalism, P=0.45 for communication, P=0.92 for setting). The range of scores for each video, however, was broad (Table 2).

Faculty's Handoff Mini-CEX Ratings by Level of Performance Depicted in Video

NOTE: Ratings are on a 9-point scale: 1–3=unsatisfactory, 4–6=satisfactory, 7–9=superior. P values are from the 2-way analysis of variance examining the effect of the depicted performance level on the rating of that construct, controlling for rater. Cells give mean / median / range; no satisfactory-setting video was produced.

Domain | Unsatisfactory | Satisfactory | Superior | P
Professionalism | 2.3 / 2 / 1–4 | 4.4 / 4 / 3–8 | 7.0 / 7 / 3–9 | 0.026
Communication | 2.8 / 3 / 1–6 | 7.0 / 8 / 5–9 | 6.6 / 7 / 1–9 | 0.005
Setting | 3.1 / 3 / 1–8 | (none) | 7.5 / 8 / 2–9 | 0.005

DISCUSSION

This study demonstrates that valid conclusions about handoff performance can be drawn using the Handoff Mini-CEX to rate handoff quality. Using standardized videos depicting varying levels of performance in communication, professionalism, and setting, the Handoff Mini-CEX demonstrated the ability to discern between increasing levels of performance, providing evidence for the construct validity of the instrument.

We observed that faculty could reliably detect unsatisfactory professionalism with ease, and that there was a distinct correlation between faculty ratings and the internally set levels of performance displayed in the videos. This trend demonstrates that faculty were able to discern different levels of professionalism using the Handoff Mini-CEX. It became more difficult, however, for faculty to detect superior professionalism when the domain of communication was permuted. If the sender of the handoff was professional but the information delivered was disorganized, inaccurate, and missing crucial pieces of information, faculty perceived this ineffective communication as unprofessional. Prior literature on professionalism has found that communication is a necessary component of professional behavior; consequently, being a competent communicator is necessary to fulfill one's duty as a professional physician.[15, 16]

This is of note because we did find a moderate, significant correlation between ratings of professionalism and communication. It is possible that this distinction would be made clearer with formal rater training prior to any future evaluations. However, it is also possible that professionalism and communication, given the synergy between the 2 domains, cannot be separated. If this is the case, it would be important to educate clinicians to present patients in a concise, clear, and accurate way with a professional demeanor. Acknowledging professional responsibility as an integral piece of patient care is also critical in effectively communicating patient information.[5]

We also noted that faculty could detect unsatisfactory communication consistently; however, they were unable to differentiate between satisfactory and superior communication reliably or consistently. Because the unsatisfactory professionalism, unsatisfactory setting, and satisfactory professionalism videos all demonstrated superior communication, we believe that the faculty penalized communication when distractions, in the form of interruptions and rude behavior by the resident giving the handoff, interrupted the flow of the handoff. Thus, the wide ranges in scores observed by some raters may be attributed to this interaction between the Handoff Mini‐CEX domains. In the future, definitions of the anchors, including at the middle spectrum of performance, and rater training may improve the ability of raters to distinguish performance between each domain.

The overall value of the Handoff Mini-CEX lies in its ease of use, due in part to its brevity, and in the evidence for its validity in distinguishing between varying levels of performance. Given the emphasis on monitoring handoff quality and performance, the Handoff Mini-CEX provides a standard foundation from which baseline handoff performance can be easily measured and improved. It can also be used to give practicing clinicians individual feedback on their handoff practices and an opportunity to improve. This is particularly important given current recommendations by the Joint Commission that handoffs be standardized, and by the ACGME that residents be competent in handoff skills. Moreover, given the SHM's handoff recommendations and the designation of handoffs as a core competency for hospitalists, the tool enables hospitalist programs to assess their handoff practices as baseline measurements for quality improvement activities.

Faculty were able to discern the superior and unsatisfactory levels of setting with ease. After watching and rating the videos, participants said that the chaotic scene of the unsatisfactory setting video felt highly authentic, and that they were constantly interrupted during their own handoffs by pages, phone calls, and people entering the handoff space. System-level fixes, such as protected time and dedicated space for handoffs, and discouraging pages during the designated handoff time, could mitigate unsatisfactory settings.[17, 18]

Our study has several limitations. First, although this study was conducted at 2 sites, it included a small number of faculty, which may limit the generalizability of our findings. Implementation varied between Yale University and the University of Chicago, preventing the use of all data for all analyses. Furthermore, institutional culture may also affect faculty raters' perceptions, so future work aims to repeat our protocol at partner institutions, increasing both the number and diversity of participants. Finally, we were unable to compare the new, shorter Handoff Mini-CEX to the larger 9-item Handoff CEX in this study.

Despite these limitations, we believe that the Handoff Mini-CEX has potential as an instrument with which to make valid and reliable conclusions about handoff quality, and that it could be used both to evaluate handoff quality and to educate trainees and faculty on effective handoff communication.

Disclosures

This work was supported by the National Institute on Aging Short‐Term Aging‐Related Research Program (5T35AG029795), Agency for Healthcare Research and Quality (1 R03HS018278‐01), and the University of Chicago Department of Medicine Excellence in Medical Education Award. Dr. Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Arora is funded by National Institute on Aging Career Development Award K23AG033763. Prior presentations of these data include the 2011 Association of American Medical Colleges meeting in Denver, Colorado, the 2012 Association of Program Directors of Internal Medicine meeting in Atlanta, Georgia, and the 2012 Society of General Internal Medicine Meeting in Orlando, Florida.

References
  1. Nasca TJ, Day SH, Amis ES. The new recommendations on duty hours from the ACGME task force. N Engl J Med. 2010;363(2):e3.
  2. ACGME common program requirements. Effective July 1, 2011. Available at: http://www.acgme.org/acgmeweb/Portals/0/PDFs/Common_Program_Requirements_07012011[2].pdf. Accessed February 8, 2014.
  3. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign-out for patient care. Arch Intern Med. 2008;168(16):1755-1760.
  4. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign-out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14(6):401-407.
  5. Arora VM, Manjarrez E, Dressler DD, Basaviah P, Halasyamani L, Kripalani S. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433-440.
  6. Arora V, Johnson J. A model for building a standardized hand-off protocol. Jt Comm J Qual Patient Saf. 2006;32(11):646-655.
  7. World Health Organization Collaborating Centre for Patient Safety. Solutions on communication during patient hand-overs. 2007; Volume 1, Solution 1. Available at: http://www.who.int/patientsafety/solutions/patientsafety/PS-Solution3.pdf. Accessed February 8, 2014.
  8. Patterson ES, Wears RL. Patient handoffs: standardized and reliable measurement tools remain elusive. Jt Comm J Qual Patient Saf. 2010;36(2):52-61.
  9. Horwitz L, Rand D, Staisiunas P, et al. Development of a handoff evaluation tool for shift-to-shift physician handoffs: the handoff CEX. J Hosp Med. 2013;8(4):191-200.
  10. Farnan JM, Paro JAM, Rodriguez RM, et al. Hand-off education and evaluation: piloting the observed simulated hand-off experience (OSHE). J Gen Intern Med. 2010;25(2):129-134.
  11. Horwitz LI, Dombroski J, Murphy TE, Farnan JM, Johnson JK, Arora VM. Validation of a handoff tool: the Handoff CEX. J Clin Nurs. 2013;22(9-10):1477-1486.
  12. Norcini JJ, Blank LL, Duffy FD, Fortna GS. The mini-CEX: a method for assessing clinical skills. Ann Intern Med. 2003;138(6):476-481.
  13. Patterson ES, Roth EM, Woods DD, Chow R, Gomes JO. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125-132.
  14. Holmboe ES, Huot S, Chung J, Norcini J, Hawkins RE. Construct validity of the mini-clinical evaluation exercise (mini-CEX). Acad Med. 2003;78(8):826-830.
  15. Reddy ST, Farnan JM, Yoon JD, et al. Third-year medical students' participation in and perceptions of unprofessional behaviors. Acad Med. 2007;82(10 suppl):S35-S39.
  16. Hafferty FW. Professionalism—the next wave. N Engl J Med. 2006;355(20):2151-2152.
  17. Chang VY, Arora VM, Lev-Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand-off communication. Pediatrics. 2010;125(3):491-496.
  18. Greenstein EA, Arora VM, Staisiunas PG, Banerjee SS, Farnan JM. Characterising physician listening behaviour during hospitalist handoffs using the HEAR checklist. BMJ Qual Saf. 2013;22(3):203-209.
Issue
Journal of Hospital Medicine - 9(7)
Page Number
441-446
Display Headline
Using standardized videos to validate a measure of handoff quality: The handoff mini‐clinical examination exercise
Article Source

© 2014 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Vineet Arora, MD, 5841 South Maryland Ave., MC 2007, AMB W216, Chicago, IL 60637; Telephone: 773‐702‐8157; Fax: 773–834‐2238; E‐mail: varora@medicine.bsd.uchicago.edu

Entrusting Residents with Tasks

Article Type
Changed
Sun, 05/21/2017 - 14:40
Display Headline
How do supervising physicians decide to entrust residents with unsupervised tasks? A qualitative analysis

Determining when residents are prepared to perform clinical care tasks safely and independently is neither easy nor well understood. Educators have struggled to identify robust ways to evaluate trainees and their preparedness to treat patients while unsupervised. Trust allows the trainee to experience increasing levels of participation and responsibility in the workplace in a way that builds competence for future practice. The breadth of knowledge and skills required to become a competent and safe physician, coupled with a busy workload, compounds this challenge. Notably, a technically proficient trainee may not have the clinical judgment to treat patients without supervision.

The Accreditation Council for Graduate Medical Education (ACGME) has previously outlined 6 core competencies for residency training: patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice.[1] A systematic literature review suggests that traditional trainee evaluation tools are difficult to use and unreliable in measuring the competencies independently from one another, and that certain competencies are consistently difficult to quantify in a reliable and valid way.[2] Despite efforts to create objective tools, the evaluation of trainees' clinical performance remains strongly influenced by subjective measures and continues to be highly variable among different evaluators.[3] Objectively measuring resident autonomy and readiness to supervise junior colleagues remains imprecise.[4]

The ACGME's Next Accreditation System (NAS) incorporates educational milestones as part of the reporting of resident training outcomes.[5] The milestones allow for the translation of the core competencies into integrative and observable abilities. Furthermore, the milestone categories are stratified into tiers to allow progress to be measured longitudinally and by task complexity using a novel assessment strategy.

The development of trust between supervisors and trainees is a critical step in decisions to allow increased responsibility and autonomous decision making, which is an important aspect of physician training. Identifying the factors that influence supervisors' evaluation of resident competency and capability is at the crux of trainee maturation as well as patient safety.[4] Trust, defined as the believability and discernment attendings ascribe to resident physicians, plays a large role in attending evaluations of residents during their clinical rotations.[3] Trust also shapes decisions about entrustable professional activities (EPAs), those tasks that require mastery prior to the completion of training milestones.[6] A study of entrustment decisions made by attending anesthesiologists identified factors that contribute to the amount of autonomy given to residents, such as trainee trustworthiness, medical knowledge, and level of training.[4] Building on that study, our aim was 2-fold: (1) to use deductive qualitative analysis to apply this framework to existing resident and attending data, and (2) to define the categories within this framework and describe how internal medicine attending and resident physician perceptions of trust can impact clinical decision making and patient care.

METHODS

We are reporting on a secondary data analysis of interview transcripts from a study conducted on the inpatient general medicine service at the University of Chicago, an academic tertiary care medical center. The methods for data collection and full consent have been outlined previously.[7, 8, 9] The institutional review board of the University of Chicago approved this study.

Briefly, between January 2006 and November 2006, all eligible internal medicine resident physicians, postgraduate year (PGY)‐2 or PGY‐3, and attending physicians, either generalists or hospitalists, were privately interviewed within 1 week of their final call night on the inpatient general medicine rotation to assess decision making and clinical supervision during the rotation. All interviews were conducted by 1 investigator (J.F.), and discussions were audio taped and transcribed for analysis. Interviews were conducted at the conclusion of the rotation to prevent any influence on resident and attending behavior during the rotation.

The critical incident technique, a procedure for collecting direct observations of human behavior that have critical significance for the decision-making process, was used to solicit examples of ineffective supervision, inquiring about 2 to 3 important clinical decisions made on the most recent call night, with probes to identify issues of trust, autonomy, and decision making.[10] A critical incident is one that contributes significantly, either positively or negatively, to the process.

Appreciative inquiry, a technique that aims to uncover the best aspects of the clinical encounter being explored, was used to solicit examples of effective supervision. Probes were used to identify factors, either personal or situational, that influenced the withholding or provision of resident autonomy during periods of clinical care delivery.[11]

All identifiable information was removed from the interview transcripts to protect participant and patient confidentiality. Deductive qualitative analysis was performed using the conceptual EPA framework, which describes several factors that influence the attending physicians' decisions to deem a resident trustworthy to independently fulfill a specific clinical task.[4] These factors include (1) the nature of the task, (2) the qualities of the supervisor, (3) the qualities of the trainee and the quality of the relationship between the supervisor and the trainee, and (4) the circumstances surrounding the clinical task.

The deidentified, anonymous transcripts were reviewed by 2 investigators (K.J.C., J.M.F.) and analyzed using the constant comparative method to deductively map the content to the existing framework and generate novel subthemes.[12, 13, 14] Novel categories within each of the domains were inductively generated. The 2 reviewers independently applied the themes to a randomly selected 10% portion of the interview transcripts to assess inter-rater reliability, which was quantified with the generalized kappa statistic. Discrepancies between reviewers regarding the assignment of codes were resolved via discussion and third-party adjudication until consensus was achieved on the thematic structure. The codes were then applied to the entire dataset.
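As an illustration of how such agreement can be quantified, the sketch below computes a generalized (Fleiss) kappa with the statsmodels package; the coded excerpts are hypothetical and stand in for the double-coded 10% sample.

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = coded transcript excerpts, columns = the 2 reviewers; entries are the
# assigned domain (0=trainee, 1=supervisor, 2=task, 3=systems)
codes = np.array([[0, 0], [1, 1], [2, 2], [3, 3], [0, 1],
                  [2, 2], [1, 1], [3, 3], [0, 0], [2, 1]])
table, _ = aggregate_raters(codes)                      # counts of each category per excerpt
print(round(fleiss_kappa(table, method="fleiss"), 2))   # chance-corrected agreement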

RESULTS

Between January 2006 and November 2006, 46 of 50 (92%) attending physicians and 44 of 50 (88%) resident physicians were interviewed following the conclusion of their general medicine inpatient rotation. Of the attending physicians, 55% were male, 45% were female, and 38% were academic faculty hospitalists. Of the residents who completed interviews, 47% were male, 53% were female, 52% were PGY-2, and 45% were PGY-3.

A total of 535 mentions of trust were abstracted from the transcripts. The 4 major domains that influence trust (trainee factors [Table 1], supervisor factors [Table 2], task factors [Table 3], and systems factors [Table 4]) were deductively coded, with several novel categories and subthemes emerging. The domains were consistent across the postgraduate years of the trainees, and no differences in themes were noted between postgraduate years other than those explicitly stated.

Trainee Factors

NOTE: Numbers in parentheses are mentions of each domain, category, or subtheme. A = attending comment; R = resident comment.

Domain: Trainee factors (170): characteristics specific to the trainee that either promote or discourage trust.

Category: Personal characteristics (78): traits that impact attendings' decisions regarding trust/allowance of autonomy.
- Confidence and overconfidence (29): Displayed level of comfort when approaching specific clinical situations. "I think I have a personality and presenting style [that] people think that I know what I am talking about and they just let me run with it." (R)
- Accountability (18): Sense of responsibility, including ability to follow up on details regarding patient care. "[What] bothered me the most was that kind of lack of accountability for patient care, and it makes the whole dynamic of rounds much more stressful. I ended up asking him to page me every day to run the list." (A)
- Familiarity/reputation (18): Comfort with the trainee gained through prior working experience, or the reputation of the trainee based on discussion with other supervisors. "I do have to get to know someone a little to develop that level of trust, to know that it is okay to not check the labs every day, okay to not talk to them every afternoon." (A)
- Honesty (13): Sense the trainee is not withholding information in order to steer decision making toward a specific outcome. "[The residents] have more information than I do and they can clearly spin that information, and it is very difficult to unravel unless you treat them like a hostile witness on the stand." (A)

Category: Clinical attributes (92): skills demonstrated in the context of patient care that promote or inhibit trust.
- Leadership (19): Ability to organize, teach, and manage coresidents, interns, and students. "I want them to be in charge, deciding the plan and sitting down with the team before rounds." (A)
- Communication (12): Establishing and encouraging conversation with the supervisor regarding decision making. "Some residents call me regularly and let me know what's going on and others don't, and those who don't I really have trouble with; if you're not calling to check in, then I don't trust your judgment." (A)
- Specialty (6): Trainee future career plans. "Whether it's right or wrong, nonmedicine interns may not be as attentive to smaller details, and so I had to be attentive to smaller details on [his] patients." (R2)
- Medical knowledge (39): Ability to display an appropriate level of clinical acumen and apply evidence-based medicine. "I definitely go on my own gestalt of talking with them and deciding if what they do is reasonable. If they can't explain things to me, that's when I worry." (A)
- Recognition of limitations (16): Trainee's ability to recognize his/her own weaknesses, accept criticism, and solicit help when appropriate. "The first thing is that they know their limits and ask for help either in rounds or outside of rounds. That indicates to me that as they are out there on their own they are less likely to do things that they don't understand." (A)
Supervisor Factors

NOTE: Numbers in parentheses are mentions of each domain, category, or subtheme. A = attending comment; R = resident comment.

Domain: Supervisor factors (120): characteristics specific to the supervisor that either promote or discourage trust.

Category: Approachability (34): personality traits, such as approachability, that impact the trainees' perception regarding trust/allowance of autonomy. Sense that the attending physician is available and receptive to questions from trainees. "I think [attending physicians] being approachable and available to you if you need them is really helpful." (R)

Category: Clinical attributes (86): skills demonstrated in the context of patient care that promote or inhibit trust.
- Institutional obligation (17): The attending physician is the one contractually and legally responsible for the provision of high-quality and appropriate patient care. "If [the residents] have a good reason I can be argued out of my position. I am ultimately responsible and have to choose if there is some serious dispute." (A)
- Experience and expertise (29): Clinical experience, area of specialty, and research interests of the attending physician. "You have to be confident in your own clinical skills and knowledge, confident enough that you can say it's okay for me to let go a little bit." (A)
- Observation-based evaluation (27): Evaluation of trainee decision-making ability during the early part of the attending/trainee relationship. "It's usually the first post-call day experience, the first on-call and post-call day experience. One of the big things is [if they can] tell if a patient is sick or not sick; if they are missing at that level then I get very nervous. I really get a sense [of] how they think about patients." (A)
- Educational obligation (13): Acknowledging the role of the attending as clinical teacher. "My theory with the interns was that they should do it because that's how you learn." (R)
Task Factors

NOTE: Numbers in parentheses are mentions of each domain, category, or subtheme.

Domain: Task factors (146): details or characteristics of the task that encouraged or impeded contacting the supervisor.

Category: Clinical characteristics (103)
- Case complexity (25): Evaluation of the level of difficulty in patient management. "I don't expect to be always looking over [the resident's] shoulder, I don't check labs every day, and I don't call them if I see a potassium of 3; I assume that they are going to take care of it."
- Family/ethical dilemma (10): Uncertainty regarding respecting the wishes of patients and other ethical dilemmas. "There was 1 time I called because we had a very sick patient who had a lot of family asking for more aggressive measures, and I called to be a part of the conversation."
- Interdepartment collaboration (18): Difficulties when treating patients managed by multiple consult services. "I have called [the attending] when I have had trouble pushing things through the system; if we had trouble getting tests or trouble with a particular consult team I would call him."
- Urgency/severity of illness (13): Clinical condition of the patient requires immediate or urgent intervention. "If I have something that is really pressing I would probably page my attending. If it's a question [of] just something that I didn't know the answer to [or] wasn't that urgent I could turn to my fellow residents."
- Transitions of care (37): Communication with the supervisor because of concern/uncertainty regarding patient transition decisions. "We wanted to know if it was okay to discharge somebody or if something changes where something in the plan changes. I usually text page her or call her."

Category: Situation or environment characteristics (49)
- Proximity of attending physicians and support staff (10): Availability of attending physicians and staff resources. "I have been called in once or twice to help with a lumbar puncture or paracentesis, but not too often. The procedure service makes life much easier than it used to be."
- Team culture (33): Presence or absence of a collaborative and supportive group environment. "I had a team that I did trust. I think we communicated well; we were all sort of on the same page."
- Time of day (6): Time of the task. "Once it's past 11 pm, I feel like I shouldn't call; the threshold is higher, the patient has to be sicker."
Systems Factors

NOTE: Numbers in parentheses are mentions of each domain or category.

Domain: Systems factors (99): unmodifiable factors not related to personal characteristics or knowledge of the trainee or supervisor.
- Workload (15): Increasing trainee clinical workload results in a more intensive experience. "They [residents] get 10 patients within a pretty concentrated time, so they really have to absorb a lot of information in a short period of time."
- Institutional culture (4): Anticipated quality of the trainee because of the status of the institution. "I assume that our residents and interns are top notch, so I go in with this real assumption that I expect the best of them because we are [the best]."
- Clinical experience of trainee (36): Types of clinical experience prior to the supervisor/trainee interaction. "The interns have done as much [general inpatient medicine] months as I have; they had both done like 2 or 3 months really close together, so they were sort of at their peak knowledge."
- Level of training (25): Postgraduate year of trainee. "It depends on the experience level of the resident. A second year who just finished internship, I am going to supervise more closely and be more detail oriented; a fourth year medicine-pediatrics resident who is almost done, I will supervise a lot less."
- Duty hours/efficiency pressures (5): Absence of residents due to other competing factors, including compliance with work-hour restrictions. "Before the work-hour [restrictions], when [residents] were here all the time and knew everything about the patients, I found them to be a lot more reliable; and now they are still supposed to be in charge, but hell, I am here more often than they are. I am here every day, I have more information than they do. How can you run the show if you are not here every day?"
- Philosophy of medical education (14): Belief that trainees learn by the provision of completely autonomous decision making. "When you are not around, [the residents] have autonomy; they are the people making the initial decisions and making the initial assessments. They are the ones who are there in the middle of the night, the ones who are there at 3 o'clock in the afternoon. The resident is supposed to have room to make decisions. When I am not there, it's not my show."

Trainee Factors

Attending and resident physicians both cited trainee factors as major determinants of granting entrustment (Table 1). Within this domain, the categories described included trainee personal characteristics and clinical attributes. Of the subthemes within the category of personal characteristics, the perceived confidence or overconfidence of the trainee was mentioned most often. Other subthemes included accountability, familiarity, and honesty. Attending physicians reported using perceived resident confidence as a gauge of the trainee's true ability and comfort. Conversely, some attending physicians reported that perceived overconfidence was a red flag that warranted increased scrutiny. Faculty identified overconfidence in trainees who were unable to recognize their limitations in either technical skill or knowledge. Confidence was noted in trainees who recognized their own limitations while enacting effective management plans, and who prioritized patient needs over their personal needs.

The clinical attributes of trainees described by attendings included leadership skills, communication skills, anticipated specialty, medical knowledge, and perceived recognition of limitations. All participants expressed that the possession of adequate medical knowledge was the most important clinical skills-related factor in the development of trust. Trainee demonstration of judgment, including applying evidence-based practice, supported attending physicians' decisions to give residents more autonomy in managing patients. Many attending physicians described a specific pattern of observation and evaluation in which they relied on impressions formed early in the rotation to inform their entrustment decisions throughout the rotation. Several attending physicians highlighted the use of this early litmus test: behavior on the first day/call night and during postcall interactions offered particularly important opportunities to gauge the ability of a resident to triage new patient admissions, manage their anxiety and uncertainty, and demonstrate maturity and professionalism. Several faculty members discussed examples of their litmus test, including checking and knowing laboratory data prior to rounds but not mentioning their findings until they had noted the resident was unaware ("[I]f I see a 2 g hemoglobin drop when I check the [electronic medical record] and they don't bring it up, I will bring it to their attention, and then I'll get more involved.") or assessing the management of both straightforward and complex patients. They would then use this initial impression to determine their degree of involvement in the care of the patient.

The quality and nature of communication, particularly the frequency of contact between resident and attending, was used as a barometer of trainee judgment. Furthermore, attending physicians expressed that they would often micromanage patient care if they did not trust a trainee's ability to reliably and frequently communicate patient status, concerns, and uncertainty about future decisions. Some level of uncertainty was generally seen in a positive light by attending physicians because it signaled that trainees had a mature understanding of their limitations. Finally, the trainee's expressed future specialty, especially if the trainee was a preliminary PGY-1 resident or a more senior resident anticipating subspecialty training in a procedural specialty, impacted the degree of autonomy provided.

Supervisor Factors

Supervisor characteristics were further categorized into approachability and clinical attributes (Table 2). Approachability, as a proxy for the quality of the relationship, was the personality characteristic cited by residents as most influencing trust. It was often described by both attending and resident physicians as the presence of a supportive team atmosphere created through the explicit declaration of availability to help with patient care tasks. Some attending physicians described the importance of expressing enthusiasm when receiving queries from their team to foster an atmosphere of nonjudgmental collaboration.

The clinical experience and knowledge base of the attending physician played a role in the provision of autonomy, particularly in times of disagreement about particular clinical decisions. Conversely, attending physicians who had spent less time on inpatient general medicine were more willing to yield to resident suggestions.

Task Factors

The domain of task factors was further divided into categories pertaining to the clinical aspects of the task and those pertaining to the context, that is, the environment in which entrustment decisions were made (Table 3). Clinical characteristics included case complexity, presence of an ethical dilemma, interdepartmental collaboration, urgency/severity of the situation, and transitions of care. The environmental characteristics included physical proximity of supervisors/support, team culture, and time of day. Increasing case complexity, especially the coexistence of legal and/or ethical dilemmas, was often mentioned as a factor driving greater attending involvement. Conversely, straightforward clinical decisions, such as electrolyte repletion, were described as sufficiently easy to allow limited attending involvement. Transitions of care, such as patient discharge or transfer, required greater communication and attending involvement or guidance, regardless of case complexity.

Attending and resident physicians reported that team dynamics played a large role in the development, granting, or discouragement of trust. Teams with a positive rapport reported a collaborative environment that fostered increased trust by the attending and led to greater resident autonomy. Conversely, team discord that influenced the supervisor-trainee relationship, often described as toxic attitudes within the team, was singled out as the reason attending physicians would feel the need to engage more directly in patient care and, by extension, trust residents less to manage their patients.

Systems Factors

Systems factors were described as nonmodifiable factors unrelated to the characteristics of the supervisor, the trainee, or the clinical task (Table 4). The subthemes that emerged included workload, institutional culture, trainee experience, level of training, and duty-hour/efficiency pressures. Residents and attending physicians noted that trainee postgraduate year and clinical experience commonly influenced the provision of autonomy and supervision by attendings. Participants reported that adequate clinical experience was of greater concern given the new duty-hour restrictions, increased workload, and efficiency pressures. Attending physicians noted that trainee absences, even those required to comply with duty-hour restrictions, had a negative effect on entrustment-granting decisions. Many attendings felt that a trainee had to be physically present to make informed decisions on the inpatient medicine service.

DISCUSSION

Clinical supervisors must hold the quality of care constant while balancing the amount of supervision and autonomy provided to learners in procedural tasks and clinical decision making. We found that the development of trust is multifactorial and highly contextual. It occurs under the broad constructs of task, supervisor, trainee, and environmental factors, as described in prior work. We also demonstrate that what determines these broader factors is often highly subjective and frequently independent of objective measures of trainee performance. Many decisions are based on personal characteristics, such as the perception of honesty, disposition, perceived confidence or overconfidence of the trainee, prior experience, and expressed future field of specialty.

Our findings are consistent with prior research but go further in describing and demonstrating the use of factors other than clinical knowledge and skill in the formation of a multidimensional construct of trust. Kennedy et al. identified 4 dimensions of trust (knowledge and skill, discernment, conscientiousness, and truthfulness)[15] and demonstrated that supervising physicians rely on specific processes, namely double checks and language cues, to assess trainee trustworthiness. This is consistent with our results, which demonstrate that many attending physicians independently verify information, such as laboratory findings, to inform their perceptions of trainee honesty, attention to detail, and ability to follow orders reliably. Furthermore, our subthemes of communication and the demonstration of logical clinical reasoning correspond to Kennedy's language cues.[15] We found that language cues are used as markers of trustworthiness, particularly early in the rotation, as a litmus test to gauge the trainee's integrity and ability to assess and treat patients unsupervised.

To date, much has been written about the importance of direct observation in the evaluation of trainees.[16, 17, 18, 19] Our results demonstrate that, despite the availability of validated performance‐based assessment methods such as the objective structured clinical examination and the mini‐clinical evaluation exercise, supervising clinicians use a multifactorial, highly nuanced, and subjective process to assess competence and grant entrustment.[3] Several factors used to determine trustworthiness beyond direct observation are subjective in nature, specifically the trainee's prior experience and expressed career choice.

It is encouraging that attending physicians make use of direct observation to inform decisions of entrustment, albeit in an informal and unstructured way. They also appear to take into account the context and setting in which the observation occurs, considering both environmental factors and factors that relate to the task itself.[20] For example, attendings and residents reported that team dynamics played a large role in influencing trust decisions. We also found that attending physicians rely on indirect observation and will inquire among their colleagues and senior residents to gain information about their trainees' abilities and integrity. Evaluation tools that share trainees' level of preparedness, prior feedback, and experience could facilitate the determination of readiness to complete EPAs as well as the reporting of achieved milestones in accordance with the ACGME NAS.

Sharing knowledge about trainees among attendings is common and of increasing importance given attending physicians' shortened exposure to trainees under residency work‐hour restrictions and growing productivity pressures. In our study, attending physicians described work‐hour restrictions as detrimental to trainee trustworthiness, whether through decreased accountability for patient care or through forced absences that kept trainees from fully participating in daily ward activities and knowing their patients. Attending physicians felt that trainees did not know their patients well enough to make independent decisions about care. The increasing transition to a shift‐based structure of inpatient medicine may leave less time for direct observation and make it more difficult for attendings to justify their decisions about engendering trust. In addition, the fragmentation of training introduced by the work‐hour regulations may have consequences for the development of clinical skill and decision making, such that greater attention to supervision, and a longer lead time to entrustment, may be needed in certain circumstances. Attendings need guidance on how to observe trainees effectively in the new work environment and how to role model decision making within their compressed exposure to housestaff.

Our study has several limitations. The organizational structure and culture of our institution are unique to 1 academic setting, which may limit the generalizability of these findings to the population at large.[21] In addition, recall bias may have influenced the interview content, given that interviews were performed after the conclusion of the rotation. The study interviews took place in 2006, and it is reasonable to believe that some perceptions concerning duty‐hour restrictions and competency‐based graduate medical education have changed since then. However, based on our ongoing research over the past 5 years[4] and our experience with entrustment factors, we believe that the participants' perceptions of trust and competency remain valid and largely unchanged, given the similarity of our findings to the accepted ten Cate framework. In addition, this work was done after the first iteration of the work‐hour regulations but prior to the implementation of explicit supervisory levels, so it may represent a truer state of the supervisory relationship before external regulations were applied. Finally, this work represents a single internal medicine residency training program and may not be generalizable to other specialties that possess different cultural factors influencing the decision for entrustment. However, the congruence of our data with the original work of ten Cate, conducted in gynecology,[6] and with that of Sterkenburg et al. in anesthesiology,[4] suggests that these key factors are common across training programs.

In conclusion, we provide new insights into the subjective factors that inform perceptions of trust and entrustment decisions by supervising physicians, specifically subjective trainee characteristics, team dynamics, and informal observation. There was agreement among attendings about which elements of competence, related to trainee, supervisor, task, and environmental factors, are most important in their entrustment decisions. Rather than undervaluing the use of personal factors in the determination of trust, we believe that acknowledging and appreciating these factors may give supervisors more confidence and better tools to assess resident physicians and to understand how trainees' personality traits relate to and impact their professional competence. Our findings are relevant for the development of assessment instruments to evaluate whether medical graduates are ready for safe practice without supervision.

ACKNOWLEDGEMENTS

Disclosures: Dr. Kevin Choo was supported by Scholarship and Discovery, University of Chicago, while in his role as a fourth‐year medical student. This study received institutional review board approval prior to evaluation of our human participants. Portions of this study were presented as an oral abstract at the 35th Annual Meeting of the Society of General Internal Medicine, Orlando, Florida, May 9-12, 2012.

References
  1. Accreditation Council for Graduate Medical Education. Common program requirements. Available at: http://www.acgme.org/acgmeweb/tabid/429/ProgramandInstitutionalAccreditation/CommonProgramRequirements.aspx. Accessed November 30, 2013.
  2. Lurie SJ, Mooney CJ, Lyness JM. Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review. Acad Med. 2009;84:301-309.
  3. Ginsburg S, McIlroy J, Oulanova O, Eva K, Regehr G. Toward authentic clinical evaluation: pitfalls in the pursuit of competency. Acad Med. 2010;85(5):780-786.
  4. Sterkenburg A, Barach P, Kalkman C, Gielen M, ten Cate O. When do supervising physicians decide to entrust residents with unsupervised tasks? Acad Med. 2010;85(9):1408-1417.
  5. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366(11):1051-1056.
  6. ten Cate O. Trust, competence and the supervisor's role in postgraduate training. BMJ. 2006;333:748-751.
  7. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. Resident uncertainty in clinical decision making and impact on patient care: a qualitative study. Qual Saf Health Care. 2008;17(2):122-126.
  8. Farnan JM, Johnson JK, Meltzer DO, et al. Strategies for effective on-call supervision for internal medicine residents: the SUPERB/SAFETY model. J Grad Med Educ. 2010;2(1):46-52.
  9. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. On‐call supervision and resident autonomy: from micromanager to absentee attending. Am J Med. 2009;122(8):784-788.
  10. Flanagan JC. The critical incident technique. Psychol Bull. 1954;51(4):327-358.
  11. Grant S, Humphries M. Critical evaluation of appreciative inquiry: bridging an apparent paradox. Action Res. 2006;4(4):401-418.
  12. Strauss A, Corbin J. Basics of Qualitative Research. 2nd ed. Thousand Oaks, CA: Sage Publications; 1998.
  13. Fraenkel JR, Wallen NE. How to Design and Evaluate Research in Education. New York, NY: McGraw-Hill; 2003.
  14. Miles MB, Huberman AM. Qualitative Data Analysis. Thousand Oaks, CA: Sage; 1994.
  15. Kennedy TJT, Regehr G, Baker GR, Lingard L. Point‐of‐care assessment of medical trainee competence for independent clinical work. Acad Med. 2008;83:S89-S92.
  16. ten Cate O, Scheele F. Viewpoint: competency‐based postgraduate training: can we bridge the gap between theory and clinical practice? Acad Med. 2007;82(6):542-547.
  17. Dijksterhuis MJK, Voorhuis M, Teunissen PW, et al. Assessment of competence and progressive independence in postgraduate clinical training. Med Educ. 2009;43:1156-1165.
  18. Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA. 2009;302(12):1316-1326.
  19. Epstein RM. Assessment in medical education. N Engl J Med. 2007;356:387-396.
  20. Schraagen JM, Schouten A, Smit M, Beek D, Ven J, Barach P. A prospective study of paediatric cardiac surgical microsystems: assessing the relationships between non‐routine events, teamwork and patient outcomes. BMJ Qual Saf. 2011;20(7):599-603.
  21. Finfgeld‐Connett D. Generalizability and transferability of meta‐synthesis research findings. J Adv Nurs. 2010;66(2):246-254.
Article PDF
Issue
Journal of Hospital Medicine - 9(3)
Publications
Page Number
169-175
Sections
Files
Files
Article PDF
Article PDF

Determining when residents are independently prepared to perform clinical care tasks safely is not easy or understood. Educators have struggled to identify robust ways to evaluate trainees and their preparedness to treat patients while unsupervised. Trust allows the trainee to experience increasing levels of participation and responsibility in the workplace in a way that builds competence for future practice. The breadth of knowledge and skills required to become a competent and safe physician, coupled with the busy workload confound this challenge. Notably, a technically proficient trainee may not have the clinical judgment to treat patients without supervision.

The Accreditation Council of Graduate Medical Education (ACGME) has previously outlined 6 core competencies for residency training: patient care, medical knowledge, practice‐based learning and improvement, interpersonal and communication skills, professionalism, and systems‐based practice.[1] A systematic literature review suggests that traditional trainee evaluation tools are difficult to use and unreliable in measuring the competencies independently from one another, whereas certain competencies are consistently difficult to quantify in a reliable and valid way.[2] The evaluation of trainees' clinical performance despite efforts to create objective tools remain strongly influenced by subjective measures and continues to be highly variable among different evaluators.[3] Objectively measuring resident autonomy and readiness to supervise junior colleagues remains imprecise.[4]

The ACGME's Next Accreditation System (NAS) incorporates educational milestones as part of the reporting of resident training outcomes.[5] The milestones allow for the translation of the core competencies into integrative and observable abilities. Furthermore, the milestone categories are stratified into tiers to allow progress to be measured longitudinally and by task complexity using a novel assessment strategy.

The development of trust between supervisors and trainees is a critical step in decisions to allow increased responsibility and the provision of autonomous decision making, which is an important aspect of physician training. Identifying the factors that influence the supervisors' evaluation of resident competency and capability is at the crux of trainee maturation as well as patient safety.[4] Trust, defined as believability and discernment by attendings of resident physicians, plays a large role in attending evaluations of residents during their clinical rotations.[3] Trust impacts the decisions of successful performance of entrustable professional activities (EPAs), or those tasks that require mastery prior to completion of training milestones.[6] A study of entrustment decisions made by attending anesthesiologists identified the factors that contribute to the amount of autonomy given to residents, such as trainee trustworthiness, medical knowledge, and level of training.[4] The aim of our study, building on this study, was 2‐fold: (1) use deductive qualitative analysis to apply this framework to existing resident and attending data, and (2) define the categories within this framework and describe how internal medicine attending and resident physician perceptions of trust can impact clinical decision making and patient care.

METHODS

We are reporting on a secondary data analysis of interview transcripts from a study conducted on the inpatient general medicine service at the University of Chicago, an academic tertiary care medical center. The methods for data collection and full consent have been outlined previously.[7, 8, 9] The institutional review board of the University of Chicago approved this study.

Briefly, between January 2006 and November 2006, all eligible internal medicine resident physicians, postgraduate year (PGY)‐2 or PGY‐3, and attending physicians, either generalists or hospitalists, were privately interviewed within 1 week of their final call night on the inpatient general medicine rotation to assess decision making and clinical supervision during the rotation. All interviews were conducted by 1 investigator (J.F.), and discussions were audio taped and transcribed for analysis. Interviews were conducted at the conclusion of the rotation to prevent any influence on resident and attending behavior during the rotation.

The critical incident technique, a procedure used for collecting direct observations of human behavior that have critical significance on the decision‐making process, was used to solicit examples of ineffective supervision, inquiring about 2 to 3 important clinical decisions made on the most recent call night, with probes to identify issues of trust, autonomy, and decision making.[10] A critical incident can be described as one that makes a significant contribution, either positively or negatively, on the process.

Appreciative inquiry, a technique that aims to uncover the best things about the clinical encounter being explored, was used to solicit examples of effective supervision. Probes are used to identify factors, either personal or situational, that influenced the withholding or provision of resident autonomy during periods of clinical care delivery.[11]

All identifiable information was removed from the interview transcripts to protect participant and patient confidentiality. Deductive qualitative analysis was performed using the conceptual EPA framework, which describes several factors that influence the attending physicians' decisions to deem a resident trustworthy to independently fulfill a specific clinical task.[4] These factors include (1) the nature of the task, (2) the qualities of the supervisor, (3) the qualities of the trainee and the quality of the relationship between the supervisor and the trainee, and (4) the circumstances surrounding the clinical task.

The deidentified, anonymous transcripts were reviewed by 2 investigators (K.J.C., J.M.F.) and analyzed using the constant comparative methods to deductively map the content to the existing framework and generate novel sub themes.[12, 13, 14] Novel categories within each of the domains were inductively generated. Two reviewers (K.J.C., J.M.F.) independently applied the themes to a randomly selected 10% portion of the interview transcripts to assess the inter‐rater reliability. The inter‐rater agreement was assessed using the generalized kappa statistic. The discrepancies between reviewers regarding assignment of codes were resolved via discussion and third party adjudication until consensus was achieved on thematic structure. The codes were then applied to the entire dataset.

RESULTS

Between January 2006 and November 2006, 46 of 50 (88%) attending physicians and 44 of 50 (92%) resident physicians were interviewed following the conclusion of their general medicine inpatient rotation. Of attending physicians, 55% were male, 45% were female, and 38% were academic faculty hospitalists. Of the residents who completed interviews, 47% were male, 53% were female, 52% were PGY‐2, and 45% were PGY‐3.

A total of 535 mentions of trust were abstracted from the transcripts. The 4 major domains that influence trusttrainee factors (Table 1), supervisor factors (Table 2), task factors (Table 3), and systems factors (Table 4)were deductively coded with several emerging novel categories and subthemes. The domains were consistent across the postgraduate year of trainee. No differences in themes were noted, other than those explicitly stated, between the postgraduate years.

Trainee Factors
Domain (N)Category (N)Subtheme (N)Definition and Representative Comment
  • NOTE: Abbreviations: A, attending comment; N, number of mentions of specific domain, category, or subtheme; R, resident comment.

Trainee factors (170); characteristics specific to the trainee that either promote or discourage trust.Personal characteristics (78); traits that impact attendings' decision regarding trust/allowance of autonomy.Confidence and overconfidence (29)Displayed level of comfort when approaching specific clinical situations. I think I havea personality and presenting style [that] people think that I know what I am talkingabout and they just let me run with it. (R)
Accountability (18)Sense of responsibility, including ability to follow‐up on details regarding patient care. [What] bothered me the most was that that kind of lack of accountability for patient careand it makes the whole dynamic of rounds much more stressful. I ended up asking him to page me every day to run the list. (A)
Familiarity/ reputation (18)Comfort with trainee gained through prior working experience, or reputation of the trainee based on discussion with other supervisors. I do have to get to know someone a little to develop that level of trust, to know that it is okay to not check the labs every day, okay to not talk to them every afternoon. (A)
Honesty (13)Sense trainee is not withholding information in order to impact decision making toward a specific outcome. [The residents] have more information than I do and they can clearly spin that information, and it is very difficult to unravelunless you treat them like a hostile witness on the stand.(A)
Clinical attributes (92); skills demonstrated in the context of patient care that promote or inhibit trust.Leadership (19)Ability to organize, teach, and manage coresidents, interns, and students. I want them to be in chargedeciding the plan and sitting down with the team before rounds. (A)
Communication (12)Establishing and encouraging conversation with supervisor regarding decision making.Some residents call me regularly and let me know what's going on and others don't, and those who don't I really have trouble withif you're not calling to check in, then I don't trust your judgment. (A)
Specialty (6)Trainee future career plans. Whether it's right or wrong, nonmedicine interns may not be as attentive to smaller details, and so I had to be attentive to smaller details on [his] patients. (R2)
Medical knowledge (39)Ability to display appropriate level of clinical acumen and apply evidence‐based medicine. I definitelygo on my own gestalt of talking with them and deciding if what they do is reasonable. If they can't explain things to me, that's when I worry. (A)
Recognition of limitations (16)Trainee's ability to recognize his/her own weaknesses, accept criticism, and solicit help when appropriate. The first thing is that they know their limits and ask for help either in rounds or outside of rounds. That indicates to me that as they are out there on their own they are less likely to do things that they don't understand. (A)
Supervisor Factors
Domain (N)Major Category (N)Subtheme (N)Definition and Representative Comment
  • NOTE: Abbreviations: A, attending comment; N, number of mentions of specific domain, category, or subtheme; R, resident comment.

Supervisor factors (120); characteristics specific to the supervisor which either promote or discourage trust.Approachability (34); personality traits, such as approachability, which impact the trainees' perception regarding trust/allowance of autonomy. Sense that the attending physician is available to and receptive to questions from trainees. I think [attending physicians] being approachable and available to you if you need them is really helpful. (R)
Clinical attributes (86); skills demonstrated in the context of patient care that promote or inhibit trust.Institutional obligation (17)Attending physician is the one contractually and legally responsible for the provision of high‐quality and appropriate patient care. If [the residents] have a good reason I can be argued out of my position. I am ultimately responsible andhave to choose if there is some serious dispute. (A)
Experience and expertise (29)Clinical experience, area of specialty, and research interests of the attending physician. You have to be confident in your own clinical skills and knowledge, confident enough that you can say its okay for me to let go a little bit. (A)
Observation‐based evaluation (27)Evaluation of trainee decision‐making ability during the early part of the attending/trainee relationship. It's usually the first post‐call day experience, the first on‐call and post‐call day experience. One of the big things is [if they can] tell if a patient is sick or not sickif they are missing at that level then I get very nervous. I really get a sense [of] how they think about patients. (A)
Educational obligation (13)Acknowledging the role of the attending as clinical teacher. My theory with the interns was that they should do it because that's how you learn. (R)
Task Factors
Domain (N)Major Category (N)Subtheme (N)Definition
  • NOTE: Abbreviations: N, number of mentions of specific domain, category, or subtheme.

Task factors (146); details or characteristics of the task that encouraged or impeded contacting the supervisor.Clinical characteristics (103)Case complexity (25)Evaluation of the level of difficulty in patient management. I don't expect to be always looking over [the resident's] shoulder, I don't check labs everyday, and I don't call them if I see potassium of 3; I assume that they are going to take care of it.
Family/ethical dilemma (10)Uncertainty regarding respecting the wishes of patients and other ethical dilemmas. There was 1 time I called because we had a very sick patient who had a lot of family asking for more aggressive measures, and I called to be a part of the conversation.
Interdepartment collaboration (18)Difficulties when treating patients managed by multiple consult services. I have called [the attending] when I have had trouble pushing things through the systemif we had trouble getting tests or trouble with a particular consult team I would call him.
Urgency/severity of illness (13)Clinical condition of patient requires immediate or urgent intervention. If I have something that is really pressing I would probably page my attending. If it's a question [of] just something that I didn't know the answer to [or] wasn't that urgent I could turn to my fellow residents.
Transitions of care (37)Communication with supervisor because of concern/uncertainty regarding patient transition decisions. We wanted to know if it was okay to discharge somebody or if something changes where something in the plan changes. I usually text page her or call her.
Situation or environment characteristics (49)Proximity of attending physicians and support staff (10)Availability of attending physicians and staff resources . I have been called in once or twice to help with a lumbar puncture or paracentesis, but not too often. The procedure service makes life much easier than it used to be.
Team culture (33)Presence or absence of a collaborative and supportive group environment. I had a team that I did trust. I think we communicated well; we were all sort of on the same page.
Time of day (6)Time of the task. Once its past 11 pm, I feel like I shouldn't call, the threshold is higherthe patient has to be sicker.
Systems Factors
Domain (N)Major Categories (N)Definition
  • NOTE: Abbreviations: N, number of mentions of specific domain, category, or subtheme.

Systems factors (99); unmodifiable factors not related to personal characteristics or knowledge of trainee or supervisor.Workload (15)Increasing trainee clinical workload results in a more intensive experience. They [residents] get 10 patients within a pretty concentrated timeso they really have to absorb a lot of information in a short period of time.
Institutional culture (4)Anticipated quality of the trainee because of the status of the institution. I assume that our residents and interns are top notch, so I go in with this real assumption that I expect the best of them because we are [the best].
Clinical experience of trainee (36)Types of clinical experience prior to supervisor/trainee interaction. The interns have done as much [general inpatient medicine] months as I havethey had both done like 2 or 3 months really close together, so they were sort of at their peak knowledge.
Level of training (25)Postgraduate year of trainee. It depends on the experience level of the resident. A second year who just finished internship, I am going to supervise more closely and be more detail oriented; a fourth year medicine‐pediatrics resident who is almost done, I will supervise a lot less.
Duty hours/efficiency pressures (5)Absence of residents due to other competing factors, including compliance with work‐hour restrictions. Before the work‐hour [restrictions], when [residents] were here all the time and knew everything about the patients, I found them to be a lot more reliableand now they are still supposed to be in charge, but hell I am here more often than they are. I am here every day, I have more information than they do. How can you run the show if you are not here every day?
Philosophy of medical education (14)Belief that trainees learn by the provision of completely autonomous decision making. When you are not around, [the residents] have autonomy, they are the people making the initial decisions and making the initial assessments. They are the ones who are there in the middle of the night, the ones who are there at 3 o'clock in the afternoon. The resident is supposed to have room to make decisions. When I am not there, it's not my show.

Trainee Factors

Attending and resident physicians both cited trainee factors as major determinants of granting entrustment (Table 1). Within the domain, the categories described included trainee personal characteristics and clinical characteristics. Of the subthemes noted within the major category of personal characteristics, the perceived confidence or overconfidence of the trainee was most often mentioned. Other subthemes included accountability, familiarity, and honesty. Attending physicians reported using perceived resident confidence as a gauge of the trainee's true ability and comfort. Conversely, some attending physicians reported that perceived overconfidence was a red flag that warranted increased scrutiny. Overconfidence was identified by faculty as trainees with an inability to recognize their limitations in either technical skill or knowledge. Confidence was noted in trainees that recognized their own limitations while also enacting effective management plans, and those physicians that prioritized the patient needs over their personal needs.

The clinical attributes of trainees described by attendings included: leadership skills, communication skills, anticipated specialty, medical knowledge, and perceived recognition of limitations. All participants expressed that the possession of adequate medical knowledge was the most important clinical skills‐related factor in the development of trust. Trainee demonstration of judgment, including applying evidence‐based practice, was used to support attending physician's decision to give residents more autonomy in managing patients. Many attending physicians described a specific pattern of observation and evaluation, in which they would rely on impressions shaped early in the rotation to inform their decisions of entrustment throughout the rotation. The use of this early litmus test was highlighted by several attending physicians. This litmus test described the importance of behavior on the first day/call night and postcall interactions as particularly important opportunities to gauge the ability of a resident to triage new patient admissions, manage their anxiety and uncertainty, and demonstrate maturity and professionalism. Several faculty members discussed examples of their litmus test including checking and knowing laboratory data prior to rounds but not mentioning their findings until they had noted the resident was unaware ([I]f I see a 2 g hemoglobin drop when I check the [electronic medical record {EMR}] and they don't bring it up, I will bring it to their attention, and then I'll get more involved.) or assessing the management of both straightforward and complex patients. They would then use this initial impression to determine their degree of involvement in the care of the patient.

The quality and nature of the communication skills, particularly the increased frequency of contact between resident and attending, was used as a barometer of trainee judgment. Furthermore, attending physicians expressed that they would often micromanage patient care if they did not trust a trainee's ability to reliably and frequently communicate patient status as well as the attendings concerns and uncertainty about future decisions. Some level of uncertainty was generally seen in a positive light by attending physicians, because it signaled that trainees had a mature understanding of their limitations. Finally, the trainee's expressed future specialty, especially if the trainee was a preliminary PGY‐1 resident, or a more senior resident anticipating subspecialty training in a procedural specialty, impacted the degree of autonomy provided.

Supervisor Factors

Supervisor characteristics were further categorized into their approachability and clinical attributes (Table 2). Approachability as a proxy for quality of the relationship, was cited as the personality characteristic that most influenced trust by the residents. This was often described by both attending and resident physicians as the presence of a supportive team atmosphere created through explicit declaration of availability to help with patient care tasks. Some attending physicians described the importance of expressing enthusiasm when receiving queries from their team to foster an atmosphere of nonjudgmental collaboration.

The clinical experience and knowledge base of the attending physician played a role in the provision of autonomy, particularly in times of disagreement about particular clinical decisions. Conversely, attending physicians who had spent less time on inpatient general medicine were more willing to yield to resident suggestions.

Task Factors

The domain of task factors was further divided into the categories that pertained to the clinical aspects of the task and those that pertained to the context, that is the environment in which the entrustment decisions were made (Table 3). Clinical characteristics included case complexity, presence of an ethical dilemma, interdepartmental collaboration, urgency/severity of situation, and transitions of care. The environmental characteristics included physical proximity of supervisors/support, team culture, and time of day. Increasing case complexity, especially the coexistence of legal and/or ethical dilemmas, was often mentioned as a factor driving greater attending involvement. Conversely, straightforward clinical decisions, such as electrolyte repletion, were described as sufficiently easy to allow limited attending involvement. Transitions of care, such as patient discharge or transfer, required greater communication and attending involvement or guidance, regardless of case complexity.

Attending and resident physicians reported that the team dynamics played a large role in the development, granting, or discouragement of trust. Teams with a positive rapport reported a collaborative environment that fostered increased trust by the attending and led to greater resident autonomy. Conversely, team discord that influenced the supervisor‐trainee relationship, often defined as toxic attitudes within the team, was often singled out as the reason attending physicians would feel the need to engage more directly in patient care and by extension have less trust in residents to manage their patients.

Systems Factors

Systems factors were described as the nonmodifiable factors, unrelated to either the characteristics of the supervisor, trainee, or the clinical task (Table 4). The subthemes that emerged included workload, institutional culture, trainee experience, level of training, and duty hours/efficiency pressures. Residents and attending physicians noted that trainee PGY and clinical experience commonly influenced the provision of autonomy and supervision by attendings. Participants reported that the importance of adequate clinical experience was of greater concern given the new duty‐hour restrictions, increased workload, as well as efficiency pressures. Attending physicians noted that trainee absences, even when required to comply with duty‐hour restrictions, had a negative effect on entrustment‐granting decisions. Many attendings felt that a trainee had to be physically present to make informed decisions on the inpatient medicine service.

DISCUSSION

Clinical supervisors must hold the quality of care constant while balancing the amount of supervision and autonomy provided to learners in procedural tasks and clinical decision making. We found that the development of trust is multifactorial and highly contextual. It occurs under the broad constructs of task, supervisor, trainee, and environmental factors, and is well described in prior work. We also demonstrate that often what determines these broader factors is highly subjective, frequently independent of objective measures of trainee performance. Many decisions are based on personal characteristics, such as the perception of honesty, disposition, perceived confidence or perceived overconfidence of the trainee, prior experience, and expressed future field of specialty.

Our findings are consistent with prior research, but go further in describing and demonstrating the existence and innovative use of factors, other than clinical knowledge and skill, in the formation of a multidimensional construct of trust. Kennedy et al. identified 4 dimensions of trust knowledge and skill, discernment, conscientiousness, and truthfulness[15]and demonstrated that supervising physicians rely on specific processes to assess trainee trustworthiness, specifically the use of double checks and language cues. This is consistent with our results, which demonstrate that many attending physicians independently verify information, such as laboratory findings, to inform their perceptions of trainee honesty, attention to detail, and ability to follow orders reliably. Furthermore, our subthemes of communication and the demonstration of logical clinical reasoning correspond to Kennedy's use of language cues.[15] We found that language cues are used as markers of trustworthiness, particularly early on in the rotation, as a litmus test to gauge the trainee's integrity and ability to assess and treat patients unsupervised.

To date, much has been written about the importance of direct observation in the evaluation of trainees.[16, 17, 18, 19] Our results demonstrate that supervising clinicians use a multifactorial, highly nuanced, and subjective process despite validated performance‐based assessment methods, such as the objective structured clinical exam or mini‐clinical evaluation exercise, to assess competence and grant entrustement.[3] Several factors utilized to determine trustworthiness in addition to direct observation are subjective in nature, specifically the trainee's prior experience and expressed career choice.

It is encouraging that attending physicians make use of direct observations to inform decisions of entrustment, albeit in an informal and unstructured way. They also seem to take into account the context and setting in which the observation occurs, and consider both the environmental factors as well as factors that relate to the task itself.[20] For example, attendings and residents reported that team dynamics played a large role in influencing trust decisions. We also found that attending physicians rely on indirect observation and will inquire among their colleagues and other senior residents to gain information about their trainees abilities and integrity. Evaluation tools that facilitate sharing of trainees' level of preparedness, prior feedback, and experience could facilitate the determination of readiness to complete EPAs as well as the reporting of achieved milestones in accordance with the ACGME NAS.

Sharing knowledge about trainees among attendings is common and of increasing importance in the context of attending physicians' shortened exposure to trainees due to the residency work‐hour restrictions and growing productivity pressures. In our study, attending physicians described work‐hour restrictions as detrimental to trainee trustworthiness, either in the context of decreased accountability for patient care or as intrinsic to the nature of forced absences that kept trainees from fully participating in daily ward activities and knowing their patients. Attending physicians felt that trainees did not know their patients well enough to be able to make independent decisions about care. The increased transition to a shift‐based structure of inpatient medicine may result in increasingly less time for direct observation and make it more difficult for attendings to justify their decisions about engendering trust. In addition, the increased fragmentation that is noted in training secondary to the work‐hour regulations may in fact have consequences on the development of clinical skill and decision making, such that increased attention to the need for supervision and longer lead to entrustment may be needed in certain circumstances. Attendings need guidance on how to improve their ability to observe trainees in the context of the new work environment, and how to role model decision making more effectively in the compressed time exposure to housestaff.

Our study has several limitations. The organizational structure and culture of our institution are unique to 1 academic setting. This may undermine our ability to generalize these research findings and analysis to the population at large.[21] In addition, recall bias may have played into the interpretation of the interview content given the timing with which they were performed after the conclusion of the rotation. The study interviews took place in 2006, and it is reasonable to believe that some perceptions concerning duty‐hour restrictions and competency‐based graduate medical education have changed. However, from our ongoing research over the past 5 years[4] and our personal experience with entrustment factors, we believe that the participants' perceptions of trust and competency are valid and have largely remained unchanged, given the similarity in findings to the accepted ten Cate framework. In addition, this work was done following the first iteration of the work‐hour regulations but prior to the implementation of explicit supervisory levels, so it may indeed represent a truer state of the supervisory relationship before external regulations were applied. Finally, this work represents an internal medicine residency training program and may not be generalizable to other specialties that posses different cultural factors that impact the decision for entrustment. However, the congruence of our data with that of the original work of ten Cate, which was done in gynecology,[6] and that of Sterkenberg et al. in anesthesiology,[4] supports our key factors being ubiquitous to all training programs.

In conclusion, we provide new insights into subjective factors that inform the perceptions of trust and entrustment decisions by supervising physicians, specifically subjective trainee characteristics, team dynamics, and informal observation. There was agreement among attendings about which elements of competence are considered most important in their entrustment decisions related to trainee, supervisor, task, and environmental factors. Rather than undervaluing the use of personal factors in the determination of trust, we believe that acknowledgement and appreciation of these factors may be important to give supervisors more confidence and better tools to assess resident physicians, and to understand how their personality traits relate to and impact their professional competence. Our findings are relevant for the development of assessment instruments to evaluate whether medical graduates are ready for safe practice without supervision.

ACKNOWLEDGEMENTS

Disclosures: Dr. Kevin Choo was supported by Scholarship and Discovery, University of Chicago, while in his role as a fourth‐year medical student. This study received institutional review board approval prior to evaluation of our human participants. Portions of this study were presented as an oral abstract at the 35th Annual Meeting of the Society of General Internal Medicine, Orlando, Florida, May 912, 2012.

Determining when residents are independently prepared to perform clinical care tasks safely is not easy or understood. Educators have struggled to identify robust ways to evaluate trainees and their preparedness to treat patients while unsupervised. Trust allows the trainee to experience increasing levels of participation and responsibility in the workplace in a way that builds competence for future practice. The breadth of knowledge and skills required to become a competent and safe physician, coupled with the busy workload confound this challenge. Notably, a technically proficient trainee may not have the clinical judgment to treat patients without supervision.

The Accreditation Council of Graduate Medical Education (ACGME) has previously outlined 6 core competencies for residency training: patient care, medical knowledge, practice‐based learning and improvement, interpersonal and communication skills, professionalism, and systems‐based practice.[1] A systematic literature review suggests that traditional trainee evaluation tools are difficult to use and unreliable in measuring the competencies independently from one another, whereas certain competencies are consistently difficult to quantify in a reliable and valid way.[2] The evaluation of trainees' clinical performance despite efforts to create objective tools remain strongly influenced by subjective measures and continues to be highly variable among different evaluators.[3] Objectively measuring resident autonomy and readiness to supervise junior colleagues remains imprecise.[4]

The ACGME's Next Accreditation System (NAS) incorporates educational milestones as part of the reporting of resident training outcomes.[5] The milestones allow for the translation of the core competencies into integrative and observable abilities. Furthermore, the milestone categories are stratified into tiers to allow progress to be measured longitudinally and by task complexity using a novel assessment strategy.

The development of trust between supervisors and trainees is a critical step in decisions to allow increased responsibility and the provision of autonomous decision making, which is an important aspect of physician training. Identifying the factors that influence the supervisors' evaluation of resident competency and capability is at the crux of trainee maturation as well as patient safety.[4] Trust, defined as believability and discernment by attendings of resident physicians, plays a large role in attending evaluations of residents during their clinical rotations.[3] Trust impacts the decisions of successful performance of entrustable professional activities (EPAs), or those tasks that require mastery prior to completion of training milestones.[6] A study of entrustment decisions made by attending anesthesiologists identified the factors that contribute to the amount of autonomy given to residents, such as trainee trustworthiness, medical knowledge, and level of training.[4] The aim of our study, building on this study, was 2‐fold: (1) use deductive qualitative analysis to apply this framework to existing resident and attending data, and (2) define the categories within this framework and describe how internal medicine attending and resident physician perceptions of trust can impact clinical decision making and patient care.

METHODS

We are reporting on a secondary data analysis of interview transcripts from a study conducted on the inpatient general medicine service at the University of Chicago, an academic tertiary care medical center. The methods for data collection and full consent have been outlined previously.[7, 8, 9] The institutional review board of the University of Chicago approved this study.

Briefly, between January 2006 and November 2006, all eligible internal medicine resident physicians, postgraduate year (PGY)‐2 or PGY‐3, and attending physicians, either generalists or hospitalists, were privately interviewed within 1 week of their final call night on the inpatient general medicine rotation to assess decision making and clinical supervision during the rotation. All interviews were conducted by 1 investigator (J.F.), and discussions were audio taped and transcribed for analysis. Interviews were conducted at the conclusion of the rotation to prevent any influence on resident and attending behavior during the rotation.

The critical incident technique, a procedure used for collecting direct observations of human behavior that have critical significance on the decision‐making process, was used to solicit examples of ineffective supervision, inquiring about 2 to 3 important clinical decisions made on the most recent call night, with probes to identify issues of trust, autonomy, and decision making.[10] A critical incident can be described as one that makes a significant contribution, either positively or negatively, on the process.

Appreciative inquiry, a technique that aims to uncover the best things about the clinical encounter being explored, was used to solicit examples of effective supervision. Probes are used to identify factors, either personal or situational, that influenced the withholding or provision of resident autonomy during periods of clinical care delivery.[11]

All identifiable information was removed from the interview transcripts to protect participant and patient confidentiality. Deductive qualitative analysis was performed using the conceptual EPA framework, which describes several factors that influence the attending physicians' decisions to deem a resident trustworthy to independently fulfill a specific clinical task.[4] These factors include (1) the nature of the task, (2) the qualities of the supervisor, (3) the qualities of the trainee and the quality of the relationship between the supervisor and the trainee, and (4) the circumstances surrounding the clinical task.

The deidentified, anonymous transcripts were reviewed by 2 investigators (K.J.C., J.M.F.) and analyzed using the constant comparative methods to deductively map the content to the existing framework and generate novel sub themes.[12, 13, 14] Novel categories within each of the domains were inductively generated. Two reviewers (K.J.C., J.M.F.) independently applied the themes to a randomly selected 10% portion of the interview transcripts to assess the inter‐rater reliability. The inter‐rater agreement was assessed using the generalized kappa statistic. The discrepancies between reviewers regarding assignment of codes were resolved via discussion and third party adjudication until consensus was achieved on thematic structure. The codes were then applied to the entire dataset.

RESULTS

Between January 2006 and November 2006, 46 of 50 (88%) attending physicians and 44 of 50 (92%) resident physicians were interviewed following the conclusion of their general medicine inpatient rotation. Of attending physicians, 55% were male, 45% were female, and 38% were academic faculty hospitalists. Of the residents who completed interviews, 47% were male, 53% were female, 52% were PGY‐2, and 45% were PGY‐3.

A total of 535 mentions of trust were abstracted from the transcripts. The 4 major domains that influence trusttrainee factors (Table 1), supervisor factors (Table 2), task factors (Table 3), and systems factors (Table 4)were deductively coded with several emerging novel categories and subthemes. The domains were consistent across the postgraduate year of trainee. No differences in themes were noted, other than those explicitly stated, between the postgraduate years.

Trainee Factors
Domain (N)Category (N)Subtheme (N)Definition and Representative Comment
  • NOTE: Abbreviations: A, attending comment; N, number of mentions of specific domain, category, or subtheme; R, resident comment.

Trainee factors (170); characteristics specific to the trainee that either promote or discourage trust.Personal characteristics (78); traits that impact attendings' decision regarding trust/allowance of autonomy.Confidence and overconfidence (29)Displayed level of comfort when approaching specific clinical situations. I think I havea personality and presenting style [that] people think that I know what I am talkingabout and they just let me run with it. (R)
Accountability (18)Sense of responsibility, including ability to follow‐up on details regarding patient care. [What] bothered me the most was that that kind of lack of accountability for patient careand it makes the whole dynamic of rounds much more stressful. I ended up asking him to page me every day to run the list. (A)
Familiarity/ reputation (18)Comfort with trainee gained through prior working experience, or reputation of the trainee based on discussion with other supervisors. I do have to get to know someone a little to develop that level of trust, to know that it is okay to not check the labs every day, okay to not talk to them every afternoon. (A)
Honesty (13)Sense trainee is not withholding information in order to impact decision making toward a specific outcome. [The residents] have more information than I do and they can clearly spin that information, and it is very difficult to unravelunless you treat them like a hostile witness on the stand.(A)
Clinical attributes (92); skills demonstrated in the context of patient care that promote or inhibit trust.Leadership (19)Ability to organize, teach, and manage coresidents, interns, and students. I want them to be in chargedeciding the plan and sitting down with the team before rounds. (A)
Communication (12)Establishing and encouraging conversation with supervisor regarding decision making.Some residents call me regularly and let me know what's going on and others don't, and those who don't I really have trouble withif you're not calling to check in, then I don't trust your judgment. (A)
Specialty (6)Trainee future career plans. Whether it's right or wrong, nonmedicine interns may not be as attentive to smaller details, and so I had to be attentive to smaller details on [his] patients. (R2)
Medical knowledge (39)Ability to display appropriate level of clinical acumen and apply evidence‐based medicine. I definitelygo on my own gestalt of talking with them and deciding if what they do is reasonable. If they can't explain things to me, that's when I worry. (A)
Recognition of limitations (16)Trainee's ability to recognize his/her own weaknesses, accept criticism, and solicit help when appropriate. The first thing is that they know their limits and ask for help either in rounds or outside of rounds. That indicates to me that as they are out there on their own they are less likely to do things that they don't understand. (A)
Supervisor Factors
Domain (N)Major Category (N)Subtheme (N)Definition and Representative Comment
  • NOTE: Abbreviations: A, attending comment; N, number of mentions of specific domain, category, or subtheme; R, resident comment.

Supervisor factors (120); characteristics specific to the supervisor which either promote or discourage trust.Approachability (34); personality traits, such as approachability, which impact the trainees' perception regarding trust/allowance of autonomy. Sense that the attending physician is available to and receptive to questions from trainees. I think [attending physicians] being approachable and available to you if you need them is really helpful. (R)
Clinical attributes (86); skills demonstrated in the context of patient care that promote or inhibit trust.Institutional obligation (17)Attending physician is the one contractually and legally responsible for the provision of high‐quality and appropriate patient care. If [the residents] have a good reason I can be argued out of my position. I am ultimately responsible andhave to choose if there is some serious dispute. (A)
Experience and expertise (29)Clinical experience, area of specialty, and research interests of the attending physician. You have to be confident in your own clinical skills and knowledge, confident enough that you can say its okay for me to let go a little bit. (A)
Observation‐based evaluation (27)Evaluation of trainee decision‐making ability during the early part of the attending/trainee relationship. It's usually the first post‐call day experience, the first on‐call and post‐call day experience. One of the big things is [if they can] tell if a patient is sick or not sickif they are missing at that level then I get very nervous. I really get a sense [of] how they think about patients. (A)
Educational obligation (13)Acknowledging the role of the attending as clinical teacher. My theory with the interns was that they should do it because that's how you learn. (R)
Task Factors
Domain (N)Major Category (N)Subtheme (N)Definition
  • NOTE: Abbreviations: N, number of mentions of specific domain, category, or subtheme.

Task factors (146); details or characteristics of the task that encouraged or impeded contacting the supervisor. | Clinical characteristics (103) | Case complexity (25) | Evaluation of the level of difficulty in patient management. "I don't expect to be always looking over [the resident's] shoulder, I don't check labs every day, and I don't call them if I see potassium of 3; I assume that they are going to take care of it."
Family/ethical dilemma (10) | Uncertainty regarding respecting the wishes of patients and other ethical dilemmas. "There was 1 time I called because we had a very sick patient who had a lot of family asking for more aggressive measures, and I called to be a part of the conversation."
Interdepartment collaboration (18) | Difficulties when treating patients managed by multiple consult services. "I have called [the attending] when I have had trouble pushing things through the system ... if we had trouble getting tests or trouble with a particular consult team I would call him."
Urgency/severity of illness (13) | Clinical condition of patient requires immediate or urgent intervention. "If I have something that is really pressing I would probably page my attending. If it's a question [of] just something that I didn't know the answer to [or] wasn't that urgent I could turn to my fellow residents."
Transitions of care (37) | Communication with supervisor because of concern/uncertainty regarding patient transition decisions. "We wanted to know if it was okay to discharge somebody or if something in the plan changes. I usually text page her or call her."
Situation or environment characteristics (49) | Proximity of attending physicians and support staff (10) | Availability of attending physicians and staff resources. "I have been called in once or twice to help with a lumbar puncture or paracentesis, but not too often. The procedure service makes life much easier than it used to be."
Team culture (33) | Presence or absence of a collaborative and supportive group environment. "I had a team that I did trust. I think we communicated well; we were all sort of on the same page."
Time of day (6) | Time of the task. "Once it's past 11 PM, I feel like I shouldn't call; the threshold is higher ... the patient has to be sicker."
Systems Factors
Domain (N) | Major Categories (N) | Definition and Representative Comment
  • NOTE: Abbreviations: N, number of mentions of specific domain, category, or subtheme.

Systems factors (99); unmodifiable factors not related to personal characteristics or knowledge of trainee or supervisor. | Workload (15) | Increasing trainee clinical workload results in a more intensive experience. "They [residents] get 10 patients within a pretty concentrated time ... so they really have to absorb a lot of information in a short period of time."
Institutional culture (4) | Anticipated quality of the trainee because of the status of the institution. "I assume that our residents and interns are top notch, so I go in with this real assumption that I expect the best of them because we are [the best]."
Clinical experience of trainee (36) | Types of clinical experience prior to supervisor/trainee interaction. "The interns have done as much [general inpatient medicine] months as I have ... they had both done like 2 or 3 months really close together, so they were sort of at their peak knowledge."
Level of training (25) | Postgraduate year of trainee. "It depends on the experience level of the resident. A second year who just finished internship, I am going to supervise more closely and be more detail oriented; a fourth year medicine-pediatrics resident who is almost done, I will supervise a lot less."
Duty hours/efficiency pressures (5) | Absence of residents due to other competing factors, including compliance with work-hour restrictions. "Before the work-hour [restrictions], when [residents] were here all the time and knew everything about the patients, I found them to be a lot more reliable ... and now they are still supposed to be in charge, but hell I am here more often than they are. I am here every day, I have more information than they do. How can you run the show if you are not here every day?"
Philosophy of medical education (14) | Belief that trainees learn by the provision of completely autonomous decision making. "When you are not around, [the residents] have autonomy, they are the people making the initial decisions and making the initial assessments. They are the ones who are there in the middle of the night, the ones who are there at 3 o'clock in the afternoon. The resident is supposed to have room to make decisions. When I am not there, it's not my show."
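
The parenthesized counts in Tables 1 through 4 are totals of coded mentions across interviews. As a minimal illustration of how such tallies can be computed from a hierarchical coding scheme, the Python sketch below counts mentions at the domain, category, and subtheme levels; the coded records are invented examples, not study data.

```python
# Minimal sketch: tallying theme mentions from coded interview excerpts.
# Each record is (speaker_role, domain, major_category, subtheme); all
# records below are invented examples, not data from the study.
from collections import Counter

coded_mentions = [
    ("attending", "Systems factors", "Workload", None),
    ("attending", "Systems factors", "Workload", None),
    ("resident",  "Systems factors", "Level of training", None),
    ("attending", "Task factors", "Clinical characteristics", "Case complexity"),
    ("resident",  "Task factors", "Clinical characteristics", "Urgency/severity of illness"),
]

# Domain-level totals (e.g., "Systems factors (99)" in Table 4)
domain_counts = Counter(domain for _, domain, _, _ in coded_mentions)

# Category- and subtheme-level totals nested under each domain
category_counts = Counter((domain, category) for _, domain, category, _ in coded_mentions)
subtheme_counts = Counter(
    (domain, category, subtheme)
    for _, domain, category, subtheme in coded_mentions
    if subtheme is not None
)

print(domain_counts)
print(category_counts)
print(subtheme_counts)
```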

Trainee Factors

Attending and resident physicians both cited trainee factors as major determinants of granting entrustment (Table 1). Within this domain, the categories described included trainee personal characteristics and clinical characteristics. Of the subthemes noted within the major category of personal characteristics, the perceived confidence or overconfidence of the trainee was mentioned most often. Other subthemes included accountability, familiarity, and honesty. Attending physicians reported using perceived resident confidence as a gauge of the trainee's true ability and comfort. Conversely, some attending physicians reported that perceived overconfidence was a red flag that warranted increased scrutiny. Faculty identified overconfidence in trainees who were unable to recognize their limitations in either technical skill or knowledge. Confidence was noted in trainees who recognized their own limitations while enacting effective management plans and who prioritized patient needs over their personal needs.

The clinical attributes of trainees described by attendings included leadership skills, communication skills, anticipated specialty, medical knowledge, and perceived recognition of limitations. All participants expressed that the possession of adequate medical knowledge was the most important clinical skills-related factor in the development of trust. Trainee demonstration of judgment, including applying evidence-based practice, supported attending physicians' decisions to give residents more autonomy in managing patients. Many attending physicians described a specific pattern of observation and evaluation, in which they would rely on impressions shaped early in the rotation to inform their decisions of entrustment throughout the rotation. Several attending physicians highlighted the use of this early "litmus test": behavior on the first day/call night and during postcall interactions offered particularly important opportunities to gauge the ability of a resident to triage new patient admissions, manage anxiety and uncertainty, and demonstrate maturity and professionalism. Several faculty members discussed examples of their litmus test, including checking and knowing laboratory data prior to rounds but not mentioning their findings until they had noted the resident was unaware ("[I]f I see a 2-g hemoglobin drop when I check the [electronic medical record (EMR)] and they don't bring it up, I will bring it to their attention, and then I'll get more involved."), or assessing the management of both straightforward and complex patients. They would then use this initial impression to determine their degree of involvement in the care of the patient.

The quality and nature of a trainee's communication skills, particularly the frequency of contact between resident and attending, were used as a barometer of trainee judgment. Furthermore, attending physicians expressed that they would often micromanage patient care if they did not trust a trainee's ability to reliably and frequently communicate patient status, as well as concerns and uncertainty about future decisions. Some level of uncertainty was generally seen in a positive light by attending physicians, because it signaled that trainees had a mature understanding of their limitations. Finally, the trainee's expressed future specialty impacted the degree of autonomy provided, especially if the trainee was a preliminary PGY-1 resident or a more senior resident anticipating subspecialty training in a procedural specialty.

Supervisor Factors

Supervisor characteristics were further categorized into approachability and clinical attributes (Table 2). Approachability, as a proxy for the quality of the relationship, was cited by residents as the personality characteristic that most influenced trust. This was often described by both attending and resident physicians as the presence of a supportive team atmosphere created through explicit declaration of availability to help with patient care tasks. Some attending physicians described the importance of expressing enthusiasm when receiving queries from their team to foster an atmosphere of nonjudgmental collaboration.

The clinical experience and knowledge base of the attending physician played a role in the provision of autonomy, particularly in times of disagreement about specific clinical decisions. Conversely, attending physicians who had spent less time on inpatient general medicine were more willing to yield to resident suggestions.

Task Factors

The domain of task factors was further divided into categories that pertained to the clinical aspects of the task and those that pertained to the context, that is, the environment in which the entrustment decisions were made (Table 3). Clinical characteristics included case complexity, presence of an ethical dilemma, interdepartmental collaboration, urgency/severity of situation, and transitions of care. The environmental characteristics included physical proximity of supervisors/support, team culture, and time of day. Increasing case complexity, especially the coexistence of legal and/or ethical dilemmas, was often mentioned as a factor driving greater attending involvement. Conversely, straightforward clinical decisions, such as electrolyte repletion, were described as sufficiently easy to allow limited attending involvement. Transitions of care, such as patient discharge or transfer, required greater communication and attending involvement or guidance, regardless of case complexity.

Attending and resident physicians reported that team dynamics played a large role in the development, granting, or discouragement of trust. Teams with a positive rapport reported a collaborative environment that fostered increased trust by the attending and led to greater resident autonomy. Conversely, team discord that influenced the supervisor-trainee relationship, often described as toxic attitudes within the team, was singled out as the reason attending physicians would feel the need to engage more directly in patient care and, by extension, trust residents less to manage their patients.

Systems Factors

Systems factors were described as the nonmodifiable factors unrelated to the characteristics of the supervisor, the trainee, or the clinical task (Table 4). The subthemes that emerged included workload, institutional culture, trainee experience, level of training, and duty hours/efficiency pressures. Residents and attending physicians noted that trainee PGY and clinical experience commonly influenced the provision of autonomy and supervision by attendings. Participants reported that adequate clinical experience was of greater concern given the new duty-hour restrictions, increased workload, and efficiency pressures. Attending physicians noted that trainee absences, even those required to comply with duty-hour restrictions, had a negative effect on entrustment-granting decisions. Many attendings felt that a trainee had to be physically present to make informed decisions on the inpatient medicine service.

DISCUSSION

Clinical supervisors must hold the quality of care constant while balancing the amount of supervision and autonomy provided to learners in procedural tasks and clinical decision making. We found that the development of trust is multifactorial and highly contextual. It occurs under the broad constructs of task, supervisor, trainee, and environmental factors, which are well described in prior work. We also demonstrate that what determines these broader factors is often highly subjective and frequently independent of objective measures of trainee performance. Many decisions are based on personal characteristics, such as the perception of honesty, disposition, perceived confidence or overconfidence of the trainee, prior experience, and expressed future field of specialty.

Our findings are consistent with prior research but go further in describing the existence and use of factors other than clinical knowledge and skill in the formation of a multidimensional construct of trust. Kennedy et al. identified 4 dimensions of trust (knowledge and skill, discernment, conscientiousness, and truthfulness)[15] and demonstrated that supervising physicians rely on specific processes to assess trainee trustworthiness, specifically the use of double checks and language cues. This is consistent with our results, which demonstrate that many attending physicians independently verify information, such as laboratory findings, to inform their perceptions of trainee honesty, attention to detail, and ability to follow orders reliably. Furthermore, our subthemes of communication and the demonstration of logical clinical reasoning correspond to Kennedy's use of language cues.[15] We found that language cues are used as markers of trustworthiness, particularly early in the rotation, as a litmus test to gauge the trainee's integrity and ability to assess and treat patients unsupervised.

To date, much has been written about the importance of direct observation in the evaluation of trainees.[16, 17, 18, 19] Our results demonstrate that, despite the availability of validated performance-based assessment methods such as the objective structured clinical examination and the mini-clinical evaluation exercise, supervising clinicians use a multifactorial, highly nuanced, and subjective process to assess competence and grant entrustment.[3] Several factors used to determine trustworthiness in addition to direct observation are subjective in nature, specifically the trainee's prior experience and expressed career choice.

It is encouraging that attending physicians make use of direct observations to inform decisions of entrustment, albeit in an informal and unstructured way. They also seem to take into account the context and setting in which the observation occurs, considering both the environmental factors and the factors that relate to the task itself.[20] For example, attendings and residents reported that team dynamics played a large role in influencing trust decisions. We also found that attending physicians rely on indirect observation and will inquire among their colleagues and other senior residents to gain information about their trainees' abilities and integrity. Evaluation tools that facilitate sharing of trainees' level of preparedness, prior feedback, and experience could support the determination of readiness to complete EPAs as well as the reporting of achieved milestones in accordance with the ACGME Next Accreditation System (NAS).

Sharing knowledge about trainees among attendings is common and of increasing importance given attending physicians' shortened exposure to trainees under residency work-hour restrictions and growing productivity pressures. In our study, attending physicians described work-hour restrictions as detrimental to trainee trustworthiness, either in the context of decreased accountability for patient care or because forced absences kept trainees from fully participating in daily ward activities and knowing their patients. Attending physicians felt that trainees did not know their patients well enough to be able to make independent decisions about care. The increasing transition to a shift-based structure of inpatient medicine may leave less time for direct observation and make it more difficult for attendings to justify their decisions about engendering trust. In addition, the fragmentation of training introduced by the work-hour regulations may have consequences for the development of clinical skill and decision making, such that increased attention to the need for supervision, and a longer lead time to entrustment, may be needed in certain circumstances. Attendings need guidance on how to improve their ability to observe trainees in the new work environment, and how to role model decision making more effectively in the compressed time they share with housestaff.

Our study has several limitations. The organizational structure and culture of our institution are unique to 1 academic setting, which may limit the generalizability of these findings to the population at large.[21] In addition, recall bias may have influenced the interview content, given that interviews were performed after the conclusion of the rotation. The study interviews took place in 2006, and it is reasonable to believe that some perceptions concerning duty-hour restrictions and competency-based graduate medical education have changed. However, from our ongoing research over the past 5 years[4] and our personal experience with entrustment factors, we believe that the participants' perceptions of trust and competency are valid and have largely remained unchanged, given the similarity of our findings to the accepted ten Cate framework. In addition, this work was done following the first iteration of the work-hour regulations but prior to the implementation of explicit supervisory levels, so it may represent a truer state of the supervisory relationship before external regulations were applied. Finally, this work represents an internal medicine residency training program and may not be generalizable to other specialties that possess different cultural factors impacting the decision for entrustment. However, the congruence of our data with the original work of ten Cate, which was done in gynecology,[6] and with that of Sterkenberg et al. in anesthesiology,[4] supports our key factors being common across training programs.

In conclusion, we provide new insights into the subjective factors that inform perceptions of trust and entrustment decisions by supervising physicians, specifically subjective trainee characteristics, team dynamics, and informal observation. There was agreement among attendings about which elements of competence are most important in their entrustment decisions related to trainee, supervisor, task, and environmental factors. Rather than undervaluing the use of personal factors in the determination of trust, we believe that acknowledging and appreciating these factors may give supervisors more confidence and better tools to assess resident physicians, and to understand how trainees' personality traits relate to and impact their professional competence. Our findings are relevant for the development of assessment instruments to evaluate whether medical graduates are ready for safe practice without supervision.

ACKNOWLEDGEMENTS

Disclosures: Dr. Kevin Choo was supported by Scholarship and Discovery, University of Chicago, while in his role as a fourth-year medical student. This study received institutional review board approval prior to the evaluation of human participants. Portions of this study were presented as an oral abstract at the 35th Annual Meeting of the Society of General Internal Medicine, Orlando, Florida, May 9–12, 2012.

References
  1. Accreditation Council for Graduate Medical Education. Common program requirements. Available at: http://www.acgme.org/acgmeweb/tabid/429/ProgramandInstitutionalAccreditation/CommonProgramRequirements.aspx. Accessed November 30, 2013.
  2. Lurie SJ, Mooney CJ, Lyness JM. Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review. Acad Med. 2009;84:301–309.
  3. Ginsburg S, McIlroy J, Oulanova O, Eva K, Regehr G. Toward authentic clinical evaluation: pitfalls in the pursuit of competency. Acad Med. 2010;85(5):780–786.
  4. Sterkenberg A, Barach P, Kalkman C, Gielen M, ten Cate O. When do supervising physicians decide to entrust residents with unsupervised tasks? Acad Med. 2010;85(9):1408–1417.
  5. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366(11):1051–1056.
  6. ten Cate O. Trust, competence and the supervisor's role in postgraduate training. BMJ. 2006;333:748–751.
  7. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. Clinical decision making and impact on patient care: a qualitative study. Qual Saf Health Care. 2008;17(2):122–126.
  8. Farnan JM, Johnson JK, Meltzer DO, et al. Strategies for effective on-call supervision for internal medicine residents: the SUPERB/SAFETY model. J Grad Med Educ. 2010;2(1):46–52.
  9. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. On-call supervision and resident autonomy: from micromanager to absentee attending. Am J Med. 2009;122(8):784–788.
  10. Flanagan JC. The critical incident technique. Psychol Bull. 1954;51(4):327–359.
  11. Grant S, Humphris M. Critical evaluation of appreciative inquiry: bridging an apparent paradox. Action Res. 2006;4(4):401–418.
  12. Strauss A, Corbin J. Basics of Qualitative Research. 2nd ed. Thousand Oaks, CA: Sage Publications; 1998.
  13. Fraenkel JR, Wallen NE. How to Design and Evaluate Research in Education. New York, NY: McGraw-Hill; 2003.
  14. Miles MB, Huberman AM. Qualitative Data Analysis. Thousand Oaks, CA: Sage; 1994.
  15. Kennedy TJT, Regehr G, Baker GR, Lingard L. Point-of-care assessment of medical trainee competence for independent clinical work. Acad Med. 2008;84:S89–S92.
  16. ten Cate O, Scheele F. Viewpoint: competency-based postgraduate training: can we bridge the gap between theory and clinical practice? Acad Med. 2007;82(6):542–547.
  17. Dijksterhuis MJK, Voorhuis M, Teunissen PW, et al. Assessment of competence and progressive independence in postgraduate clinical training. Med Educ. 2009;43:1156–1165.
  18. Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA. 2009;302(12):1316–1326.
  19. Epstein RM. Assessment in medical education. N Engl J Med. 2007;356:387–396.
  20. Schraagen JM, Schouten A, Smit M, Beek D, Ven J, Barach P. A prospective study of paediatric cardiac surgical microsystems: assessing the relationships between non-routine events, teamwork and patient outcomes. BMJ Qual Saf. 2011;20(7):599–603.
  21. Finfgeld-Connett D. Generalizability and transferability of meta-synthesis research findings. J Adv Nurs. 2010;66(2):246–254.
Issue
Journal of Hospital Medicine - 9(3)
Page Number
169-175
Display Headline
How do supervising physicians decide to entrust residents with unsupervised tasks? A qualitative analysis
Article Source

© 2014 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Jeanne M. Farnan, MD, 5841 S. Maryland Avenue, MC 2007, W216, Chicago, IL 60637; Telephone: 773‐834‐3401; Fax: 773‐834‐2238; E‐mail: jfarnan@medicine.bsd.uchicago.edu

Promoting Professionalism

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Promoting professionalism via a video‐based educational workshop for academic hospitalists and housestaff

Unprofessional behavior in the inpatient setting has the potential to impact care delivery and the quality of trainees' educational experience. These behaviors, from disparaging colleagues to blocking admissions, can negatively impact the learning environment. The learning environment, or the conditions created by the patient care team's actions, plays a critical role in the development of trainees.[1, 2] The rising presence of hospitalists in the inpatient setting raises the question of how their actions impact the learning environment. Professional behavior has been defined as a core competency for hospitalists by the Society of Hospital Medicine.[3] Professional behavior of all team members, from faculty to trainee, can impact the learning environment and patient safety.[4, 5] However, few educational materials exist to train faculty and housestaff to recognize and ameliorate unprofessional behaviors.

A prior assessment regarding hospitalists' lapses in professionalism identified scenarios in which hospitalists at 3 institutions reported increased participation.[6] Participants reported observation of or participation in specific unprofessional behaviors and rated their perception of these behaviors. Additional work within those residency environments demonstrated that residents' perceptions of and participation in these behaviors increased throughout training, with environmental characteristics, specifically faculty behavior, influencing trainee professional development and acclimation to these behaviors.[7, 8]

Although overall participation in egregious behavior was low, resident participation in 3 categories of unprofessional behavior increased during internship: disparaging the emergency room or primary care physician for missed findings or management decisions, blocking or not taking admissions appropriate for the service in question, and misrepresenting a test as urgent to expedite obtaining the test. We focused our intervention on these areas to address professionalism lapses that occur during internship. Because our earlier work showed that faculty role models influence trainee behavior, we provided education to both residents and hospitalists to maximize the impact of the intervention.

We present here a novel, interactive, video-based workshop curriculum for faculty and trainees that aims to illustrate unprofessional behaviors and outline the role faculty may play in promoting such behaviors. In addition, we review the results of the postworkshop evaluation of satisfaction and intent to change behavior.

METHODS

A grant from the American Board of Internal Medicine Foundation supported this project. The resulting working group, the Chicago Professional Practice Project and Outcomes, included faculty representation from 3 Chicago-area hospitals: the University of Chicago, Northwestern University, and NorthShore University HealthSystem. Academic hospitalists at these sites were invited to participate. Each site also has an internal medicine residency program in which hospitalists are expected to attend on the teaching service. Given this, resident trainees at all participating sites, and at 1 community teaching affiliate program (Mercy Hospital and Medical Center) where academic hospitalists at the University of Chicago rotate, were recruited for participation. Faculty champions were identified for each site, and 1 internal and 1 external faculty representative from the working group served to debrief and facilitate. Trainee workshops were administered by 1 internal and 1 external collaborator and, at the community site, by 2 external faculty members. Workshops were held during established educational conference times, and lunch was provided.

Scripts highlighting each of the behaviors identified in the prior survey were developed and peer reviewed for clarity and face validity across the 3 sites. Medical student and resident actors were trained using the finalized scripts, and a performance artist affiliated with the Screen Actors Guild assisted in their preparation for filming. All videos were filmed at the University of Chicago Pritzker School of Medicine Clinical Performance Center. The final videos ranged in length from 4 to 7 minutes and included title, cast, and funding source. As an example, 1 video highlighted the unprofessional behavior of misrepresenting a test as urgent to prioritize one's patient in the queue. This video included a resident, intern, and attending on inpatient rounds, during which the resident encouraged the intern to misrepresent the patient's status to expedite obtaining the study and facilitate the patient's discharge. The resident stressed that he would be in the clinic and had many patients to see, highlighting the impact of workload on unprofessional behavior, and aggressively persuaded the intern to "sell" her test to have it performed the same day. When this occurred, the attending applauded the intern for her strong work.

A moderator guide and debriefing tools were developed to facilitate discussion. Each workshop lasted approximately 60 minutes. After welcoming remarks, participants were provided tools to use while viewing each video. These checklists noted the roles of those depicted in the video, asked participants to identify positive or negative behaviors displayed, and included questions regarding how the behaviors could be detrimental and how the situation could have been prevented. After viewing the videos, participants divided into small groups to discuss the individual exhibiting the unprofessional behavior, the perceived motivation for the behavior, and its impact on team culture and patient care. Following the small-group discussion, a large-group debriefing addressed the barriers and facilitators to professional behavior. Two videos were shown at each workshop, and participants completed a postworkshop evaluation. Videos chosen for viewing were based upon preworkshop survey results that highlighted areas of concern at each specific site.

Postworkshop paper-based evaluations assessed participants' perceptions of the displayed behaviors on a Likert-type scale (1 = unprofessional to 5 = professional), using items validated in prior work,[6, 7, 8] as well as their level of agreement regarding the impact of the video-based exercises and their intent to change behavior (1 = strongly disagree to 5 = strongly agree). A constructed-response section for comments regarding the experience was included. Descriptive statistics and Wilcoxon rank sum analyses were performed.
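
As a concrete illustration of this analysis step, the sketch below applies descriptive statistics and a Wilcoxon rank sum test to 5-point Likert responses in Python. This is a minimal sketch assuming scipy is available; the rating vectors are invented placeholders, not data from the workshop evaluations.

```python
# Minimal sketch of the reported analysis: descriptive statistics plus a
# Wilcoxon rank sum test on 5-point Likert responses. All ratings below
# are invented placeholders, not data from the study.
import statistics
from scipy.stats import ranksums

faculty_intent = [4, 3, 5, 4, 2, 4, 5, 3, 4, 4]      # hypothetical faculty ratings
housestaff_intent = [5, 4, 4, 3, 5, 4, 4, 5, 3, 4]   # hypothetical housestaff ratings

# Descriptive statistics for each group
for label, scores in [("faculty", faculty_intent), ("housestaff", housestaff_intent)]:
    print(f"{label}: n = {len(scores)}, median = {statistics.median(scores)}")

# Wilcoxon rank sum test comparing the two ordinal distributions
stat, p = ranksums(faculty_intent, housestaff_intent)
print(f"Wilcoxon rank sum statistic = {stat:.2f}, P = {p:.3f}")
```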

RESULTS

Forty-four academic hospitalist faculty members (44/83; 53%) and 244 resident trainees (244/356; 68%) participated. When queried regarding their perception of the behaviors displayed in the videos, nearly 100% of faculty and trainees felt that disparaging the emergency department or primary care physician for missed findings or clinical decisions was somewhat unprofessional or unprofessional. Ninety percent of hospitalists and 93% of trainees rated celebrating a blocked admission as somewhat unprofessional or unprofessional (Table 1).

Hospitalist and Resident Perception of Portrayed Behaviors
Behavior | Faculty Rated as Unprofessional or Somewhat Unprofessional (n=44) | Housestaff Rated as Unprofessional or Somewhat Unprofessional (n=244)
  • NOTE: Abbreviations: ED/PCP, emergency department/primary care physician.

Disparaging the ED/PCP to colleagues for findings later discovered on the floor or patient care management decisions | 95.6% | 97.5%
Refusing an admission that could be considered appropriate for your service (eg, blocking) | 86.4% | 95.1%
Celebrating a blocked admission | 90.1% | 93.0%
Ordering a routine test as urgent to get it expedited | 77.2% | 80.3%

The scenarios portrayed were well received, with more than 85% of faculty and trainees agreeing that the behaviors displayed were realistic. Those who perceived the videos as very realistic were more likely to report intent to change behavior (93% vs 53%, P=0.01). Approximately two-thirds of faculty and housestaff agreed that they intended to change behavior based upon the experience (Table 2).

Postworkshop Evaluation
Evaluation Item | Faculty Level of Agreement (Strongly Agree or Agree) (n=44) | Housestaff Level of Agreement (Strongly Agree or Agree) (n=244)
The scenarios portrayed in the videos were realistic | 86.4% | 86.9%
I will change my behavior as a result of this exercise | 65.9% | 67.2%
I feel that this was a useful and effective exercise | 65.9% | 77.1%
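
To make the realism-versus-intent comparison above concrete, the sketch below shows how such an association could be tested given individual-level responses. The counts and ratings are hypothetical, since the article reports only the summary result (93% vs 53%, P=0.01); Fisher's exact test is shown as one reasonable choice for a dichotomized 2x2 comparison, alongside the rank-based test named in the Methods.

```python
# Illustrative sketch of the realism-vs-intent association. All counts and
# ratings are invented; the article reports only the summary comparison.
from scipy.stats import fisher_exact, ranksums

# Dichotomized 2x2 table: rows = perceived realism, columns = intent (yes, no)
table = [[28, 2],    # rated videos "very realistic" (hypothetical counts)
         [16, 14]]   # all other respondents (hypothetical counts)
odds_ratio, p = fisher_exact(table)
print(f"Fisher exact: OR = {odds_ratio:.1f}, P = {p:.4f}")

# Alternative rank-based view of the underlying Likert ratings, in the
# spirit of the Wilcoxon rank sum analyses described in the Methods
very_realistic = [5, 5, 4, 5, 4, 5, 4, 4]   # hypothetical intent ratings
less_realistic = [3, 4, 2, 3, 4, 3, 2, 4]
stat, p = ranksums(very_realistic, less_realistic)
print(f"Wilcoxon rank sum: statistic = {stat:.2f}, P = {p:.3f}")
```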

Qualitative comments in the constructed-response portion of the evaluation noted the effectiveness of the interactive materials. In addition, the need for focused faculty development was identified by 1 respondent, who stated: "If unprofessional behavior is the unwritten curriculum, there needs to be an explicit, written curriculum to address it." Finally, the aim of facilitating self-reflection is echoed in this faculty respondent's comment, "Always good to be reminded of our behaviors and the influence they have on others," and in this one from a resident physician: "It helps to re-evaluate how you talk to people."

CONCLUSIONS

Faculty can be a large determinant of the learning environment and can impact trainees' professional development.[9] Hospitalists should be encouraged to embrace faculty role-modeling of effective professional behaviors, especially given their increased presence in the inpatient learning environment. In addition, resident trainees and their behaviors contribute to the learning environment and influence the professional development of more junior trainees.[10] Targeting professionalism education toward previously identified and prevalent unprofessional behaviors in inpatient care may effect the most change among providers who practice in this setting. Individualized assessment of the learning environment may aid in identifying the common scenarios that plague a specific learning culture, allowing for relevant and targeted discussion of the factors that promote and perpetuate such behaviors.[11]

Interactive, video-based modules provided an effective way to promote reflection and robust discussion. This model of experiential learning is an effective form of professional development, as it engages the learner and stimulates ongoing incorporation of the topics addressed.[12, 13] Creating a shared concrete experience among targeted learners, using the video-based scenarios, stimulates reflective observation and, ultimately, experimentation, or incorporation into practice.[14]

There are several limitations to our evaluation, including that we focused solely on academic hospitalist programs and that our sample size for faculty and residents was small. Also, we addressed only a small, though representative, sample of unprofessional behaviors and have not yet linked the intervention to actual behavior change; further studies will be required to do so. Finally, the script scenarios used in this study were not previously published, as they were created specifically for this intervention. Validity evidence for these scenarios includes that they were based upon the results of earlier work from our institutions and underwent thorough peer review for content and clarity. Nevertheless, we believe these are positive findings for utilizing this type of interactive curriculum for professionalism education to promote self-reflection and behavior change.

Video-based professionalism education is a feasible, interactive mechanism to encourage self-reflection and intent to change behavior among faculty and resident physicians. Work is underway to conduct longitudinal assessments of the learning environments at the participating institutions to assess culture change, perceptions of behaviors, and the sustainability of this type of intervention.

Disclosures: The authors acknowledge funding from the American Board of Internal Medicine. The funders had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve publication of the finished manuscript. Results from this work were presented at the Midwest Society of General Internal Medicine Regional Meeting, Chicago, Illinois, September 2011; the Midwest Society of Hospital Medicine Regional Meeting, Chicago, Illinois, October 2011; and the Society of Hospital Medicine Annual Meeting, San Diego, California, April 2012. The authors declare that they do not have any conflicts of interest to disclose.

References
  1. Liaison Committee on Medical Education. Functions and structure of a medical school. Available at: http://www.lcme.org/functions.pdf. Accessed October 10, 2012.
  2. Gillespie C, Paik S, Ark T, Zabar S, Kalet A. Residents' perceptions of their own professionalism and the professionalism of their learning environment. J Grad Med Educ. 2009;1:208–215.
  3. Society of Hospital Medicine. The core competencies in hospital medicine. Available at: http://www.hospitalmedicine.org/Content/NavigationMenu/Education/CoreCurriculum/Core_Competencies.htm. Accessed October 10, 2012.
  4. The Joint Commission. Behaviors that undermine a culture of safety. Sentinel Event Alert. 2008;(40):1–3. Available at: http://www.jointcommission.org/assets/1/18/SEA_40.pdf. Accessed October 10, 2012.
  5. Rosenstein AH, O'Daniel M. A survey of the impact of disruptive behaviors and communication defects on patient safety. Jt Comm J Qual Patient Saf. 2008;34:464–471.
  6. Reddy ST, Iwaz JA, Didwania AK, et al. Participation in unprofessional behaviors among hospitalists: a multicenter study. J Hosp Med. 2012;7(7):543–550.
  7. Arora VM, Wayne DB, Anderson RA, et al. Participation in and perceptions of unprofessional behaviors among incoming internal medicine interns. JAMA. 2008;300:1132–1134.
  8. Arora VM, Wayne DB, Anderson RA, et al. Changes in perception of and participation in unprofessional behaviors during internship. Acad Med. 2010;85:S76–S80.
  9. Schumacher DJ, Slovin SR, Riebschleger MP, et al. Perspective: beyond counting hours: the importance of supervision, professionalism, transitions of care, and workload in residency training. Acad Med. 2012;87(7):883–888.
  10. Haidet P, Stein H. The role of the student-teacher relationship in the formation of physicians: the hidden curriculum as process. J Gen Intern Med. 2006;21:S16–S20.
  11. Thrush CR, Spollen JJ, Tariq SG, et al. Evidence for validity of a survey to measure the learning environment for professionalism. Med Teach. 2011;33(12):e683–e688.
  12. Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice Hall; 1984.
  13. Armstrong E, Parsa-Parsi R. How can physicians' learning style drive educational planning? Acad Med. 2005;80:680–684.
  14. Ber R, Alroy G. Twenty years of experience using trigger films as a teaching tool. Acad Med. 2001;76:656–658.
Issue
Journal of Hospital Medicine - 8(7)
Page Number
386-389

Unprofessional behavior in the inpatient setting has the potential to impact care delivery and the quality of trainee's educational experience. These behaviors, from disparaging colleagues to blocking admissions, can negatively impact the learning environment. The learning environment or conditions created by the patient care team's actions play a critical role in the development of trainees.[1, 2] The rising presence of hospitalists in the inpatient setting raises the question of how their actions impact the learning environment. Professional behavior has been defined as a core competency for hospitalists by the Society of Hospital Medicine.[3] Professional behavior of all team members, from faculty to trainee, can impact the learning environment and patient safety.[4, 5] However, few educational materials exist to train faculty and housestaff on recognizing and ameliorating unprofessional behaviors.

A prior assessment regarding hospitalists' lapses in professionalism identified scenarios that demonstrated increased participation by hospitalists at 3 institutions.[6] Participants reported observation or participation in specific unprofessional behaviors and rated their perception of these behaviors. Additional work within those residency environments demonstrated that residents' perceptions of and participation in these behaviors increased throughout training, with environmental characteristics, specifically faculty behavior, influencing trainee professional development and acclimation of these behaviors.[7, 8]

Although overall participation in egregious behavior was low, resident participation in 3 categories of unprofessional behavior increased during internship. Those scenarios included disparaging the emergency room or primary care physician for missed findings or management decisions, blocking or not taking admissions appropriate for the service in question, and misrepresenting a test as urgent to expedite obtaining the test. We developed our intervention focused on these areas to address professionalism lapses that occur during internship. Our earlier work showed faculty role models influenced trainee behavior. For this reason, we provided education to both residents and hospitalists to maximize the impact of the intervention.

We present here a novel, interactive, video‐based workshop curriculum for faculty and trainees that aims to illustrate unprofessional behaviors and outlines the role faculty may play in promoting such behaviors. In addition, we review the result of postworkshop evaluation on intent to change behavior and satisfaction.

METHODS

A grant from the American Board of Internal Medicine Foundation supported this project. The working group that resulted, the Chicago Professional Practice Project and Outcomes, included faculty representation from 3 Chicago‐area hospitals: the University of Chicago, Northwestern University, and NorthShore University HealthSystem. Academic hospitalists at these sites were invited to participate. Each site also has an internal medicine residency program in which hospitalists were expected to attend the teaching service. Given this, resident trainees at all participating sites, and 1 community teaching affiliate program (Mercy Hospital and Medical Center) where academic hospitalists at the University of Chicago rotate, were recruited for participation. Faculty champions were identified for each site, and 1 internal and external faculty representative from the working group served to debrief and facilitate. Trainee workshops were administered by 1 internal and external collaborator, and for the community site, 2 external faculty members. Workshops were held during established educational conference times, and lunch was provided.

Scripts highlighting each of the behaviors identified in the prior survey were developed and peer reviewed for clarity and face validity across the 3 sites. Medical student and resident actors were trained utilizing the finalized scripts, and a performance artist affiliated with the Screen Actors Guild assisted in their preparation for filming. All videos were filmed at the University of Chicago Pritzker School of Medicine Clinical Performance Center. The final videos ranged in length from 4 to 7 minutes and included title, cast, and funding source. As an example, 1 video highlighted the unprofessional behavior of misrepresenting a test as urgent to prioritize one's patient in the queue. This video included a resident, intern, and attending on inpatient rounds during which the resident encouraged the intern to misrepresent the patient's status to expedite obtaining the study and facilitate the patient's discharge. The resident stressed that he would be in the clinic and had many patients to see, highlighting the impact of workload on unprofessional behavior, and aggressively persuaded the intern to sell her test to have it performed the same day. When this occurred, the attending applauded the intern for her strong work.

A moderator guide and debriefing tools were developed to facilitate discussion. The duration of each of the workshops was approximately 60 minutes. After welcoming remarks, participants were provided tools to utilize during the viewing of each video. These checklists noted the roles of those depicted in the video, asked to identify positive or negative behaviors displayed, and included questions regarding how behaviors could be detrimental and how the situation could have been prevented. After viewing the videos, participants divided into small groups to discuss the individual exhibiting the unprofessional behavior, their perceived motivation for said behavior, and its impact on the team culture and patient care. Following a small‐group discussion, large‐group debriefing was performed, addressing the barriers and facilitators to professional behavior. Two videos were shown at each workshop, and participants completed a postworkshop evaluation. Videos chosen for viewing were based upon preworkshop survey results that highlighted areas of concern at that specific site.

Postworkshop paper‐based evaluations assessed participants' perception of displayed behaviors on a Likert‐type scale (1=unprofessional to 5=professional) utilizing items validated in prior work,[6, 7, 8] their level of agreement regarding the impact of video‐based exercises, and intent to change behavior using a Likert‐type scale (1=strongly disagree to 5=strongly agree). A constructed‐response section for comments regarding their experience was included. Descriptive statistics and Wilcoxon rank sum analyses were performed.

RESULTS

Forty‐four academic hospitalist faculty members (44/83; 53%) and 244 resident trainees (244/356; 68%) participated. When queried regarding their perception of the displayed behaviors in the videos, nearly 100% of faculty and trainees felt disparaging the emergency department or primary care physician for missed findings or clinical decisions was somewhat unprofessional or unprofessional. Ninety percent of hospitalists and 93% of trainees rated celebrating a blocked admission as somewhat unprofessional or unprofessional (Table 1).

Hospitalist and Resident Perception of Portrayed Behaviors
Behavior Faculty Rated as Unprofessional or Somewhat Unprofessional (n = 44) Housestaff Rated as Unprofessional or Somewhat Unprofessional (n=244)
  • NOTE: Abbreviations: ED/PCP, emergency department/primary care physician.

Disparaging the ED/PCP to colleagues for findings later discovered on the floor or patient care management decisions 95.6% 97.5%
Refusing an admission that could be considered appropriate for your service (eg, blocking) 86.4% 95.1%
Celebrating a blocked admission 90.1% 93.0%
Ordering a routine test as urgent to get it expedited 77.2% 80.3%

The scenarios portrayed were well received, with more than 85% of faculty and trainees agreeing that the behaviors displayed were realistic. Those who perceived videos as very realistic were more likely to report intent to change behavior (93% vs 53%, P=0.01). Nearly two‐thirds of faculty and 67% of housestaff expressed agreement that they intended to change behavior based upon the experience (Table 2).

Postworkshop Evaluation
Evaluation Item Faculty Level of Agreement (StronglyAgree or Agree) (n=44) Housestaff Level of Agreement (Strongly Agree or Agree) (n=244)
The scenarios portrayed in the videos were realistic 86.4% 86.9%
I will change my behavior as a result of this exercise 65.9% 67.2%
I feel that this was a useful and effective exercise 65.9% 77.1%

Qualitative comments in the constructed‐response portion of the evaluation noted the effectiveness of the interactive materials. In addition, the need for focused faculty development was identified by 1 respondent who stated: If unprofessional behavior is the unwritten curriculum, there needs to be an explicit, written curriculum to address it. Finally, the aim of facilitating self‐reflection is echoed in this faculty respondent's comment: Always good to be reminded of our behaviors and the influence they have on others and from this resident physician It helps to re‐evaluate how you talk to people.

CONCLUSIONS

Faculty can be a large determinant of the learning environment and impact trainees' professional development.[9] Hospitalists should be encouraged to embrace faculty role‐modeling of effective professional behaviors, especially given their increased presence in the inpatient learning environment. In addition, resident trainees and their behaviors contribute to the learning environment and influence the further professional development of more junior trainees.[10] Targeting professionalism education toward previously identified and prevalent unprofessional behaviors in the inpatient care of patients may serve to affect the most change among providers who practice in this setting. Individualized assessment of the learning environment may aid in identifying common scenarios that may plague a specific learning culture, allowing for relevant and targeted discussion of factors that promote and perpetuate such behaviors.[11]

Interactive, video‐based modules provided an effective way to promote interactive reflection and robust discussion. This model of experiential learning is an effective form of professional development as it engages the learner and stimulates ongoing incorporation of the topics addressed.[12, 13] Creating a shared concrete experience among targeted learners, using the video‐based scenarios, stimulates reflective observation, and ultimately experimentation, or incorporation into practice.[14]

There are several limitations to our evaluation including that we focused solely on academic hospitalist programs, and our sample size for faculty and residents was small. Also, we only addressed a small, though representative, sample of unprofessional behaviors and have not yet linked intervention to actual behavior change. Finally, the script scenarios that we used in this study were not previously published as they were created specifically for this intervention. Validity evidence for these scenarios include that they were based upon the results of earlier work from our institutions and underwent thorough peer review for content and clarity. Further studies will be required to do this. However, we do believe that these are positive findings for utilizing this type of interactive curriculum for professionalism education to promote self‐reflection and behavior change.

Video‐based professionalism education is a feasible, interactive mechanism to encourage self‐reflection and intent to change behavior among faculty and resident physicians. Future study is underway to conduct longitudinal assessments of the learning environments at the participating institutions to assess culture change, perceptions of behaviors, and sustainability of this type of intervention.

Disclosures: The authors acknowledge funding from the American Board of Internal Medicine. The funders had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve publication of the finished manuscript. Results from this work have been presented at the Midwest Society of General Internal Medicine Regional Meeting, Chicago, Illinois, September 2011; Midwest Society of Hospital Medicine Regional Meeting, Chicago, Illinois, October 2011, and Society of Hospital Medicine Annual Meeting, San Diego, California, April 2012. The authors declare that they do not have any conflicts of interest to disclose.

Unprofessional behavior in the inpatient setting has the potential to impact care delivery and the quality of trainee's educational experience. These behaviors, from disparaging colleagues to blocking admissions, can negatively impact the learning environment. The learning environment or conditions created by the patient care team's actions play a critical role in the development of trainees.[1, 2] The rising presence of hospitalists in the inpatient setting raises the question of how their actions impact the learning environment. Professional behavior has been defined as a core competency for hospitalists by the Society of Hospital Medicine.[3] Professional behavior of all team members, from faculty to trainee, can impact the learning environment and patient safety.[4, 5] However, few educational materials exist to train faculty and housestaff on recognizing and ameliorating unprofessional behaviors.

A prior assessment regarding hospitalists' lapses in professionalism identified scenarios that demonstrated increased participation by hospitalists at 3 institutions.[6] Participants reported observation or participation in specific unprofessional behaviors and rated their perception of these behaviors. Additional work within those residency environments demonstrated that residents' perceptions of and participation in these behaviors increased throughout training, with environmental characteristics, specifically faculty behavior, influencing trainee professional development and acclimation of these behaviors.[7, 8]

Although overall participation in egregious behavior was low, resident participation in 3 categories of unprofessional behavior increased during internship. Those scenarios included disparaging the emergency room or primary care physician for missed findings or management decisions, blocking or not taking admissions appropriate for the service in question, and misrepresenting a test as urgent to expedite obtaining the test. We developed our intervention focused on these areas to address professionalism lapses that occur during internship. Our earlier work showed faculty role models influenced trainee behavior. For this reason, we provided education to both residents and hospitalists to maximize the impact of the intervention.

We present here a novel, interactive, video‐based workshop curriculum for faculty and trainees that aims to illustrate unprofessional behaviors and outlines the role faculty may play in promoting such behaviors. In addition, we review the result of postworkshop evaluation on intent to change behavior and satisfaction.

METHODS

A grant from the American Board of Internal Medicine Foundation supported this project. The working group that resulted, the Chicago Professional Practice Project and Outcomes, included faculty representation from 3 Chicago‐area hospitals: the University of Chicago, Northwestern University, and NorthShore University HealthSystem. Academic hospitalists at these sites were invited to participate. Each site also has an internal medicine residency program in which hospitalists were expected to attend the teaching service. Given this, resident trainees at all participating sites, and 1 community teaching affiliate program (Mercy Hospital and Medical Center) where academic hospitalists at the University of Chicago rotate, were recruited for participation. Faculty champions were identified for each site, and 1 internal and external faculty representative from the working group served to debrief and facilitate. Trainee workshops were administered by 1 internal and external collaborator, and for the community site, 2 external faculty members. Workshops were held during established educational conference times, and lunch was provided.

Scripts highlighting each of the behaviors identified in the prior survey were developed and peer reviewed for clarity and face validity across the 3 sites. Medical student and resident actors were trained using the finalized scripts, and a performance artist affiliated with the Screen Actors Guild assisted in their preparation for filming. All videos were filmed at the University of Chicago Pritzker School of Medicine Clinical Performance Center. The final videos ranged in length from 4 to 7 minutes and included title, cast, and funding source. As an example, 1 video highlighted the unprofessional behavior of misrepresenting a test as urgent to prioritize one's patient in the queue. This video depicted a resident, intern, and attending on inpatient rounds during which the resident encouraged the intern to misrepresent the patient's status to expedite the study and facilitate the patient's discharge. The resident stressed that he would be in clinic and had many patients to see, highlighting the impact of workload on unprofessional behavior, and aggressively persuaded the intern to "sell" her test so it would be performed the same day. When this occurred, the attending applauded the intern for her "strong work."

A moderator guide and debriefing tools were developed to facilitate discussion. Each workshop lasted approximately 60 minutes. After welcoming remarks, participants were given tools to use while viewing each video. These checklists noted the roles of those depicted in the video, asked participants to identify positive or negative behaviors displayed, and included questions regarding how the behaviors could be detrimental and how the situation could have been prevented. After viewing the videos, participants divided into small groups to discuss the individual exhibiting the unprofessional behavior, that individual's perceived motivation, and the behavior's impact on team culture and patient care. Following the small‐group discussion, a large‐group debriefing addressed the barriers to and facilitators of professional behavior. Two videos were shown at each workshop, selected on the basis of preworkshop survey results that highlighted areas of concern at that specific site, and participants completed a postworkshop evaluation.

Postworkshop paper‐based evaluations assessed participants' perceptions of the displayed behaviors on a Likert‐type scale (1 = unprofessional to 5 = professional), utilizing items validated in prior work,[6, 7, 8] as well as their level of agreement regarding the impact of the video‐based exercises and their intent to change behavior (1 = strongly disagree to 5 = strongly agree). A constructed‐response section for comments regarding the experience was included. Descriptive statistics and Wilcoxon rank sum analyses were performed.

RESULTS

Forty‐four academic hospitalist faculty members (44/83; 53%) and 244 resident trainees (244/356; 68%) participated. When queried regarding their perceptions of the behaviors displayed in the videos, nearly all faculty and trainees rated disparaging the emergency department or primary care physician for missed findings or clinical decisions as unprofessional or somewhat unprofessional. Ninety percent of hospitalists and 93% of trainees rated celebrating a blocked admission as unprofessional or somewhat unprofessional (Table 1).

Table 1. Hospitalist and Resident Perception of Portrayed Behaviors

Behavior | Faculty Rated as Unprofessional or Somewhat Unprofessional (n=44) | Housestaff Rated as Unprofessional or Somewhat Unprofessional (n=244)
Disparaging the ED/PCP to colleagues for findings later discovered on the floor or patient care management decisions | 95.6% | 97.5%
Refusing an admission that could be considered appropriate for your service (eg, blocking) | 86.4% | 95.1%
Celebrating a blocked admission | 90.1% | 93.0%
Ordering a routine test as urgent to get it expedited | 77.2% | 80.3%
NOTE: Abbreviations: ED/PCP, emergency department/primary care physician.

The scenarios portrayed were well received, with more than 85% of faculty and trainees agreeing that the behaviors displayed were realistic. Participants who perceived the videos as very realistic were more likely to report an intent to change behavior (93% vs 53%, P=0.01). Approximately two‐thirds of faculty (65.9%) and housestaff (67.2%) agreed that they intended to change their behavior based upon the experience (Table 2).

Table 2. Postworkshop Evaluation

Evaluation Item | Faculty Level of Agreement (Strongly Agree or Agree) (n=44) | Housestaff Level of Agreement (Strongly Agree or Agree) (n=244)
The scenarios portrayed in the videos were realistic | 86.4% | 86.9%
I will change my behavior as a result of this exercise | 65.9% | 67.2%
I feel that this was a useful and effective exercise | 65.9% | 77.1%

Qualitative comments in the constructed‐response portion of the evaluation noted the effectiveness of the interactive materials. In addition, the need for focused faculty development was identified by 1 respondent, who stated: "If unprofessional behavior is the unwritten curriculum, there needs to be an explicit, written curriculum to address it." Finally, the aim of facilitating self‐reflection is echoed in one faculty respondent's comment, "Always good to be reminded of our behaviors and the influence they have on others," and in this remark from a resident physician: "It helps to re‐evaluate how you talk to people."

CONCLUSIONS

Faculty can be a large determinant of the learning environment and can impact trainees' professional development.[9] Hospitalists should be encouraged to role‐model effective professional behaviors, especially given their increased presence in the inpatient learning environment. In addition, resident trainees and their behaviors contribute to the learning environment and influence the professional development of more junior trainees.[10] Targeting professionalism education toward previously identified, prevalent unprofessional behaviors in inpatient care may effect the most change among providers who practice in this setting. Individualized assessment of the learning environment may aid in identifying common scenarios that plague a specific learning culture, allowing for relevant, targeted discussion of the factors that promote and perpetuate such behaviors.[11]

Interactive, video‐based modules provided an effective way to promote reflection and robust discussion. This model of experiential learning is an effective form of professional development, as it engages the learner and stimulates ongoing incorporation of the topics addressed.[12, 13] Creating a shared concrete experience among targeted learners, using the video‐based scenarios, stimulates reflective observation and, ultimately, experimentation, or incorporation into practice.[14]

There are several limitations to our evaluation. We focused solely on academic hospitalist programs, and our sample sizes for faculty and residents were small. We also addressed only a small, though representative, sample of unprofessional behaviors and have not yet linked the intervention to actual behavior change; further studies will be required to do so. Finally, the scripted scenarios used in this study were not previously published, as they were created specifically for this intervention. Validity evidence for these scenarios includes that they were based upon the results of earlier work from our institutions and underwent thorough peer review for content and clarity. Nevertheless, we believe these are positive findings for utilizing this type of interactive curriculum for professionalism education to promote self‐reflection and behavior change.

Video‐based professionalism education is a feasible, interactive mechanism to encourage self‐reflection and intent to change behavior among faculty and resident physicians. Future study is underway to conduct longitudinal assessments of the learning environments at the participating institutions to assess culture change, perceptions of behaviors, and sustainability of this type of intervention.

Disclosures: The authors acknowledge funding from the American Board of Internal Medicine. The funders had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve publication of the finished manuscript. Results from this work have been presented at the Midwest Society of General Internal Medicine Regional Meeting, Chicago, Illinois, September 2011; the Midwest Society of Hospital Medicine Regional Meeting, Chicago, Illinois, October 2011; and the Society of Hospital Medicine Annual Meeting, San Diego, California, April 2012. The authors declare that they have no conflicts of interest to disclose.

References
  1. Liaison Committee on Medical Education. Functions and structure of a medical school. Available at: http://www.lcme.org/functions.pdf. Accessed October 10, 2012.
  2. Gillespie C, Paik S, Ark T, Zabar S, Kalet A. Residents' perceptions of their own professionalism and the professionalism of their learning environment. J Grad Med Educ. 2009;1:208-215.
  3. Society of Hospital Medicine. The core competencies in hospital medicine. Available at: http://www.hospitalmedicine.org/Content/NavigationMenu/Education/CoreCurriculum/Core_Competencies.htm. Accessed October 10, 2012.
  4. The Joint Commission. Behaviors that undermine a culture of safety. Sentinel Event Alert. 2008;(40):1-3. Available at: http://www.jointcommission.org/assets/1/18/SEA_40.pdf. Accessed October 10, 2012.
  5. Rosenstein AH, O'Daniel M. A survey of the impact of disruptive behaviors and communication defects on patient safety. Jt Comm J Qual Patient Saf. 2008;34:464-471.
  6. Reddy ST, Iwaz JA, Didwania AK, et al. Participation in unprofessional behaviors among hospitalists: a multicenter study. J Hosp Med. 2012;7(7):543-550.
  7. Arora VM, Wayne DB, Anderson RA, et al. Participation in and perceptions of unprofessional behaviors among incoming internal medicine interns. JAMA. 2008;300:1132-1134.
  8. Arora VM, Wayne DB, Anderson RA, et al. Changes in perception of and participation in unprofessional behaviors during internship. Acad Med. 2010;85:S76-S80.
  9. Schumacher DJ, Slovin SR, Riebschleger MP, et al. Perspective: beyond counting hours: the importance of supervision, professionalism, transitions of care, and workload in residency training. Acad Med. 2012;87(7):883-888.
  10. Haidet P, Stein H. The role of the student-teacher relationship in the formation of physicians: the hidden curriculum as process. J Gen Intern Med. 2006;21:S16-S20.
  11. Thrush CR, Spollen JJ, Tariq SG, et al. Evidence for validity of a survey to measure the learning environment for professionalism. Med Teach. 2011;33(12):e683-e688.
  12. Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice Hall; 1984.
  13. Armstrong E, Parsa-Parsi R. How can physicians' learning style drive educational planning? Acad Med. 2005;80:680-684.
  14. Ber R, Alroy G. Twenty years of experience using trigger films as a teaching tool. Acad Med. 2001;76:656-658.
Issue
Journal of Hospital Medicine - 8(7)
Page Number
386-389
Display Headline
Promoting professionalism via a video‐based educational workshop for academic hospitalists and housestaff
Article Source
© 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Jeanne M. Farnan, MD, 5841 South Maryland Avenue, AMB W216 MC 2007, Chicago, IL 60637; Telephone: 773‐834‐3401; Fax: 773‐834‐2238; E‐mail: jfarnan@medicine.bsd.uchicago.edu

Hospitalist Teaching Rounds for FUTURE

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
FUTURE: New strategies for hospitalists to overcome challenges in teaching on today's wards

The implementation of resident duty hour restrictions has created a clinical learning environment on the wards quite different from that of any previous era. The Accreditation Council for Graduate Medical Education issued its first set of regulations limiting consecutive hours worked by residents in 2003, and further restricted hours in 2011.[1] These restrictions have had many implications for patient care, education, and clinical training, particularly for hospitalists, who spend the majority of their time in this setting and are heavily involved in undergraduate and graduate clinical education in academic medical centers.[2, 3]

As learning environments have been shifting, so has the composition of learners. The Millennial Generation (or Generation Y), defined as those born approximately between 1980 and 2000, represents those young clinicians currently filling the halls of medical schools and ranks of residency and fellowship programs.[4] Interestingly, the current system of restricted work hours is the only system under which the Millennial Generation has ever trained.

As this new generation represents the bulk of current trainees, hospitalist faculty must consider how their teaching styles can be adapted to accommodate these learners. For teaching hospitalists, an approach that considers both the learning environment as shaped by duty hours and the preferences of Millennial learners is necessary to educate the next generation of trainees. This article introduces potential strategies for hospitalists to better align teaching on the wards with the preferences of Millennial learners under the constraints of residency duty hours.

THE NEWEST GENERATION OF LEARNERS

The Millennial Generation has been well described.[4, 5, 6, 7, 8, 9, 10] Broadly speaking, this generation is thought to have been raised by attentive and involved parents, influencing relationships with educators and mentors; they respect authority but do not hesitate to question the relevance of assignments or decisions. Millennials prefer structured learning environments that focus heavily on interaction and experiential learning, and they value design and appearance in how material is presented.[7] Millennials also seek clear expectations and immediate feedback on their performance, and though they have sometimes been criticized for a strong sense of entitlement, they have a strong desire for collaboration and group‐based activity.[5, 6]

One of the most notable and defining characteristics of the Millennial Generation is an affinity for technology and innovation.[7, 8, 9] Web‐based learning tools that are interactive and engaging, such as blogs, podcasts, and streaming videos, are familiar and favored methods of learning. Millennials are skilled at finding information and providing answers and data, but may need help with synthesis and application.[5] They take pride in their ability to multitask, but can be prone to doing so inappropriately, particularly with technology that is readily available.[11]

Few studies have explored characteristics of the Millennial Generation specific to medical trainees. One study examined personality characteristics of Millennial medical students compared to Generation X students (those born from 1965 to 1980) at a single institution. Millennial students scored higher on warmth, reasoning, emotional stability, rule consciousness, social boldness, sensitivity, apprehension, openness to change, and perfectionism compared to Generation X students, and lower on measures of self‐reliance.[12] Additionally, when motives for behavior were studied, Millennial medical students scored higher on needs for affiliation and achievement, and lower on needs for power.[13]

DUTY HOURS: A GENERATION APART

As noted previously, the Millennial Generation is the first to train exclusively in the era of duty hours restrictions. The oldest members of this generation, those born in 1981, were entering medical school at the time of the first duty hours restrictions in 2003, and thus have trained and practiced only in an environment in which work hour limits were an essential part of residency training.

Though duty hours have been an omnipresent part of training for the Millennial Generation, the clinical learning environment that they have known continues to evolve and change. Time for teaching, in particular, has been especially strained by work hour limits, and this has been noted by both attending physicians and trainees with each iteration of work hours limits. Attendings in one study estimated that time spent teaching on general medicine wards was reduced by about 20% following the 2003 limits, and over 40% of residents in a national survey reported that the 2011 limits had worsened the quality of education.[14, 15]

GENERATIONAL STRATEGIES FOR SUCCESS FOR HOSPITALIST TEACHING ATTENDINGS

The time limitations imposed by duty hours restrictions have compelled teaching rounds to become more patient‐care centered and often less learner‐centered, as providing patient care becomes the prime obligation for this limited time period. Millennial learners are accustomed to being the center of attention in educational environments, and changing the focus from education to patient care in the wards setting may be an abrupt transition for some learners.[6] However, hospitalists can help restructure teaching opportunities on the clinical wards by using teaching methods of the highest value to Millennial learners to promote learning under the conditions of duty hours limitations.

An approach using these methods was developed by reviewing recent literature as well as educational innovations that have been presented at scholarly meetings (eg, Sal Khan's presentation at the 2012 Association of American Medical Colleges meeting).[16] The authors discussed potential teaching techniques that were thought to be feasible to implement in the context of the current learning environment, with consideration of learning theories that would be most effective for the target group of learners (eg, adult learning theory).[17] A mnemonic was created to consolidate strategies thought to best represent these techniques. FUTURE is a group of teaching strategies that can be used by hospitalists to improve teaching rounds by Flipping the Wards, Using Documentation to Teach, Technology‐Enabled Teaching, Using Guerilla Teaching Tactics, Rainy Day Teaching, and Embedding Teaching Moments into Rounds.

Flipping the Wards

Millennial learners prefer novel methods of delivery that are interactive and technology based.[7, 8, 9] Lectures and slide‐based presentations frequently do not feature the degree of interactive engagement that they seek, and methods such as case‐based presentations and simulation may be more suitable. The Khan Academy is a not‐for‐profit organization that has been proposed as a model for future directions for medical education.[18] The academy's global classroom houses over 4000 videos and interactive modules to allow students to progress through topics on their own time.[19] Teaching rounds can be similarly flipped such that discussion and group work take place during rounds, whereas lectures, modules, and reading are reserved for individual study.[18]

As time pressures shift the focus of rounds toward discussion of patient‐care tasks, finding time for teaching outside of rounds can be emphasized to inspire self‐directed learning. When residents need time to tend to immediate patient‐care issues, hospitalist attendings can use that time to search for articles to send to team members. Rather than distributing paper copies that may be lost, cloud‐based file‐sharing services such as Dropbox (Dropbox, San Francisco, CA) or Google Drive (Google Inc., Mountain View, CA) can be used to disseminate articles, which can be pulled up in real time on mobile devices during rounds and later deposited in shared folders accessible to all team members.[20, 21] The advantage of this approach is that it does not require all learners to be present on rounds, which may not be possible under duty hours restrictions.

Using Documentation to Teach

Trainees report that one of the most desirable attributes of clinical teachers is when they delineate their clinical reasoning and thought process.[22] Similarly, Millennial learners specifically desire to understand the rationale behind their teachers' actions.[6] Documentation in the medical chart or electronic health record (EHR) can be used to enhance teaching and role‐model clinical reasoning in a transparent and readily available fashion.

Billing requirements necessitate daily attending documentation in the form of an attestation. Hospitalist attendings can use attestations to model thought process and clinical synthesis in the daily assessment of a patient. For example, an attestation one‐liner can concisely summarize the patient's course or highlight the most pressing issue of the day, rather than simply serve as a placeholder for billing or "agree with above" in reference to housestaff documentation. This practice can show residents how to write a short snapshot of a patient's care in addition to improving communication.

Additionally, the EHR can be a useful platform to guide feedback for residents on their clinical performance. Millennial learners prefer specific, immediate feedback, and trainee documentation can serve as a template to show examples of good documentation and clinical reasoning as well as areas needing improvement.[5] These tangible examples of clinical performance are specific and understandable for trainees to guide their self‐learning and improvement.

Technology‐Enabled Teaching

Using technology wisely on the wards can improve efficiency while also taking advantage of teaching methods familiar to Millennial learners. Technology can be used in a positive manner to keep the focus on the patient and enhance teaching when time is limited on rounds. Smartphones and tablets have become an omnipresent part of the clinical environment.[23] Rather than distracting from rounds, these tools can be used to answer clinical questions in real time, thus directly linking the question to the patient's care.

The EHR is a powerful technological resource that is readily available to enhance teaching during a busy ward schedule. Clinical information is electronically accessible at all hours for both trainees and attendings, rather than only at prespecified times on daily rounds, and the Millennial Generation is accustomed to receiving and sharing information in this fashion.[24] Technology platforms that enable simultaneous sharing of information among multiple members of a team can also be used to assist in sharing clinical information in this manner. Health Insurance Portability and Accountability Act‐compliant group text‐messaging applications for smartphones and tablets such as GroupMD (GroupMD, San Francisco, CA) allow members of a team to connect through 1 portal.[25] These discussions can foster communication, inspire clinical questions, and model the practice of timely response to new information.

Using Guerilla Teaching Tactics

Though time may be limited by work hours, opportunities to create teaching moments are embedded in clinical practice. The principle of guerilla marketing uses unconventional marketing tactics in everyday locales to aggressively promote a product.[26] Similarly, guerilla teaching can be employed on rounds to make teaching points about common patient care issues that arise in nearly every room, such as Foley catheters after seeing one at the bedside or hand hygiene after leaving a room. These topics are familiar to trainees as well as hospitalist attendings and provide the relevance that Millennial learners seek because they apply directly to the patient at hand.

Memory triggers or checklists are another way to systematically introduce guerilla teaching on commonplace topics. The IBCD checklist, for example, has been successfully implemented at our institution to promote adherence to 4 quality measures.[27] IBCD, which stands for immunizations, bedsores, catheters, and deep vein thrombosis prophylaxis, is easily and quickly tacked on as a checklist item at the end of the problem list during a presentation. Similar checklists can serve as teaching points on quality and safety in inpatient care, as well as reminders to consider these issues for every patient.

Rainy Day Teaching

Hospitalist teaching attendings recognize that duty hours have shifted the preferred time for teaching away from busy admission periods such as postcall rounds.[28] The limited time spent reviewing new admissions is now often focused on patient care issues, with much of the discussion eliminated. However, hospitalist attendings can be proactive and save certain teaching moments for rainy day teaching, anticipating topics to introduce during lower census times. Additionally, access to the EHR allows attendings to preview cases that residents have admitted during a call period, which may facilitate planning teaching topics for future opportunities.[23]

Though teaching is an essential part of the hospitalist teaching attending role, the Millennial Generation's affinity for teamwork makes it possible to utilize additional team members as teachers for the group. This type of distribution of responsibility, or outsourcing of teaching, can be done in the form of a teaching or float resident. These individuals can be directed to search the literature to answer clinical questions the team may have during rounds and report back, which may influence decision making and patient care as well as provide education.[29]

Embedding Teaching Moments Into Rounds

Dr. Francis W. Peabody may have been addressing students many generations removed from Millennial learners when he implored them to remember that "the secret of the care of the patient is in caring for the patient," but his maxim still rings true today.[30] This advice provides an important insight into how the focus can be kept on the patient by emphasizing physical examination and history‐taking skills, which engage learners in hands‐on activity and ground that education in a patient‐based experience.[31] The Stanford 25 represents a successful project that refocuses the doctor‐patient encounter on the bedside.[32] Using a Web‐based platform, this initiative instructs on 25 physical examination maneuvers, utilizing teaching methods that are familiar to Millennial learners and are patient focused.

In addition to emphasizing bedside teaching, smaller moments can be used during rounds to establish an expectation for learning. Hospitalist attendings can create a routine with daily teaching moments, such as an electrocardiogram or a daily Medical Knowledge Self‐Assessment Program question, a source of internal medicine board preparation material published by the American College of Physicians.[33] These are opportunities to inject a quick educational moment that is easily relatable to the patients on the team's service. Using teaching moments that are routine, accessible, and relevant to patient care can help shape Millennial learners' expectations that teaching be a daily occurrence interwoven within clinical care provided during rounds.

There are several limitations to our work. These strategies do not represent a systematic review, and there is little evidence that our approach is more effective than conventional teaching methods. Though we address hospitalists specifically, these strategies have not been studied in specific groups and are likely suitable for all inpatient educators. Given the paucity of literature regarding the learning preferences of Millennial medical trainees, it is difficult to know which methods may truly be most desirable in the wards setting, as many of the needs and learning styles considered in our approach are borrowed from other, more traditional learning environments. It is also unclear how adoptable our strategies may be for educators from other generations, who may have different approaches to teaching. Further research is necessary to identify areas for faculty development in learning new techniques and to compare the efficacy of our approach with conventional methods on standardized educational outcomes, such as In‐Training Examination performance, as well as patient outcomes.

ACCEPTING THE CHALLENGE

The landscape of clinical teaching has shifted considerably in recent years, in both the makeup of the learners whom educators are responsible for teaching and the challenges of teaching under duty hours restrictions. Though rounds are more focused on patient care than in the past, it is possible to work within the current structure to promote successful learning with an approach that considers the preferences of today's learners.

A hospitalist's natural habitat, the busy inpatient ward, is a clinical learning environment with rich potential for innovation and excellence in teaching. The challenges of practicing hospital medicine closely parallel the challenges of teaching under the constraints of duty hours restrictions; both require a creative approach to problem solving and an affinity for teamwork. The hospitalist community is well suited not only to meet these challenges but also to lead in teaching effectively on today's wards. Maximizing interaction, embracing technology, and encouraging group‐based learning may represent the keys to a successful approach to teaching the Millennial Generation in a post‐duty hours world.

References
  1. Nasca TJ, Day SH, Amis ES; ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363(2):e3.
  2. Wachter RM, Goldman L. The emerging role of "hospitalists" in the American health care system. N Engl J Med. 1996;335(7):514-517.
  3. Liston BW, O'Dorisio N, Walker C, et al. Hospital medicine in the internal medicine clerkship: results from a national survey. J Hosp Med. 2012;7(7):557-561.
  4. Howe N, Strauss W. Millennials Rising: The Next Great Generation. New York, NY: Random House/Vintage Books; 2000.
  5. Eckleberry-Hunt J, Tucciarone J. The challenges and opportunities of teaching "Generation Y." J Grad Med Educ. 2011;3(4):458-461.
  6. Twenge JM. Generational changes and their impact in the classroom: teaching Generation Me. Med Educ. 2009;43(5):398-405.
  7. Roberts DH, Newman LR, Schwarzstein RM. Twelve tips for facilitating Millennials' learning. Med Teach. 2012;34(4):274-278.
  8. Pew Research Center. Millennials: a portrait of generation next. Available at: http://pewsocialtrends.org/files/2010/10/millennials‐confident‐connected‐open‐to‐change.pdf. Accessed February 28, 2013.
  9. Mohr NM, Moreno-Walton L, Mills AM, et al. Generational influences in academic emergency medicine: teaching and learning, mentoring, and technology (part I). Acad Emerg Med. 2011;18(2):190-199.
  10. Mohr NM, Moreno-Walton L, Mills AM, et al. Generational influences in academic emergency medicine: structure, function, and culture (part II). Acad Emerg Med. 2011;18(2):200-207.
  11. Katz-Sidlow RJ, Ludwig A, Miller S, Sidlow R. Smartphone use during inpatient attending rounds: prevalence, patterns, and potential for distraction. J Hosp Med. 2012;8:595-599.
  12. Borges NJ, Manuel RS, Elam CL, et al. Comparing millennial and generation X medical students at one medical school. Acad Med. 2006;81(6):571-576.
  13. Borges NJ, Manuel RS, Elam CL, Jones BJ. Differences in motives between Millennial and Generation X students. Med Educ. 2010;44(6):570-576.
  14. Arora V, Meltzer D. Effect of ACGME duty hours on attending physician teaching and satisfaction. Arch Intern Med. 2008;168(11):1226-1227.
  15. Drolet BC, Christopher DA, Fischer SA. Residents' response to duty-hours regulations—a follow-up national survey. N Engl J Med. 2012;366(24):e35.
  16. Khan S. Innovation arc: new approaches. Presented at: Association of American Medical Colleges National Meeting; November 2012; San Francisco, CA.
  17. Spencer JA, Jordan RK. Learner-centered approaches in medical education. BMJ. 1999;318:1280-1283.
  18. Prober CG, Heath C. Lecture halls without lectures—a proposal for medical education. N Engl J Med. 2012;366(18):1657-1659.
  19. The Khan Academy. Available at: https://www.khanacademy.org/. Accessed March 4, 2013.
  20. Dropbox. Dropbox Inc. Available at: https://www.dropbox.com/. Accessed April 19, 2013.
  21. Google Drive. Google Inc. Available at: https://drive.google.com/. Accessed April 19, 2013.
  22. Sutkin G, Wagner E, Harris I, et al. What makes a good clinical teacher in medicine? A review of the literature. Acad Med. 2008;83(5):452-466.
  23. Baumgart DC. Smartphones in clinical practice, medical education, and research. Arch Intern Med. 2011;171(14):1294-1296.
  24. Martin SK, Tulla K, Meltzer DO, et al. Attending use of the electronic health record (EHR) and implications for housestaff supervision. Presented at: Midwest Society of General Internal Medicine Regional Meeting; September 2012; Chicago, IL.
  25. GroupMD. GroupMD Inc. Available at: http://group.md. Accessed April 19, 2013.
  26. Levinson J. Guerilla Marketing: Secrets for Making Big Profits From Your Small Business. Boston, MA: Houghton Mifflin; 1984.
  27. Aspesi A, Kauffmann GE, Davis AM, et al. IBCD: development and testing of a checklist to improve quality of care for hospitalized general medical patients. Jt Comm J Qual Patient Saf. 2013;39(4):147-156.
  28. Cohen S, Sarkar U. Ice cream rounds. Acad Med. 2013;88(1):66.
  29. Lucas BP, Evans AT, Reilly BM, et al. The impact of evidence on physicians' inpatient treatment decisions. J Gen Intern Med. 2004;19(5 pt 1):402-409.
  30. Peabody FW. Landmark article March 19, 1927: the care of the patient. By Francis W. Peabody. JAMA. 1984;252(6):813-818.
  31. Gonzalo JD, Heist BS, Duffy BL, et al. The art of bedside rounds: a multi-center qualitative study of strategies used by experienced bedside teachers. J Gen Intern Med. 2013;28(3):412-420.
  32. Stanford University School of Medicine. Stanford Medicine 25. Available at: http://stanfordmedicine25.stanford.edu/. Accessed February 28, 2013.
  33. Medical Knowledge Self-Assessment Program 16. The American College of Physicians. Available at: https://mksap.acponline.org. Accessed April 19, 2013.
Issue
Journal of Hospital Medicine - 8(7)
Page Number
409-413

The implementation of resident duty hour restrictions has created a clinical learning environment on the wards quite different from any previous era. The Accreditation Council for Graduate Medical Education issued its first set of regulations limiting consecutive hours worked for residents in 2003, and further restricted hours in 2011.[1] These restrictions have had many implications across several aspects of patient care, education, and clinical training, particularly for hospitalists who spend the majority of their time in this setting and are heavily involved in undergraduate and graduate clinical education in academic medical centers.[2, 3]

As learning environments have been shifting, so has the composition of learners. The Millennial Generation (or Generation Y), defined as those born approximately between 1980 and 2000, represents those young clinicians currently filling the halls of medical schools and ranks of residency and fellowship programs.[4] Interestingly, the current system of restricted work hours is the only system under which the Millennial Generation has ever trained.

As this new generation represents the bulk of current trainees, hospitalist faculty must consider how their teaching styles can be adapted to accommodate these learners. For teaching hospitalists, an approach that considers the learning environment as affected by duty hours, as well as the preferences of Millennial learners, is necessary to educate the next generation of trainees. This article aimed to introduce potential strategies for hospitalists to better align teaching on the wards with the preferences of Millennial learners under the constraints of residency duty hours.

THE NEWEST GENERATION OF LEARNERS

The Millennial Generation has been well described.[4, 5, 6, 7, 8, 9, 10] Broadly speaking, this generation is thought to have been raised by attentive and involved parents, influencing relationships with educators and mentors; they respect authority but do not hesitate to question the relevance of assignments or decisions. Millennials prefer structured learning environments that focus heavily on interaction and experiential learning, and they value design and appearance in how material is presented.[7] Millennials also seek clear expectations and immediate feedback on their performance, and though they have sometimes been criticized for a strong sense of entitlement, they have a strong desire for collaboration and group‐based activity.[5, 6]

One of the most notable and defining characteristics of the Millennial Generation is an affinity for technology and innovation.[7, 8, 9] Web‐based learning tools that are interactive and engaging, such as blogs, podcasts, or streaming videos are familiar and favored methods of learning. Millennials are skilled at finding information and providing answers and data, but may need help with synthesis and application.[5] They take pride in their ability to multitask, but can be prone to doing so inappropriately, particularly with technology that is readily available.[11]

Few studies have explored characteristics of the Millennial Generation specific to medical trainees. One study examined personality characteristics of Millennial medical students compared to Generation X students (those born from 19651980) at a single institution. Millennial students scored higher on warmth, reasoning, emotional stability, rule consciousness, social boldness, sensitivity, apprehension, openness to change, and perfectionism compared to Generation X students. They scored lower on measures for self‐reliance.[12] Additionally, when motives for behavior were studied, Millennial medical students scored higher on needs for affiliation and achievement, and lower on needs for power.[13]

DUTY HOURS: A GENERATION APART

As noted previously, the Millennial Generation is the first to train exclusively in the era of duty hours restrictions. The oldest members of this generation, those born in 1981, were entering medical school at the time of the first duty hours restrictions in 2003, and thus have always been educated, trained, and practiced in an environment in which work hours were an essential part of residency training.

Though duty hours have been an omnipresent part of training for the Millennial Generation, the clinical learning environment that they have known continues to evolve and change. Time for teaching, in particular, has been especially strained by work hour limits, and this has been noted by both attending physicians and trainees with each iteration of work hours limits. Attendings in one study estimated that time spent teaching on general medicine wards was reduced by about 20% following the 2003 limits, and over 40% of residents in a national survey reported that the 2011 limits had worsened the quality of education.[14, 15]

GENERATIONAL STRATEGIES FOR SUCCESS FOR HOSPITALIST TEACHING ATTENDINGS

The time limitations imposed by duty hours restrictions have compelled teaching rounds to become more patient‐care centered and often less learner‐centered, as providing patient care becomes the prime obligation for this limited time period. Millennial learners are accustomed to being the center of attention in educational environments, and changing the focus from education to patient care in the wards setting may be an abrupt transition for some learners.[6] However, hospitalists can help restructure teaching opportunities on the clinical wards by using teaching methods of the highest value to Millennial learners to promote learning under the conditions of duty hours limitations.

An approach using these methods was developed by reviewing recent literature as well as educational innovations that have been presented at scholarly meetings (eg, Sal Khan's presentation at the 2012 Association of American Medical Colleges meeting).[16] The authors discussed potential teaching techniques that were thought to be feasible to implement in the context of the current learning environment, with consideration of learning theories that would be most effective for the target group of learners (eg, adult learning theory).[17] A mnemonic was created to consolidate strategies thought to best represent these techniques. FUTURE is a group of teaching strategies that can be used by hospitalists to improve teaching rounds by Flipping the Wards, Using Documentation to Teach, Technology‐Enabled Teaching, Using Guerilla Teaching Tactics, Rainy Day Teaching, and Embedding Teaching Moments into Rounds.

Flipping the Wards

Millennial learners prefer novel methods of delivery that are interactive and technology based.[7, 8, 9] Lectures and slide‐based presentations frequently do not feature the degree of interactive engagement that they seek, and methods such as case‐based presentations and simulation may be more suitable. The Khan Academy is a not‐for‐profit organization that has been proposed as a model for future directions for medical education.[18] The academy's global classroom houses over 4000 videos and interactive modules to allow students to progress through topics on their own time.[19] Teaching rounds can be similarly flipped such that discussion and group work take place during rounds, whereas lectures, modules, and reading are reserved for individual study.[18]

As time pressures shift the focus of rounds exclusively toward discussion of patient‐care tasks, finding time for teaching outside of rounds can be emphasized to inspire self‐directed learning. When residents need time to tend to immediate patient‐care issues, hospitalist attendings could take the time to search for articles to send to team members. Rather than distributing paper copies that may be lost, cloud‐based data management systems such as Dropbox (Dropbox, San Francisco, CA) or Google Drive (Google Inc., Mountain View, CA) can be used to disseminate articles, which can be pulled up in real time on mobile devices during rounds and later deposited in shared folders accessible to all team members.[20, 21] The advantage of this approach is that it does not require all learners to be present on rounds, which may not be possible with duty hours.

Using Documentation to Teach

Trainees report that one of the most desirable attributes of clinical teachers is when they delineate their clinical reasoning and thought process.[22] Similarly, Millennial learners specifically desire to understand the rationale behind their teachers' actions.[6] Documentation in the medical chart or electronic health record (EHR) can be used to enhance teaching and role‐model clinical reasoning in a transparent and readily available fashion.

Billing requirements necessitate daily attending documentation in the form of an attestation. Hospitalist attendings can use attestations to model thought process and clinical synthesis in the daily assessment of a patient. For example, an attestation one‐liner can be used to concisely summarize the patient's course or highlight the most pressing issue of the day, rather than simply serve as a placeholder for billing or agree with above in reference to housestaff documentation. This practice can demonstrate to residents how to write a short snapshot of a patient's care in addition to improving communication.

Additionally, the EHR can be a useful platform to guide feedback for residents on their clinical performance. Millennial learners prefer specific, immediate feedback, and trainee documentation can serve as a template to show examples of good documentation and clinical reasoning as well as areas needing improvement.[5] These tangible examples of clinical performance are specific and understandable for trainees to guide their self‐learning and improvement.

Technology‐Enabled Teaching

Using technology wisely on the wards can improve efficiency while also taking advantage of teaching methods familiar to Millennial learners. Technology can be used in a positive manner to keep the focus on the patient and enhance teaching when time is limited on rounds. Smartphones and tablets have become an omnipresent part of the clinical environment.[23] Rather than distracting from rounds, these tools can be used to answer clinical questions in real time, thus directly linking the question to the patient's care.

The EHR is a powerful technological resource that is readily available to enhance teaching during a busy ward schedule. Clinical information is electronically accessible at all hours for both trainees and attendings, rather than only at prespecified times on daily rounds, and the Millennial Generation is accustomed to receiving and sharing information in this fashion.[24] Technology platforms that enable simultaneous sharing of information among multiple members of a team can also be used to assist in sharing clinical information in this manner. Health Insurance Portability and Accountability Act‐compliant group text‐messaging applications for smartphones and tablets such as GroupMD (GroupMD, San Francisco, CA) allow members of a team to connect through 1 portal.[25] These discussions can foster communication, inspire clinical questions, and model the practice of timely response to new information.

Using Guerilla Teaching Tactics

Though time may be limited by work hours, there are opportunities embedded into clinical practice to create teaching moments. The principle of guerilla marketing uses unconventional marketing tactics in everyday locales to aggressively promote a product.[26] Similarly, guerilla teaching might be employed on rounds to make teaching points about common patient care issues that occur at nearly every room, such as Foley catheters after seeing one at the beside or hand hygiene after leaving a room. These types of topics are familiar to trainees as well as hospitalist attendings and fulfill the relevance that Millennial learners seek by easily applying them to the patient at hand.

Memory triggers or checklists are another way to systematically introduce guerilla teaching on commonplace topics. The IBCD checklist, for example, has been successfully implemented at our institution to promote adherence to 4 quality measures.[27] IBCD, which stands for immunizations, bedsores, catheters, and deep vein thrombosis prophylaxis, is easily and quickly tacked on as a checklist item at the end of the problem list during a presentation. Similar checklists can serve as teaching points on quality and safety in inpatient care, as well as reminders to consider these issues for every patient.

Rainy Day Teaching

Hospitalist teaching attendings recognize that duty hours have shifted the preferred time for teaching away from busy admission periods such as postcall rounds.[28] The limited time spent reviewing new admissions is now often focused on patient care issues, with much of the discussion eliminated. However, hospitalist attendings can be proactive and save certain teaching moments for rainy day teaching, anticipating topics to introduce during lower census times. Additionally, attending access to the EHRs allows attendings to preview cases the residents have admitted during a call period and may facilitate planning teaching topics for future opportunities.[23]

Though teaching is an essential part of the hospitalist teaching attending role, the Millennial Generation's affinity for teamwork makes it possible to utilize additional team members as teachers for the group. This type of distribution of responsibility, or outsourcing of teaching, can be done in the form of a teaching or float resident. These individuals can be directed to search the literature to answer clinical questions the team may have during rounds and report back, which may influence decision making and patient care as well as provide education.[29]

Embedding Teaching Moments Into Rounds

Dr. Francis W. Peabody may have been addressing students many generations removed from Millennial learners when he implored them to remember that the secret of the care of the patient is in caring for the patient, but his maxim still rings true today.[30] This advice provides an important insight on how the focus can be kept on the patient by emphasizing physical examination and history‐taking skills, which engages learners in hands‐on activity and grounds that education in a patient‐based experience.[31] The Stanford 25 represents a successful project that refocuses the doctorpatient encounter on the bedside.[32] Using a Web‐based platform, this initiative instructs on 25 physical examination maneuvers, utilizing teaching methods that are familiar to Millennial learners and are patient focused.

In addition to emphasizing bedside teaching, smaller moments can be used during rounds to establish an expectation for learning. Hospitalist attendings can create a routine with daily teaching moments, such as an electrocardiogram or a daily Medical Knowledge Self‐Assessment Program question, a source of internal medicine board preparation material published by the American College of Physicians.[33] These are opportunities to inject a quick educational moment that is easily relatable to the patients on the team's service. Using teaching moments that are routine, accessible, and relevant to patient care can help shape Millennial learners' expectations that teaching be a daily occurrence interwoven within clinical care provided during rounds.

There are several limitations to our work. These strategies do not represent a systematic review, and there is little evidence to support that our approach is more effective than conventional teaching methods. Though we address hospitalists specifically, these strategies are likely suitable for all inpatient educators as they have not been well studied in specific groups. With the paucity of literature regarding learning preferences of Millennial medical trainees, it is difficult to know what methods may truly be most desirable in the wards setting, as many of the needs and learning styles considered in our approach are borrowed from other more traditional learning environments. It is unclear how adoptable our strategies may be for educators from other generations; these faculty may have different approaches to teaching. Further research is necessary to identify areas for faculty development in learning new techniques as well as compare the efficacy of our approach to conventional methods with respect to standardized educational outcomes such as In‐Training Exam performance, as well as patient outcomes.

ACCEPTING THE CHALLENGE

The landscape of clinical teaching has shifted considerably in recent years, in both the makeup of learners for whom educators are responsible for teaching as well as the challenges in teaching under the duty hours restrictions. Though rounds are more focused on patient care than in the past, it is possible to work within the current structure to promote successful learning with an approach that considers the preferences of today's learners.

A hospitalist's natural habitat, the busy inpatient wards, is a clinical learning environment with rich potential for innovation and excellence in teaching. The challenges in practicing hospital medicine closely parallel the challenges in teaching under the constraints of duty hours restrictions; both require a creative approach to problem solving and an affinity for teamwork. The hospitalist community is well suited to not only meet these challenges but become leaders in embracing how to teach effectively on today's wards. Maximizing interaction, embracing technology, and encouraging group‐based learning may represent the keys to a successful approach to teaching the Millennial Generation in a post‐duty hours world.

The implementation of resident duty hour restrictions has created a clinical learning environment on the wards quite different from any previous era. The Accreditation Council for Graduate Medical Education issued its first set of regulations limiting consecutive hours worked for residents in 2003, and further restricted hours in 2011.[1] These restrictions have had many implications across several aspects of patient care, education, and clinical training, particularly for hospitalists who spend the majority of their time in this setting and are heavily involved in undergraduate and graduate clinical education in academic medical centers.[2, 3]

As learning environments have been shifting, so has the composition of learners. The Millennial Generation (or Generation Y), defined as those born approximately between 1980 and 2000, represents those young clinicians currently filling the halls of medical schools and ranks of residency and fellowship programs.[4] Interestingly, the current system of restricted work hours is the only system under which the Millennial Generation has ever trained.

As this new generation represents the bulk of current trainees, hospitalist faculty must consider how their teaching styles can be adapted to accommodate these learners. For teaching hospitalists, an approach that considers the learning environment as affected by duty hours, as well as the preferences of Millennial learners, is necessary to educate the next generation of trainees. This article aimed to introduce potential strategies for hospitalists to better align teaching on the wards with the preferences of Millennial learners under the constraints of residency duty hours.

THE NEWEST GENERATION OF LEARNERS

The Millennial Generation has been well described.[4, 5, 6, 7, 8, 9, 10] Broadly speaking, this generation is thought to have been raised by attentive and involved parents, influencing relationships with educators and mentors; they respect authority but do not hesitate to question the relevance of assignments or decisions. Millennials prefer structured learning environments that focus heavily on interaction and experiential learning, and they value design and appearance in how material is presented.[7] Millennials also seek clear expectations and immediate feedback on their performance, and though they have sometimes been criticized for a strong sense of entitlement, they have a strong desire for collaboration and group‐based activity.[5, 6]

One of the most notable and defining characteristics of the Millennial Generation is an affinity for technology and innovation.[7, 8, 9] Web‐based learning tools that are interactive and engaging, such as blogs, podcasts, and streaming videos, are familiar and favored methods of learning. Millennials are skilled at finding information and providing answers and data, but may need help with synthesis and application.[5] They take pride in their ability to multitask, but can be prone to doing so inappropriately, particularly with technology that is readily available.[11]

Few studies have explored characteristics of the Millennial Generation specific to medical trainees. One study examined personality characteristics of Millennial medical students compared to Generation X students (those born between 1965 and 1980) at a single institution. Millennial students scored higher on warmth, reasoning, emotional stability, rule consciousness, social boldness, sensitivity, apprehension, openness to change, and perfectionism compared to Generation X students. They scored lower on measures of self‐reliance.[12] Additionally, when motives for behavior were studied, Millennial medical students scored higher on needs for affiliation and achievement, and lower on needs for power.[13]

DUTY HOURS: A GENERATION APART

As noted previously, the Millennial Generation is the first to train exclusively in the era of duty hours restrictions. The oldest members of this generation, those born in 1981, were entering medical school at the time of the first duty hours restrictions in 2003, and thus have trained and practiced only in an environment in which work hours limits were an integral part of residency training.

Though duty hours have been an omnipresent part of training for the Millennial Generation, the clinical learning environment that they have known continues to evolve and change. Time for teaching, in particular, has been especially strained by work hour limits, and this has been noted by both attending physicians and trainees with each iteration of work hours limits. Attendings in one study estimated that time spent teaching on general medicine wards was reduced by about 20% following the 2003 limits, and over 40% of residents in a national survey reported that the 2011 limits had worsened the quality of education.[14, 15]

GENERATIONAL STRATEGIES FOR SUCCESS FOR HOSPITALIST TEACHING ATTENDINGS

The time limitations imposed by duty hours restrictions have compelled teaching rounds to become more patient‐care centered and often less learner‐centered, as providing patient care becomes the prime obligation for this limited time period. Millennial learners are accustomed to being the center of attention in educational environments, and changing the focus from education to patient care in the wards setting may be an abrupt transition for some learners.[6] However, hospitalists can help restructure teaching opportunities on the clinical wards by using teaching methods of the highest value to Millennial learners to promote learning under the conditions of duty hours limitations.

An approach using these methods was developed by reviewing recent literature as well as educational innovations that have been presented at scholarly meetings (eg, Sal Khan's presentation at the 2012 Association of American Medical Colleges meeting).[16] The authors discussed potential teaching techniques that were thought to be feasible to implement in the context of the current learning environment, with consideration of learning theories that would be most effective for the target group of learners (eg, adult learning theory).[17] A mnemonic was created to consolidate strategies thought to best represent these techniques. FUTURE is a group of teaching strategies that can be used by hospitalists to improve teaching rounds by Flipping the Wards, Using Documentation to Teach, Technology‐Enabled Teaching, Using Guerilla Teaching Tactics, Rainy Day Teaching, and Embedding Teaching Moments into Rounds.

Flipping the Wards

Millennial learners prefer novel methods of delivery that are interactive and technology based.[7, 8, 9] Lectures and slide‐based presentations frequently do not feature the degree of interactive engagement that they seek, and methods such as case‐based presentations and simulation may be more suitable. The Khan Academy is a not‐for‐profit organization that has been proposed as a model for future directions for medical education.[18] The academy's global classroom houses over 4000 videos and interactive modules to allow students to progress through topics on their own time.[19] Teaching rounds can be similarly flipped such that discussion and group work take place during rounds, whereas lectures, modules, and reading are reserved for individual study.[18]

As time pressures shift the focus of rounds exclusively toward discussion of patient‐care tasks, finding time for teaching outside of rounds can be emphasized to inspire self‐directed learning. When residents need time to tend to immediate patient‐care issues, hospitalist attendings could take the time to search for articles to send to team members. Rather than distributing paper copies that may be lost, cloud‐based data management systems such as Dropbox (Dropbox, San Francisco, CA) or Google Drive (Google Inc., Mountain View, CA) can be used to disseminate articles, which can be pulled up in real time on mobile devices during rounds and later deposited in shared folders accessible to all team members.[20, 21] The advantage of this approach is that it does not require all learners to be present on rounds, which may not be possible with duty hours.

Using Documentation to Teach

Trainees report that one of the most desirable attributes of clinical teachers is the willingness to delineate their clinical reasoning and thought process.[22] Similarly, Millennial learners specifically desire to understand the rationale behind their teachers' actions.[6] Documentation in the medical chart or electronic health record (EHR) can be used to enhance teaching and role‐model clinical reasoning in a transparent and readily available fashion.

Billing requirements necessitate daily attending documentation in the form of an attestation. Hospitalist attendings can use attestations to model thought process and clinical synthesis in the daily assessment of a patient. For example, an attestation one‐liner can be used to concisely summarize the patient's course or highlight the most pressing issue of the day, rather than simply serve as a placeholder for billing or "agree with above" in reference to housestaff documentation. This practice can demonstrate to residents how to write a short snapshot of a patient's care in addition to improving communication.

Additionally, the EHR can be a useful platform to guide feedback for residents on their clinical performance. Millennial learners prefer specific, immediate feedback, and trainee documentation can serve as a template to show examples of good documentation and clinical reasoning as well as areas needing improvement.[5] These tangible examples of clinical performance are specific and understandable for trainees to guide their self‐learning and improvement.

Technology‐Enabled Teaching

Using technology wisely on the wards can improve efficiency while also taking advantage of teaching methods familiar to Millennial learners. Technology can be used in a positive manner to keep the focus on the patient and enhance teaching when time is limited on rounds. Smartphones and tablets have become an omnipresent part of the clinical environment.[23] Rather than distracting from rounds, these tools can be used to answer clinical questions in real time, thus directly linking the question to the patient's care.

The EHR is a powerful technological resource that is readily available to enhance teaching during a busy ward schedule. Clinical information is electronically accessible at all hours for both trainees and attendings, rather than only at prespecified times on daily rounds, and the Millennial Generation is accustomed to receiving and sharing information in this fashion.[24] Technology platforms that enable simultaneous sharing of information among multiple members of a team can also be used to assist in sharing clinical information in this manner. Health Insurance Portability and Accountability Act‐compliant group text‐messaging applications for smartphones and tablets such as GroupMD (GroupMD, San Francisco, CA) allow members of a team to connect through 1 portal.[25] These discussions can foster communication, inspire clinical questions, and model the practice of timely response to new information.

Using Guerilla Teaching Tactics

Though time may be limited by work hours, there are opportunities embedded into clinical practice to create teaching moments. The principle of guerilla marketing uses unconventional marketing tactics in everyday locales to aggressively promote a product.[26] Similarly, guerilla teaching might be employed on rounds to make teaching points about common patient care issues that arise in nearly every room, such as Foley catheters after seeing one at the bedside or hand hygiene after leaving a room. These types of topics are familiar to trainees as well as hospitalist attendings and fulfill the relevance that Millennial learners seek because they apply directly to the patient at hand.

Memory triggers or checklists are another way to systematically introduce guerilla teaching on commonplace topics. The IBCD checklist, for example, has been successfully implemented at our institution to promote adherence to 4 quality measures.[27] IBCD, which stands for immunizations, bedsores, catheters, and deep vein thrombosis prophylaxis, is easily and quickly tacked on as a checklist item at the end of the problem list during a presentation. Similar checklists can serve as teaching points on quality and safety in inpatient care, as well as reminders to consider these issues for every patient.

Rainy Day Teaching

Hospitalist teaching attendings recognize that duty hours have shifted the preferred time for teaching away from busy admission periods such as postcall rounds.[28] The limited time spent reviewing new admissions is now often focused on patient care issues, with much of the discussion eliminated. However, hospitalist attendings can be proactive and save certain teaching moments for rainy day teaching, anticipating topics to introduce during lower census times. Additionally, access to the EHR allows attendings to preview cases the residents have admitted during a call period, which may facilitate planning teaching topics for future opportunities.[23]

Though teaching is an essential part of the hospitalist teaching attending role, the Millennial Generation's affinity for teamwork makes it possible to utilize additional team members as teachers for the group. This type of distribution of responsibility, or outsourcing of teaching, can be done in the form of a teaching or float resident. These individuals can be directed to search the literature to answer clinical questions the team may have during rounds and report back, which may influence decision making and patient care as well as provide education.[29]

Embedding Teaching Moments Into Rounds

Dr. Francis W. Peabody may have been addressing students many generations removed from Millennial learners when he implored them to remember that "the secret of the care of the patient is in caring for the patient," but his maxim still rings true today.[30] This advice provides an important insight on how the focus can be kept on the patient by emphasizing physical examination and history‐taking skills, which engages learners in hands‐on activity and grounds that education in a patient‐based experience.[31] The Stanford 25 represents a successful project that refocuses the doctor–patient encounter on the bedside.[32] Using a Web‐based platform, this initiative instructs on 25 physical examination maneuvers, utilizing teaching methods that are familiar to Millennial learners and are patient focused.

In addition to emphasizing bedside teaching, smaller moments can be used during rounds to establish an expectation for learning. Hospitalist attendings can create a routine with daily teaching moments, such as an electrocardiogram or a daily Medical Knowledge Self‐Assessment Program question, a source of internal medicine board preparation material published by the American College of Physicians.[33] These are opportunities to inject a quick educational moment that is easily relatable to the patients on the team's service. Using teaching moments that are routine, accessible, and relevant to patient care can help shape Millennial learners' expectations that teaching be a daily occurrence interwoven within clinical care provided during rounds.

There are several limitations to our work. These strategies do not represent a systematic review, and there is little evidence that our approach is more effective than conventional teaching methods. Though we address hospitalists specifically, these strategies are likely suitable for all inpatient educators, as they have not been well studied in specific groups. With the paucity of literature regarding the learning preferences of Millennial medical trainees, it is difficult to know which methods may truly be most desirable in the wards setting, as many of the needs and learning styles considered in our approach are borrowed from other, more traditional learning environments. It is unclear how adoptable our strategies may be for educators from other generations; these faculty may have different approaches to teaching. Further research is necessary to identify areas for faculty development in learning new techniques, as well as to compare the efficacy of our approach to conventional methods with respect to standardized educational outcomes, such as In‐Training Exam performance, and patient outcomes.

ACCEPTING THE CHALLENGE

The landscape of clinical teaching has shifted considerably in recent years, in both the makeup of learners for whom educators are responsible for teaching as well as the challenges in teaching under the duty hours restrictions. Though rounds are more focused on patient care than in the past, it is possible to work within the current structure to promote successful learning with an approach that considers the preferences of today's learners.

A hospitalist's natural habitat, the busy inpatient wards, is a clinical learning environment with rich potential for innovation and excellence in teaching. The challenges in practicing hospital medicine closely parallel the challenges in teaching under the constraints of duty hours restrictions; both require a creative approach to problem solving and an affinity for teamwork. The hospitalist community is well suited to not only meet these challenges but become leaders in embracing how to teach effectively on today's wards. Maximizing interaction, embracing technology, and encouraging group‐based learning may represent the keys to a successful approach to teaching the Millennial Generation in a post‐duty hours world.

References
  1. Nasca TJ, Day SH, Amis ES; ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363(2):e3.
  2. Wachter RM, Goldman L. The emerging role of "hospitalists" in the American health care system. N Engl J Med. 1996;335(7):514–517.
  3. Liston BW, O'Dorisio N, Walker C, et al. Hospital medicine in the internal medicine clerkship: results from a national survey. J Hosp Med. 2012;7(7):557–561.
  4. Howe N, Strauss W. Millennials Rising: The Next Great Generation. New York, NY: Random House/Vintage Books; 2000.
  5. Eckleberry‐Hunt J, Tucciarone J. The challenges and opportunities of teaching "Generation Y." J Grad Med Educ. 2011;3(4):458–461.
  6. Twenge JM. Generational changes and their impact in the classroom: teaching Generation Me. Med Educ. 2009;43(5):398–405.
  7. Roberts DH, Newman LR, Schwarzstein RM. Twelve tips for facilitating Millennials' learning. Med Teach. 2012;34(4):274–278.
  8. Pew Research Center. Millennials: a portrait of generation next. Available at: http://pewsocialtrends.org/files/2010/10/millennials-confident-connected-open-to-change.pdf. Accessed February 28, 2013.
  9. Mohr NM, Moreno‐Walton L, Mills AM, et al. Generational influences in academic emergency medicine: teaching and learning, mentoring, and technology (part I). Acad Emerg Med. 2011;18(2):190–199.
  10. Mohr NM, Moreno‐Walton L, Mills AM, et al. Generational influences in academic emergency medicine: structure, function, and culture (part II). Acad Emerg Med. 2011;18(2):200–207.
  11. Katz‐Sidlow RJ, Ludwig A, Miller S, Sidlow R. Smartphone use during inpatient attending rounds: prevalence, patterns, and potential for distraction. J Hosp Med. 2012;8:595–599.
  12. Borges NJ, Manuel RS, Elam CL, et al. Comparing millennial and generation X medical students at one medical school. Acad Med. 2006;81(6):571–576.
  13. Borges NJ, Manuel RS, Elam CL, Jones BJ. Differences in motives between Millennial and Generation X students. Med Educ. 2010;44(6):570–576.
  14. Arora V, Meltzer D. Effect of ACGME duty hours on attending physician teaching and satisfaction. Arch Intern Med. 2008;168(11):1226–1227.
  15. Drolet BC, Christopher DA, Fischer SA. Residents' response to duty‐hours regulations—a follow‐up national survey. N Engl J Med. 2012;366(24):e35.
  16. Khan S. Innovation arc: new approaches. Presented at: Association of American Medical Colleges National Meeting; November 2012; San Francisco, CA.
  17. Spencer JA, Jordan RK. Learner‐centered approaches in medical education. BMJ. 1999;318:1280–1283.
  18. Prober CG, Heath C. Lecture halls without lectures—a proposal for medical education. N Engl J Med. 2012;366(18):1657–1659.
  19. The Khan Academy. Available at: https://www.khanacademy.org/. Accessed March 4, 2013.
  20. Dropbox. Dropbox Inc. Available at: https://www.dropbox.com/. Accessed April 19, 2013.
  21. Google Drive. Google Inc. Available at: https://drive.google.com/. Accessed April 19, 2013.
  22. Sutkin G, Wagner E, Harris I, et al. What makes a good clinical teacher in medicine? A review of the literature. Acad Med. 2008;83(5):452–466.
  23. Baumgart DC. Smartphones in clinical practice, medical education, and research. Arch Intern Med. 2011;171(14):1294–1296.
  24. Martin SK, Tulla K, Meltzer DO, et al. Attending use of the electronic health record (EHR) and implications for housestaff supervision. Presented at: Midwest Society of General Internal Medicine Regional Meeting; September 2012; Chicago, IL.
  25. GroupMD. GroupMD Inc. Available at: http://group.md. Accessed April 19, 2013.
  26. Levinson J. Guerilla Marketing: Secrets for Making Big Profits From Your Small Business. Boston, MA: Houghton Mifflin; 1984.
  27. Aspesi A, Kauffmann GE, Davis AM, et al. IBCD: development and testing of a checklist to improve quality of care for hospitalized general medical patients. Jt Comm J Qual Patient Saf. 2013;39(4):147–156.
  28. Cohen S, Sarkar U. Ice cream rounds. Acad Med. 2013;88(1):66.
  29. Lucas BP, Evans AT, Reilly BM, et al. The impact of evidence on physicians' inpatient treatment decisions. J Gen Intern Med. 2004;19(5 pt 1):402–409.
  30. Peabody FW. Landmark article March 19, 1927: the care of the patient. By Francis W. Peabody. JAMA. 1984;252(6):813–818.
  31. Gonzalo JD, Heist BS, Duffy BL, et al. The art of bedside rounds: a multi‐center qualitative study of strategies used by experienced bedside teachers. J Gen Intern Med. 2013;28(3):412–420.
  32. Stanford University School of Medicine. Stanford Medicine 25. Available at: http://stanfordmedicine25.stanford.edu/. Accessed February 28, 2013.
  33. Medical Knowledge Self‐Assessment Program 16. The American College of Physicians. Available at: https://mksap.acponline.org. Accessed April 19, 2013.
Issue
Journal of Hospital Medicine - 8(7)
Page Number
409-413
Display Headline
FUTURE: New strategies for hospitalists to overcome challenges in teaching on today's wards
Article Source
© 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Shannon Martin, MD, 5841 S. Maryland Avenue MC 5000, W307, Chicago, IL 60637; Telephone: 773-702-2604; Fax: 773-795-7398; E-mail: smartin1@medicine.bsd.uchicago.edu

Handoff CEX

Article Type
Changed
Mon, 05/22/2017 - 18:12
Display Headline
Development of a handoff evaluation tool for shift‐to‐shift physician handoffs: The handoff CEX

Transfers among trainee physicians within the hospital typically occur at least twice a day and have been increasing among trainees as work hours have declined.[1] The 2011 Accreditation Council for Graduate Medical Education (ACGME) guidelines,[2] which restrict intern working hours to 16 hours from a previous maximum of 30, have likely increased the frequency of physician trainee handoffs even further. Similarly, transfers among hospitalist attendings occur at least twice a day, given typical shifts of 8 to 12 hours.

Given the frequency of transfers, and the potential for harm generated by failed transitions,[3, 4, 5, 6] the end‐of‐shift written and verbal handoffs have assumed increasingly greater importance in hospital care among both trainees and hospitalist attendings.

The ACGME now requires that programs assess the competency of trainees in handoff communication.[2] Yet, there are few tools for assessing the quality of sign‐out communication. Those that exist primarily focus on the written sign‐out, and are rarely validated.[7, 8, 9, 10, 11, 12] Furthermore, it is uncertain whether such assessments must be done by supervisors or whether peers can participate in the evaluation. In this prospective multi‐institutional study we assess the performance characteristics of a verbal sign‐out evaluation tool for internal medicine housestaff and hospitalist attendings, and examine whether it can be used by peers as well as by external evaluators. This tool has previously been found to effectively discriminate between experienced and inexperienced nurses conducting nursing handoffs.[13]

METHODS

Tool Design and Measures

The Handoff CEX (clinical evaluation exercise) is a structured assessment based on the format of the mini‐CEX, an instrument used to assess the quality of history and physical examination by trainees for which validation studies have previously been conducted.[14, 15, 16, 17] We developed the tool based on themes we identified from our own expertise,[1, 5, 6, 8, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] the ACGME core competencies for trainees,[2] and the literature to maximize content validity. First, standardization has numerous demonstrable benefits for safety in general and handoffs in particular.[30, 31, 32] Consequently we created a domain for organization in which standardization was a characteristic of high performance.

Second, there is evidence that people engaged in conversation routinely overestimate peer comprehension,[27] and that explicit strategies to combat this overestimation, such as confirming understanding, explicitly assigning tasks rather than using open‐ended language, and using concrete language, are effective.[33] Accordingly we created a domain for communication skills, which is also an ACGME competency.

Third, although there were no formal guidelines for sign‐out content when we developed this tool, our own research had demonstrated that the content elements most often missing, and felt to be important by stakeholders, were related to clinical condition and explicating thinking processes,[5, 6] so we created a domain for content that highlighted these areas and met the ACGME competency of medical knowledge. In accordance with standards for evaluation of learners, we incorporated a domain for judgment to identify where trainees were in the RIME spectrum of reporter, interpreter, manager, and educator.

Next, we added a section for professionalism in accordance with the ACGME core competencies of professionalism and patient care.[34] To avoid the disinclination of peers to label each other unprofessional, we labeled the professionalism domain "patient‐focused" on the tool.

Finally, we included a domain for setting because of an extensive literature demonstrating increased handoff failures in noisy or interruptive settings.[35, 36, 37] We then revised the tool slightly based on our experiences among nurses and students.[13, 38] The final tool included the 6 domains described above and an assessment of overall competency. Each domain was scored on a 9‐point scale and included descriptive anchors at high and low ends of performance. We further divided the scale into 3 main sections: unsatisfactory (score 1–3), satisfactory (4–6), and superior (7–9). We designed 2 tools, 1 to assess the person providing the handoff and 1 to assess the handoff recipient, each with its own descriptive anchors. The recipient tool did not include a content domain (see Supporting Information, Appendix 1, in the online version of this article).
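To make the structure of the instrument concrete, a minimal sketch follows, assuming nothing beyond what is described above; the domain names, function names, and layout are illustrative, not the study's actual materials.

```python
# Illustrative encoding of the Handoff CEX structure described above.
# Names here are hypothetical, not taken from the study's own forms.

PROVIDER_DOMAINS = ["setting", "organization", "communication",
                    "content", "judgment", "professionalism", "overall"]
# Per the text, the recipient form omits the content domain.
RECIPIENT_DOMAINS = [d for d in PROVIDER_DOMAINS if d != "content"]


def performance_band(score: int) -> str:
    """Map a 9-point domain score to its labeled section of the scale."""
    if not 1 <= score <= 9:
        raise ValueError("Handoff CEX domain scores range from 1 to 9")
    if score <= 3:
        return "unsatisfactory"
    if score <= 6:
        return "satisfactory"
    return "superior"


if __name__ == "__main__":
    print(performance_band(8))  # superior
```

Keeping the recipient form a strict subset of the provider form, as here, mirrors the design choice described above and keeps paired provider/recipient analyses straightforward.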

Setting and Subjects

We tested the tool in 2 different urban academic medical centers: the University of Chicago Medicine (UCM) and Yale‐New Haven Hospital (Yale). At UCM, we tested the tool among hospitalists, nurse practitioners, and physician assistants during the Monday and Tuesday morning and Friday evening sign‐out sessions. At Yale, we tested the tool among housestaff during the evening sign‐out session from the primary team to the on‐call covering team.

The UCM is a 550‐bed urban academic medical center in which the nonteaching hospitalist service cares for patients with liver disease or end‐stage renal or lung disease awaiting transplant, and a small fraction of general medicine and oncology patients when the housestaff service exceeds its cap. No formal training on sign‐out is provided to attending or midlevel providers. The nonteaching hospitalist service operates as a separate service from the housestaff service and consists of 38 hospitalist clinicians (hospitalist attendings, nurse practitioners, and physician assistants). There are 2 handoffs each day. In the morning the departing night hospitalist hands off to the incoming daytime hospitalist or midlevel provider. These handoffs occur at 7:30 am in a dedicated room. In the evening the daytime hospitalist or midlevel provider hands off to an incoming night hospitalist. This handoff occurs at 5:30 pm or 7:30 pm in a dedicated location. The written sign‐out is maintained on a Microsoft Word (Microsoft Corp., Redmond, WA) document on a password‐protected server and updated daily.

Yale is a 946‐bed urban academic medical center with a large internal medicine training program. Formal sign‐out education that covers the main domains of the tool is provided to new interns during the first 3 months of the year,[19] and a templated electronic medical record‐based electronic written handoff report is produced by the housestaff for all patients.[22] Approximately half of inpatient medicine patients are cared for by housestaff teams, which are entirely separate from the hospitalist service. Housestaff sign‐out occurs between 4 pm and 7 pm every night. At a minimum, the departing intern signs out to the incoming intern; this handoff is typically supervised by at least 1 second‐ or third‐year resident. All patients are signed out verbally; in addition, the written handoff report is provided to the incoming team. Most handoffs occur in a quiet charting room.

Data Collection

Data collection at UCM occurred between March and December 2010 on 3 days of each week: Mondays, Tuesdays, and Fridays. On Mondays and Tuesdays the morning handoffs were observed; on Fridays the evening handoffs were observed. Data collection at Yale occurred between March and May 2011. Only evening handoffs from the primary team to the overnight coverage were observed. At both sites, participants provided verbal informed consent prior to data collection. At the time of an eligible sign‐out session, a research assistant (D.R. at Yale, P.S. at UCM) provided the evaluation tools to all members of the incoming and outgoing teams, and observed the sign‐out session himself. Each person providing a handoff was asked to evaluate the recipient of the handoff; each person receiving a handoff was asked to evaluate the provider of the handoff. In addition, the trained third‐party observer (D.R., P.S.) evaluated both the provider and recipient of the handoff. The external evaluators were trained in principles of effective communication and the use of the tool, with specific review of anchors at each end of each domain. One evaluator had a DO degree and was completing an MPH degree. The second evaluator was an experienced clinical research assistant whose training consisted of supervised observation of 10 handoffs by a physician investigator. At Yale, if a resident was present, she or he was also asked to evaluate both the provider and recipient of the handoff. Consequently, every sign‐out session included at least 2 evaluations of each participant, 1 by a peer evaluator and 1 by a consistent external evaluator who did not know the patients. At Yale, many sign‐outs also included a third evaluation by a resident supervisor.

The study was approved by the institutional review boards at both UCM and Yale.

Statistical Analysis

We obtained mean, median, and interquartile range of scores for each subdomain of the tool as well as the overall assessment of handoff quality. We assessed convergent construct validity by assessing performance of the tool in different contexts. To do so, we determined whether scores differed by type of participant (provider or recipient), by site, by training level of evaluatee, or by type of evaluator (external, resident supervisor, or peer) by using Wilcoxon rank sum tests and Kruskal‐Wallis tests. For the assessment of differences in ratings by training level, we used evaluations of sign‐out providers only, because the 2 sites differed in scores for recipients. We also assessed construct validity by using Spearman rank correlation coefficients to describe the internal consistency of the tool in terms of the correlation between domains of the tool, and we conducted an exploratory factor analysis to gain insight into whether the subdomains of the tool were measuring the same construct. In conducting this analysis, we restricted the dataset to evaluations of sign‐out providers only, and used a principal components estimation method, a promax rotation, and squared multiple correlation communality priors. Finally, we conducted some preliminary studies of reliability by testing whether different types of evaluators provided similar assessments. We calculated a weighted kappa using Fleiss‐Cohen weights for external versus peer scores and again for supervising resident versus peer scores (Yale only). We were not able to assess test‐retest reliability by nature of the sign‐out process. Statistical significance was defined by a P value ≤0.05, and analyses were performed using SAS 9.2 (SAS Institute, Cary, NC).
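For readers who wish to reproduce analyses of this kind, the sketch below is a rough Python analogue of the tests named above; the study itself used SAS 9.2, the data here are synthetic placeholders, and the column names are illustrative. Note that Fleiss‐Cohen weights correspond to the quadratic weighting option in scikit-learn.

```python
# Rough Python analogue of the analyses above, on synthetic data.
import numpy as np
import pandas as pd
from scipy.stats import kruskal, ranksums, spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "role": rng.choice(["provider", "recipient"], 300),
    "level": rng.choice(["intern", "resident", "hospitalist", "np_pa"], 300),
    "judgment": rng.integers(1, 10, 300),   # 9-point scale scores
    "overall": rng.integers(1, 10, 300),
})

# Wilcoxon rank sum test: do provider and recipient overall scores differ?
_, p_role = ranksums(df.loc[df.role == "provider", "overall"],
                     df.loc[df.role == "recipient", "overall"])

# Kruskal-Wallis test: overall scores across training levels (providers only).
providers = df[df.role == "provider"]
_, p_level = kruskal(*[g["overall"].to_numpy()
                       for _, g in providers.groupby("level")])

# Spearman rank correlation between subdomains (internal consistency).
rho, _ = spearmanr(df["judgment"], df["overall"])

# Weighted kappa for paired external vs. peer ratings of the same handoffs;
# Fleiss-Cohen weights are the quadratic weights.
external = rng.integers(1, 10, 80)
peer = np.clip(external + rng.integers(-2, 3, 80), 1, 9)
kappa = cohen_kappa_score(external, peer, weights="quadratic")

print(p_role, p_level, rho, kappa)
```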

RESULTS

A total of 149 handoff sessions were observed: 89 at UCM and 60 at Yale. Each site conducted a similar total number of evaluations: 336 at UCM, 337 at Yale. These sessions involved 97 unique individuals, 34 at UCM and 63 at Yale. Overall scores were high at both sites, but a wide range of scores was applied (Table 1).

Median, Mean, and Range of Handoff CEX Scores in Each Domain, Providers and Recipients

| Domain | Provider (N=343), Median (IQR) | Provider, Mean (SD) | Provider, Range | Recipient (N=330), Median (IQR) | Recipient, Mean (SD) | Recipient, Range | P Value |
| Setting | 7 (6–9) | 7.0 (1.7) | 2–9 | 7 (6–9) | 7.3 (1.6) | 2–9 | 0.05 |
| Organization | 7 (6–8) | 7.2 (1.5) | 2–9 | 8 (6–9) | 7.4 (1.4) | 2–9 | 0.07 |
| Communication | 7 (6–9) | 7.2 (1.6) | 1–9 | 8 (7–9) | 7.4 (1.5) | 2–9 | 0.22 |
| Content | 7 (6–8) | 7.0 (1.6) | 2–9 | N/A | N/A | N/A | N/A |
| Judgment | 8 (6–8) | 7.3 (1.4) | 3–9 | 8 (7–9) | 7.5 (1.4) | 3–9 | 0.06 |
| Professionalism | 8 (7–9) | 7.4 (1.5) | 2–9 | 8 (7–9) | 7.6 (1.4) | 3–9 | 0.23 |
| Overall | 7 (6–8) | 7.1 (1.5) | 2–9 | 7 (6–8) | 7.4 (1.4) | 2–9 | 0.02 |

NOTE: Abbreviations: IQR, interquartile range; SD, standard deviation; N/A, not applicable.

Handoff Providers

A total of 343 evaluations of handoff providers were completed regarding 67 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff provider evaluation tool was professionalism (median: 8; interquartile range [IQR]: 7–9). The lowest rated domain was content (median: 7; IQR: 6–8) (Table 1).

Handoff Recipients

A total of 330 evaluations of handoff recipients were completed regarding 58 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff recipient evaluation tool was professionalism, with a median of 8 (IQR: 7–9). The lowest rated domain was setting, with a median score of 7 (IQR: 6–9) (Table 1).

Validity Testing

Comparing provider scores to recipient scores, recipients received significantly higher scores for overall assessment (Table 1). Scores at UCM and Yale were similar in all domains for providers but were slightly lower at UCM in several domains for recipients (see Supporting Information, Appendix 2, in the online version of this article). Scores did not differ significantly by training level (Table 2). Third‐party external evaluators consistently gave lower marks for the same handoff than peer evaluators did (Table 3).

Handoff CEX Scores by Training Level, Providers Only

| Domain | NP/PA (N=33) | Subintern or Intern (N=170) | Resident (N=44) | Hospitalist (N=95) | P Value |
| Setting | 7 (2–9) | 7 (3–9) | 7 (4–9) | 7 (2–9) | 0.89 |
| Organization | 8 (4–9) | 7 (2–9) | 7 (4–9) | 8 (3–9) | 0.11 |
| Communication | 8 (4–9) | 7 (2–9) | 7 (4–9) | 8 (1–9) | 0.72 |
| Content | 7 (3–9) | 7 (2–9) | 7 (4–9) | 7 (2–9) | 0.92 |
| Judgment | 8 (5–9) | 7 (3–9) | 8 (4–9) | 8 (4–9) | 0.09 |
| Professionalism | 8 (4–9) | 7 (2–9) | 8 (3–9) | 8 (4–9) | 0.82 |
| Overall | 7 (3–9) | 7 (2–9) | 8 (4–9) | 7 (2–9) | 0.28 |

NOTE: Values are median (range). Abbreviations: NP/PA, nurse practitioner/physician assistant.
Handoff CEX Scores by Peer Versus External Evaluators

| Domain | Provider: Peer (N=152) | Provider: Resident Supervisor (N=43) | Provider: External (N=147) | P Value | Recipient: Peer (N=145) | Recipient: Resident Supervisor (N=43) | Recipient: External (N=142) | P Value |
| Setting | 8 (3–9) | 7 (3–9) | 7 (2–9) | 0.02 | 8 (2–9) | 7 (3–9) | 7 (2–9) | <0.001 |
| Organization | 8 (3–9) | 8 (3–9) | 7 (2–9) | 0.18 | 8 (3–9) | 8 (6–9) | 7 (2–9) | <0.001 |
| Communication | 8 (3–9) | 8 (3–9) | 7 (1–9) | <0.001 | 8 (3–9) | 8 (4–9) | 7 (2–9) | <0.001 |
| Content | 8 (3–9) | 8 (2–9) | 7 (2–9) | <0.001 | N/A | N/A | N/A | N/A |
| Judgment | 8 (4–9) | 8 (3–9) | 7 (3–9) | <0.001 | 8 (3–9) | 8 (4–9) | 7 (3–9) | <0.001 |
| Professionalism | 8 (3–9) | 8 (5–9) | 7 (2–9) | 0.02 | 8 (3–9) | 8 (6–9) | 7 (3–9) | <0.001 |
| Overall | 8 (3–9) | 8 (3–9) | 7 (2–9) | 0.001 | 8 (2–9) | 8 (4–9) | 7 (2–9) | <0.001 |

NOTE: Values are median (range). Abbreviation: N/A, not applicable.

Spearman rank correlation coefficients among the CEX subdomains for provider scores ranged from 0.71 to 0.86, except for setting (Table 4). Setting was less well correlated with the other subdomains, with correlation coefficients ranging from 0.39 to 0.41. Correlations between individual domains and the overall rating ranged from 0.80 to 0.86, except setting, which had a correlation of 0.55. Every correlation was significant at P<0.001. Correlation coefficients for recipient scores were very similar to those for provider scores (see Supporting Information, Appendix 3, in the online version of this article).

Spearman Correlation Coefficients, Provider Evaluations (N=342)

| | Setting | Organization | Communication | Content | Judgment | Professionalism |
| Setting | 1.00 | 0.40 | 0.40 | 0.39 | 0.39 | 0.41 |
| Organization | 0.40 | 1.00 | 0.80 | 0.71 | 0.77 | 0.73 |
| Communication | 0.40 | 0.80 | 1.00 | 0.79 | 0.82 | 0.77 |
| Content | 0.39 | 0.71 | 0.79 | 1.00 | 0.80 | 0.74 |
| Judgment | 0.39 | 0.77 | 0.82 | 0.80 | 1.00 | 0.78 |
| Professionalism | 0.41 | 0.73 | 0.77 | 0.74 | 0.78 | 1.00 |
| Overall | 0.55 | 0.80 | 0.84 | 0.83 | 0.86 | 0.82 |

NOTE: All P values <0.0001.

We analyzed 343 provider evaluations in the factor analysis; there were 6 missing values. The scree plot of eigenvalues did not support more than 1 factor; however, the rotated factor pattern (standardized regression coefficients for the first factor) and the final communality estimates showed the setting component yielding smaller values than the other scale components (see Supporting Information, Appendix 4, in the online version of this article).
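As a rough illustration only, an exploratory factor analysis of this shape can be approximated in Python with the third-party factor_analyzer package, as sketched below on synthetic data. This sketch does not reproduce SAS's squared multiple correlation communality priors, so loadings would differ in detail from those reported.

```python
# Illustrative exploratory factor analysis of the six provider subdomains,
# assuming the third-party factor_analyzer package; data are synthetic.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

domains = ["setting", "organization", "communication",
           "content", "judgment", "professionalism"]
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.integers(1, 10, (336, 6)), columns=domains)

# Principal-components extraction with a promax rotation, as in the text.
fa = FactorAnalyzer(n_factors=2, method="principal", rotation="promax")
fa.fit(X)

eigenvalues, _ = fa.get_eigenvalues()          # for a scree plot
loadings = pd.DataFrame(fa.loadings_, index=domains,
                        columns=["Factor1", "Factor2"])
print(eigenvalues)
print(loadings.round(2))                       # rotated factor pattern
```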

Reliability Testing

Weighted kappa scores for provider evaluations ranged from 0.28 (95% confidence interval [CI]: 0.01, 0.56) for setting to 0.59 (95% CI: 0.38, 0.80) for organization, and were generally higher for resident versus peer comparisons than for external versus peer comparisons. Weighted kappa scores for recipient evaluations were slightly lower for external versus peer evaluations, but agreement was no better than chance for resident versus peer evaluations (Table 5).

Weighted Kappa Scores

| Domain | Provider: External vs Peer, N=144 (95% CI) | Provider: Resident vs Peer, N=42 (95% CI) | Recipient: External vs Peer, N=134 (95% CI) | Recipient: Resident vs Peer, N=43 (95% CI) |
| Setting | 0.39 (0.24, 0.54) | 0.28 (0.01, 0.56) | 0.34 (0.20, 0.48) | 0.48 (0.27, 0.69) |
| Organization | 0.43 (0.29, 0.58) | 0.59 (0.39, 0.80) | 0.39 (0.22, 0.55) | 0.03 (-0.23, 0.29) |
| Communication | 0.34 (0.19, 0.49) | 0.52 (0.37, 0.68) | 0.36 (0.22, 0.51) | 0.02 (-0.18, 0.23) |
| Content | 0.38 (0.25, 0.51) | 0.53 (0.27, 0.80) | N/A | N/A |
| Judgment | 0.36 (0.22, 0.49) | 0.54 (0.25, 0.83) | 0.28 (0.15, 0.42) | -0.12 (-0.34, 0.09) |
| Professionalism | 0.47 (0.32, 0.63) | 0.47 (0.23, 0.72) | 0.35 (0.18, 0.51) | -0.01 (-0.29, 0.26) |
| Overall | 0.50 (0.36, 0.64) | 0.45 (0.24, 0.67) | 0.31 (0.16, 0.48) | 0.07 (-0.20, 0.34) |

NOTE: Abbreviations: CI, confidence interval; N/A, not applicable.

DISCUSSION

In this study we found that an evaluation tool for direct observation of housestaff and hospitalists generated a range of scores and was well validated in the sense of performing similarly across 2 different institutions and among both trainees and attendings, while having high internal consistency. However, external evaluators gave consistently lower marks than peer evaluators at both sites, resulting in low reliability when comparing these 2 groups of raters.

It has traditionally been difficult to conduct direct evaluations of handoffs, because they may occur at haphazard times, in variable locations, and without very much advance notice. For this reason, several attempts have been made to incorporate peers in evaluations of handoff practices.[5, 39, 40] Using peers to conduct evaluations also has the advantage that peers are more likely to be familiar with the patients being handed off and might recognize handoff flaws that external evaluators would miss. Nonetheless, peer evaluations have some important liabilities. Peers may be unwilling or unable to provide honest critiques of their colleagues given that they must work closely together for years. Trainee peers may also lack sufficient clinical expertise or experience to accurately assess competence. In our study, we found that peers gave consistently higher marks to their colleagues than did external evaluators, suggesting they may have found it difficult to criticize their colleagues. We conclude that peer evaluation alone is likely an insufficient means of evaluating handoff quality.

Supervising residents gave marks very similar to those of intern peers, suggesting that they also are unwilling to criticize or insufficiently experienced to evaluate, or alternatively, that the peer evaluations were reasonable. We suspect the latter is unlikely given that external evaluator scores were consistently lower than peers'. One would expect the external evaluators to be biased toward higher scores given that they are not familiar with the patients and are not able to comment on inaccuracies or omissions in the sign‐out.

The tool appeared to perform less well in most cases for recipients than for providers, with a narrower range of scores and lower weighted kappa scores. Although recipients play a key role in ensuring a high‐quality sign‐out by paying close attention, ensuring it is a bidirectional conversation, asking appropriate questions, and reading back key information, it may be that evaluators were unable to place these activities within the same domains that were used for the provider evaluation. An altogether different recipient evaluation approach may be necessary.[41]

In general, scores were clustered at the top of the score range, as is typical for evaluations. One strategy to spread out scores further would be to refine the tool by adding anchors for satisfactory performance, not just the extremes. A second approach might be to reduce the grading scale to only 3 points (unsatisfactory, satisfactory, superior) to force more scores to the middle. However, this approach might limit the discrimination ability of the tool.

We have previously studied the use of this tool among nurses. In that study, we also found consistently higher scores by peers than by external evaluators. We did, however, find a positive effect of experience, in which more experienced nurses received higher scores on average. We did not observe a similar training effect in this study. There are several possible explanations for the lack of a training effect. It is possible that the types of handoffs assessed played a role. At UCM, some assessed handoffs were night staff to day staff, which might be of lower quality than day staff to night staff handoffs, whereas at Yale, all handoffs were day to night teams. Thus, average scores at UCM (primarily hospitalists) might have been lowered by the type of handoff provided. Given that hospitalist evaluations were conducted exclusively at UCM and housestaff evaluations exclusively at Yale, the lack of difference between hospitalists and housestaff may also have been related to differences in evaluation practice or handoff practice at the 2 sites, rather than to training level. Third, in our experience, attending physicians provide briefer, less comprehensive sign‐outs than trainees, particularly when communicating with equally experienced attendings; these sign‐outs may appropriately be scored lower on the tool. Fourth, the great majority of the hospitalists at UCM were within 5 years of residency and therefore not very much more experienced than the trainees. Finally, it is possible that skills do not improve over time given widespread lack of observation and feedback during training years for this important skill.

The high internal consistency of most of the subdomains and the loading of all subdomains except setting onto 1 factor are evidence of convergent construct validity, but also suggest that evaluators have difficulty distinguishing among components of sign‐out quality. Internal consistency may also reflect a halo effect, in which scores on different domains are all influenced by a common overall judgment.[42] We are currently testing a shorter version of the tool including domains only for content, professionalism, and setting in addition to overall score. The fact that setting did not correlate as well with the other domains suggests that sign‐out practitioners may not have or exercise control over their surroundings. Consequently, it may ultimately be reasonable to drop this domain from the tool, or alternatively, to refocus on the need to ensure a quiet setting during sign‐out skills training.

There are several limitations to this study. External evaluations were conducted by personnel who were not familiar with the patients, and they may therefore have overestimated the quality of sign‐out. Studying different types of physicians at different sites might have limited our ability to identify differences by training level. As is commonly seen in evaluation studies, scores were skewed to the high end, although we did observe some use of the full range of the tool. Finally, we were limited in our ability to test inter‐rater reliability because of the multiple sources of variability in the data (numerous different raters, with different backgrounds at different settings, rating different individuals).

In summary, we developed a handoff evaluation tool that was easily completed by housestaff and attendings without training, that performed similarly in a variety of different settings at 2 institutions, and that can in principle be used either for peer evaluations or for external evaluations, although peer evaluations may be positively biased. Further work will be done to refine and simplify the tool.

ACKNOWLEDGMENTS

Disclosures: Development and evaluation of the sign‐out CEX was supported by a grant from the Agency for Healthcare Research and Quality (1R03HS018278‐01). Dr. Arora is supported by a National Institute on Aging grant (K23 AG033763). Dr. Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30AG021342 NIH/NIA). No funding source had any role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality, the National Institute on Aging, the National Institutes of Health, or the American Federation for Aging Research. Dr. Horwitz had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. An earlier version of this work was presented as a poster presentation at the Society of General Internal Medicine Annual Meeting in Orlando, Florida on May 9, 2012. Dr. Rand is now with the Department of Medicine, University of Vermont College of Medicine, Burlington, Vermont. Mr. Staisiunas is now with the Law School, Marquette University, Milwaukee, Wisconsin. The authors declare they have no conflicts of interest.

Appendix A

PROVIDER HAND‐OFF CEX TOOL

[Form not reproduced here; see Supporting Information, Appendix 1, in the online version of this article.]

RECIPIENT HAND‐OFF CEX TOOL

[Form not reproduced here; see Supporting Information, Appendix 1, in the online version of this article.]

Appendix B

Handoff CEX scores by site of evaluation

| Domain | Provider: UC (N=172) | Provider: Yale (N=170) | P Value | Recipient: UC (N=163) | Recipient: Yale (N=167) | P Value |
| Setting | 7 (2–9) | 7 (3–9) | 0.32 | 7 (2–9) | 7 (3–9) | 0.36 |
| Organization | 8 (2–9) | 7 (3–9) | 0.30 | 7 (2–9) | 8 (5–9) | 0.001 |
| Communication | 7 (1–9) | 7 (3–9) | 0.67 | 7 (2–9) | 8 (4–9) | 0.03 |
| Content | 7 (2–9) | 7 (2–9) | | N/A | N/A | N/A |
| Judgment | 8 (3–9) | 7 (3–9) | 0.60 | 7 (3–9) | 8 (4–9) | 0.001 |
| Professionalism | 8 (2–9) | 8 (3–9) | 0.67 | 8 (3–9) | 8 (4–9) | 0.35 |
| Overall | 7 (2–9) | 7 (3–9) | 0.41 | 7 (2–9) | 8 (4–9) | 0.005 |

NOTE: Values are median (range). Abbreviation: N/A, not applicable.

Appendix C

Spearman correlation, recipients (N=330)

| | Setting | Organization | Communication | Judgment | Professionalism |
| Setting | 1.00 | 0.46 | 0.48 | 0.47 | 0.40 |
| Organization | 0.46 | 1.00 | 0.78 | 0.75 | 0.75 |
| Communication | 0.48 | 0.78 | 1.00 | 0.85 | 0.77 |
| Judgment | 0.47 | 0.75 | 0.85 | 1.00 | 0.74 |
| Professionalism | 0.40 | 0.75 | 0.77 | 0.74 | 1.00 |
| Overall | 0.60 | 0.77 | 0.84 | 0.82 | 0.77 |

NOTE: All P values <0.0001.

Appendix D

Factor analysis results for provider evaluations

Rotated Factor Pattern (Standardized Regression Coefficients), N=336

| | Factor 1 | Factor 2 |
| Organization | 0.64 | 0.27 |
| Communication | 0.79 | 0.16 |
| Content | 0.82 | 0.06 |
| Judgment | 0.86 | 0.06 |
| Professionalism | 0.66 | 0.23 |
| Setting | 0.18 | 0.29 |

References
  1. Horwitz LI, Krumholz HM, Green ML, Huot SJ. Transfers of patient care between house staff on internal medicine wards: a national survey. Arch Intern Med. 2006;166(11):1173–1177.
  2. Accreditation Council for Graduate Medical Education. Common program requirements. 2011. Available at: http://www.acgme-2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed August 23, 2011.
  3. Petersen LA, Brennan TA, O'Neil AC, Cook EF, Lee TH. Does housestaff discontinuity of care increase the risk for preventable adverse events? Ann Intern Med. 1994;121(11):866–872.
  4. Sutcliffe KM, Lewton E, Rosenthal MM. Communication failures: an insidious contributor to medical mishaps. Acad Med. 2004;79(2):186–194.
  5. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign‐out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14(6):401–407.
  6. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign‐out for patient care. Arch Intern Med. 2008;168(16):1755–1760.
  7. Borowitz SM, Waggoner‐Fountain LA, Bass EJ, Sledd RM. Adequacy of information transferred at resident sign‐out (in‐hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17(1):6–10.
  8. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. What are covering doctors told about their patients? Analysis of sign‐out among internal medicine house staff. Qual Saf Health Care. 2009;18(4):248–255.
  9. Gakhar B, Spencer AL. Using direct observation, formal evaluation, and an interactive curriculum to improve the sign‐out practices of internal medicine interns. Acad Med. 2010;85(7):1182–1188.
  10. Raduma‐Tomas MA, Flin R, Yule S, Williams D. Doctors' handovers in hospitals: a literature review. Qual Saf Health Care. 2011;20(2):128–133.
  11. Bump GM, Jovin F, Destefano L, et al. Resident sign‐out and patient hand‐offs: opportunities for improvement. Teach Learn Med. 2011;23(2):105–111.
  12. Helms AS, Perez TE, Baltz J, et al. Use of an appreciative inquiry approach to improve resident sign‐out in an era of multiple shift changes. J Gen Intern Med. 2012;27(3):287–291.
  13. Horwitz LI, Dombroski J, Murphy TE, Farnan JM, Johnson JK, Arora VM. Validation of a handoff assessment tool: the Handoff CEX [published online ahead of print June 7, 2012]. J Clin Nurs. doi: 10.1111/j.1365-2702.2012.04131.x.
  14. Norcini JJ, Blank LL, Arnold GK, Kimball HR. The mini‐CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med. 1995;123(10):795–799.
  15. Norcini JJ, Blank LL, Arnold GK, Kimball HR. Examiner differences in the mini‐CEX. Adv Health Sci Educ Theory Pract. 1997;2(1):27–33.
  16. Durning SJ, Cation LJ, Markert RJ, Pangaro LN. Assessing the reliability and validity of the mini‐clinical evaluation exercise for internal medicine residency training. Acad Med. 2002;77(9):900–904.
  17. Holmboe ES, Huot S, Chung J, Norcini J, Hawkins RE. Construct validity of the mini‐clinical evaluation exercise (mini‐CEX). Acad Med. 2003;78(8):826–830.
  18. Horwitz LI, Meredith T, Schuur JD, Shah NR, Kulkarni RG, Jenq GY. Dropping the baton: a qualitative analysis of failures during the transition from emergency department to inpatient care. Ann Emerg Med. 2009;53(6):701–710.e4.
  19. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign‐out skills curriculum. J Gen Intern Med. 2007;22(10):1470–1474.
  20. Horwitz LI, Moin T, Wang L, Bradley EH. Mixed methods evaluation of oral sign‐out practices. J Gen Intern Med. 2007;22(S1):S114.
  21. Horwitz LI, Parwani V, Shah NR, et al. Evaluation of an asynchronous physician voicemail sign‐out for emergency department admissions. Ann Emerg Med. 2009;54(3):368–378.
  22. Horwitz LI, Schuster KM, Thung SF, et al. An institution‐wide handoff task force to standardise and improve physician handoffs. BMJ Qual Saf. 2012;21(10):863–871.
  23. Arora V, Johnson J. A model for building a standardized hand‐off protocol. Jt Comm J Qual Patient Saf. 2006;32(11):646–655.
  24. Arora V, Kao J, Lovinger D, Seiden SC, Meltzer D. Medication discrepancies in resident sign‐outs and their potential to harm. J Gen Intern Med. 2007;22(12):1751–1755.
  25. Arora VM, Johnson JK, Meltzer DO, Humphrey HJ. A theoretical framework and competency‐based approach to improving handoffs. Qual Saf Health Care. 2008;17(1):11–14.
  26. Arora VM, Manjarrez E, Dressler DD, Basaviah P, Halasyamani L, Kripalani S. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433–440.
  27. Chang VY, Arora VM, Lev‐Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand‐off communication. Pediatrics. 2010;125(3):491–496.
  28. Johnson JK, Arora VM. Improving clinical handovers: creating local solutions for a global problem. Qual Saf Health Care. 2009;18(4):244–245.
  29. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out. J Hosp Med. 2006;1(4):257–266.
  30. Salerno SM, Arnett MV, Domanski JP. Standardized sign‐out reduces intern perception of medical errors on the general internal medicine ward. Teach Learn Med. 2009;21(2):121–126.
  31. Haig KM, Sutton S, Whittington J. SBAR: a shared mental model for improving communication between clinicians. Jt Comm J Qual Patient Saf. 2006;32(3):167–175.
  32. Patterson ES. Structuring flexibility: the potential good, bad and ugly in standardisation of handovers. Qual Saf Health Care. 2008;17(1):4–5.
  33. Patterson ES, Roth EM, Woods DD, Chow R, Gomes JO. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125–132.
  34. Ratanawongsa N, Bolen S, Howell EE, Kern DE, Sisson SD, Larriviere D. Residents' perceptions of professionalism in training and practice: barriers, promoters, and duty hour requirements. J Gen Intern Med. 2006;21(7):758–763.
  35. Coiera E, Tombs V. Communication behaviours in a hospital setting: an observational study. BMJ. 1998;316(7132):673–676.
  36. Coiera EW, Jayasuriya RA, Hardy J, Bannan A, Thorpe ME. Communication loads on clinical staff in the emergency department. Med J Aust. 2002;176(9):415–418.
  37. Ong MS, Coiera E. A systematic review of failures in handoff communication during intrahospital transfers. Jt Comm J Qual Patient Saf. 2011;37(6):274–284.
  38. Farnan JM, Paro JA, Rodriguez RM, et al. Hand‐off education and evaluation: piloting the observed simulated hand‐off experience (OSHE). J Gen Intern Med. 2010;25(2):129–134.
  39. Kitch BT, Cooper JB, Zapol WM, et al. Handoffs causing patient harm: a survey of medical and surgical house staff. Jt Comm J Qual Patient Saf. 2008;34(10):563–570.
  40. Li P, Stelfox HT, Ghali WA. A prospective observational study of physician handoff for intensive‐care‐unit‐to‐ward patient transfers. Am J Med. 2011;124(9):860–867.
  41. Greenstein E, Arora V, Banerjee S, Staisiunas P, Farnan J. Characterizing physician listening behavior during hospitalist handoffs using the HEAR checklist [published online ahead of print December 20, 2012]. BMJ Qual Saf. doi: 10.1136/bmjqs-2012-001138.
  42. Thorndike EL. A constant error in psychological ratings. J Appl Psychol. 1920;4(1):25.
Article PDF
Issue
Journal of Hospital Medicine - 8(4)
Publications
Page Number
191-200
Sections
Files
Files
Article PDF
Article PDF

Transfers among trainee physicians within the hospital typically occur at least twice a day, and their frequency has increased as trainee work hours have declined.[1] The 2011 Accreditation Council for Graduate Medical Education (ACGME) guidelines,[2] which restrict intern shifts to 16 hours (from a previous maximum of 30), have likely increased the frequency of physician trainee handoffs even further. Similarly, transfers among hospitalist attendings occur at least twice a day, given typical shifts of 8 to 12 hours.

Given the frequency of transfers, and the potential for harm generated by failed transitions,[3, 4, 5, 6] end-of-shift written and verbal handoffs have assumed increasing importance in hospital care for both trainees and hospitalist attendings.

The ACGME now requires that programs assess the competency of trainees in handoff communication.[2] Yet, there are few tools for assessing the quality of sign‐out communication. Those that exist primarily focus on the written sign‐out, and are rarely validated.[7, 8, 9, 10, 11, 12] Furthermore, it is uncertain whether such assessments must be done by supervisors or whether peers can participate in the evaluation. In this prospective multi‐institutional study we assess the performance characteristics of a verbal sign‐out evaluation tool for internal medicine housestaff and hospitalist attendings, and examine whether it can be used by peers as well as by external evaluators. This tool has previously been found to effectively discriminate between experienced and inexperienced nurses conducting nursing handoffs.[13]

METHODS

Tool Design and Measures

The Handoff CEX (clinical evaluation exercise) is a structured assessment based on the format of the mini‐CEX, an instrument used to assess the quality of history and physical examination by trainees for which validation studies have previously been conducted.[14, 15, 16, 17] We developed the tool based on themes we identified from our own expertise,[1, 5, 6, 8, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] the ACGME core competencies for trainees,[2] and the literature to maximize content validity. First, standardization has numerous demonstrable benefits for safety in general and handoffs in particular.[30, 31, 32] Consequently we created a domain for organization in which standardization was a characteristic of high performance.

Second, there is evidence that people engaged in conversation routinely overestimate peer comprehension,[27] and that explicit strategies to combat this overestimation, such as confirming understanding, explicitly assigning tasks rather than using open‐ended language, and using concrete language, are effective.[33] Accordingly we created a domain for communication skills, which is also an ACGME competency.

Third, although there were no formal guidelines for sign-out content when we developed this tool, our own research had demonstrated that the content elements most often missing and felt to be important by stakeholders were related to clinical condition and explicating thinking processes,[5, 6] so we created a domain for content that highlighted these areas and met the ACGME competency of medical knowledge. In accordance with standards for evaluation of learners, we incorporated a domain for judgment to identify where trainees were in the RIME spectrum of reporter, interpreter, manager, and educator.

Next, we added a section for professionalism in accordance with the ACGME core competencies of professionalism and patient care.[34] To avoid the disinclination of peers to label each other unprofessional, we labeled the professionalism domain as patient‐focused on the tool.

Finally, we included a domain for setting because of an extensive literature demonstrating increased handoff failures in noisy or interruptive settings.[35, 36, 37] We then revised the tool slightly based on our experiences among nurses and students.[13, 38] The final tool included the 6 domains described above and an assessment of overall competency. Each domain was scored on a 9-point scale and included descriptive anchors at high and low ends of performance. We further divided the scale into 3 main sections: unsatisfactory (score 1–3), satisfactory (4–6), and superior (7–9). We designed 2 tools, 1 to assess the person providing the handoff and 1 to assess the handoff recipient, each with its own descriptive anchors. The recipient tool did not include a content domain (see Supporting Information, Appendix 1, in the online version of this article).
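The scoring structure just described is compact enough to encode directly, which can be convenient when building score sheets or analysis files. Below is a minimal Python sketch of the domains and score bands; the identifiers are ours, not the instrument's, and the actual anchor wording lives in the appendix tools.

```python
# Domains on the provider form of the Handoff CEX; the recipient form
# omits "content". Each domain is scored 1-9 with anchors at the extremes.
PROVIDER_DOMAINS = ("setting", "organization", "communication",
                    "content", "judgment", "professionalism")
RECIPIENT_DOMAINS = tuple(d for d in PROVIDER_DOMAINS if d != "content")

def performance_band(score: int) -> str:
    """Map a 9-point domain score to the tool's three sections."""
    if not 1 <= score <= 9:
        raise ValueError("Handoff CEX scores range from 1 to 9")
    if score <= 3:
        return "unsatisfactory"
    if score <= 6:
        return "satisfactory"
    return "superior"

assert performance_band(7) == "superior"
```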

Setting and Subjects

We tested the tool in 2 different urban academic medical centers: the University of Chicago Medicine (UCM) and Yale‐New Haven Hospital (Yale). At UCM, we tested the tool among hospitalists, nurse practitioners, and physician assistants during the Monday and Tuesday morning and Friday evening sign‐out sessions. At Yale, we tested the tool among housestaff during the evening sign‐out session from the primary team to the on‐call covering team.

The UCM is a 550-bed urban academic medical center in which the nonteaching hospitalist service cares for patients with liver disease or with end-stage renal or lung disease awaiting transplant, and a small fraction of general medicine and oncology patients when the housestaff service exceeds its cap. No formal training on sign-out is provided to attending or midlevel providers. The nonteaching hospitalist service operates as a separate service from the housestaff service and consists of 38 hospitalist clinicians (hospitalist attendings, nurse practitioners, and physician assistants). There are 2 handoffs each day. In the morning the departing night hospitalist hands off to the incoming daytime hospitalist or midlevel provider. These handoffs occur at 7:30 am in a dedicated room. In the evening the daytime hospitalist or midlevel provider hands off to an incoming night hospitalist. This handoff occurs at 5:30 pm or 7:30 pm in a dedicated location. The written sign-out is maintained on a Microsoft Word (Microsoft Corp., Redmond, WA) document on a password-protected server and updated daily.

Yale is a 946‐bed urban academic medical center with a large internal medicine training program. Formal sign‐out education that covers the main domains of the tool is provided to new interns during the first 3 months of the year,[19] and a templated electronic medical record‐based electronic written handoff report is produced by the housestaff for all patients.[22] Approximately half of inpatient medicine patients are cared for by housestaff teams, which are entirely separate from the hospitalist service. Housestaff sign‐out occurs between 4 pm and 7 pm every night. At a minimum, the departing intern signs out to the incoming intern; this handoff is typically supervised by at least 1 second‐ or third‐year resident. All patients are signed out verbally; in addition, the written handoff report is provided to the incoming team. Most handoffs occur in a quiet charting room.

Data Collection

Data collection at UCM occurred between March and December 2010 on 3 days of each week: Mondays, Tuesdays, and Fridays. On Mondays and Tuesdays the morning handoffs were observed; on Fridays the evening handoffs were observed. Data collection at Yale occurred between March and May 2011. Only evening handoffs from the primary team to the overnight coverage were observed. At both sites, participants provided verbal informed consent prior to data collection. At the time of an eligible sign‐out session, a research assistant (D.R. at Yale, P.S. at UCM) provided the evaluation tools to all members of the incoming and outgoing teams, and observed the sign‐out session himself. Each person providing a handoff was asked to evaluate the recipient of the handoff; each person receiving a handoff was asked to evaluate the provider of the handoff. In addition, the trained third‐party observer (D.R., P.S.) evaluated both the provider and recipient of the handoff. The external evaluators were trained in principles of effective communication and the use of the tool, with specific review of anchors at each end of each domain. One evaluator had a DO degree and was completing an MPH degree. The second evaluator was an experienced clinical research assistant whose training consisted of supervised observation of 10 handoffs by a physician investigator. At Yale, if a resident was present, she or he was also asked to evaluate both the provider and recipient of the handoff. Consequently, every sign‐out session included at least 2 evaluations of each participant, 1 by a peer evaluator and 1 by a consistent external evaluator who did not know the patients. At Yale, many sign‐outs also included a third evaluation by a resident supervisor.

The study was approved by the institutional review boards at both UCM and Yale.

Statistical Analysis

We obtained mean, median, and interquartile range of scores for each subdomain of the tool as well as the overall assessment of handoff quality. We assessed convergent construct validity by assessing performance of the tool in different contexts. To do so, we determined whether scores differed by type of participant (provider or recipient), by site, by training level of evaluatee, or by type of evaluator (external, resident supervisor, or peer) by using Wilcoxon rank sum tests and Kruskal-Wallis tests. For the assessment of differences in ratings by training level, we used evaluations of sign-out providers only, because the 2 sites differed in scores for recipients. We also assessed construct validity by using Spearman rank correlation coefficients to describe the internal consistency of the tool in terms of the correlation between domains of the tool, and we conducted an exploratory factor analysis to gain insight into whether the subdomains of the tool were measuring the same construct. In conducting this analysis, we restricted the dataset to evaluations of sign-out providers only, and used a principal components estimation method, a promax rotation, and squared multiple correlation communality priors. Finally, we conducted some preliminary studies of reliability by testing whether different types of evaluators provided similar assessments. We calculated a weighted kappa using Fleiss-Cohen weights for external versus peer scores and again for supervising resident versus peer scores (Yale only). We were not able to assess test-retest reliability given the nature of the sign-out process. Statistical significance was defined by a P value <0.05, and analyses were performed using SAS 9.2 (SAS Institute, Cary, NC).
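For readers who want to reproduce this style of analysis, the sketch below applies the same family of nonparametric tests to simulated 9-point scores using scipy; the group names and generated data are illustrative only, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative stand-ins for 9-point Handoff CEX scores by evaluator type.
peer_scores = rng.integers(5, 10, size=150)
external_scores = rng.integers(4, 9, size=150)
resident_scores = rng.integers(5, 10, size=45)

# Two groups (e.g., provider vs recipient scores): Wilcoxon rank sum test.
stat_w, p_w = stats.ranksums(peer_scores, external_scores)

# Three or more groups (e.g., peer vs resident vs external): Kruskal-Wallis.
stat_kw, p_kw = stats.kruskal(peer_scores, resident_scores, external_scores)

# Correlation between two domains (internal consistency): Spearman rank.
organization = rng.integers(4, 10, size=150)
communication = np.clip(organization + rng.integers(-1, 2, size=150), 1, 9)
rho, p_rho = stats.spearmanr(organization, communication)

print(f"rank sum P={p_w:.3f}, Kruskal-Wallis P={p_kw:.3f}, rho={rho:.2f}")
```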

RESULTS

A total of 149 handoff sessions were observed: 89 at UCM and 60 at Yale. Each site conducted a similar total number of evaluations: 336 at UCM, 337 at Yale. These sessions involved 97 unique individuals, 34 at UCM and 63 at Yale. Overall scores were high at both sites, but a wide range of scores was applied (Table 1).

Table 1. Median, Mean, and Range of Handoff CEX Scores in Each Domain, Providers and Recipients

Abbreviations: IQR, interquartile range; SD, standard deviation; N/A, not applicable.

| Domain | Provider (N=343): Median (IQR) | Mean (SD) | Range | Recipient (N=330): Median (IQR) | Mean (SD) | Range | P Value |
| Setting | 7 (6–9) | 7.0 (1.7) | 2–9 | 7 (6–9) | 7.3 (1.6) | 2–9 | 0.05 |
| Organization | 7 (6–8) | 7.2 (1.5) | 2–9 | 8 (6–9) | 7.4 (1.4) | 2–9 | 0.07 |
| Communication | 7 (6–9) | 7.2 (1.6) | 1–9 | 8 (7–9) | 7.4 (1.5) | 2–9 | 0.22 |
| Content | 7 (6–8) | 7.0 (1.6) | 2–9 | N/A | N/A | N/A | N/A |
| Judgment | 8 (6–8) | 7.3 (1.4) | 3–9 | 8 (7–9) | 7.5 (1.4) | 3–9 | 0.06 |
| Professionalism | 8 (7–9) | 7.4 (1.5) | 2–9 | 8 (7–9) | 7.6 (1.4) | 3–9 | 0.23 |
| Overall | 7 (6–8) | 7.1 (1.5) | 2–9 | 7 (6–8) | 7.4 (1.4) | 2–9 | 0.02 |

Handoff Providers

A total of 343 evaluations of handoff providers were completed regarding 67 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff provider evaluation tool was professionalism (median: 8; interquartile range [IQR]: 7–9). The lowest rated domain was content (median: 7; IQR: 6–8) (Table 1).

Handoff Recipients

A total of 330 evaluations of handoff recipients were completed regarding 58 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff recipient evaluation tool was professionalism, with a median of 8 (IQR: 7–9). The lowest rated domain was setting, with a median score of 7 (IQR: 6–9) (Table 1).

Validity Testing

Comparing provider scores to recipient scores, recipients received significantly higher scores for overall assessment (Table 1). Scores at UCM and Yale were similar in all domains for providers but were slightly lower at UCM in several domains for recipients (see Supporting Information, Appendix 2, in the online version of this article). Scores did not differ significantly by training level (Table 2). Third‐party external evaluators consistently gave lower marks for the same handoff than peer evaluators did (Table 3).

Table 2. Handoff CEX Scores by Training Level, Providers Only

Values are median (range). Abbreviation: NP/PA, nurse practitioner/physician assistant.

| Domain | NP/PA (N=33) | Subintern or Intern (N=170) | Resident (N=44) | Hospitalist (N=95) | P Value |
| Setting | 7 (2–9) | 7 (3–9) | 7 (4–9) | 7 (2–9) | 0.89 |
| Organization | 8 (4–9) | 7 (2–9) | 7 (4–9) | 8 (3–9) | 0.11 |
| Communication | 8 (4–9) | 7 (2–9) | 7 (4–9) | 8 (1–9) | 0.72 |
| Content | 7 (3–9) | 7 (2–9) | 7 (4–9) | 7 (2–9) | 0.92 |
| Judgment | 8 (5–9) | 7 (3–9) | 8 (4–9) | 8 (4–9) | 0.09 |
| Professionalism | 8 (4–9) | 7 (2–9) | 8 (3–9) | 8 (4–9) | 0.82 |
| Overall | 7 (3–9) | 7 (2–9) | 8 (4–9) | 7 (2–9) | 0.28 |
Table 3. Handoff CEX Scores by Peer Versus External Evaluators

Values are median (range). Abbreviation: N/A, not applicable.

| Domain | Provider: Peer (N=152) | Resident Supervisor (N=43) | External (N=147) | P Value | Recipient: Peer (N=145) | Resident Supervisor (N=43) | External (N=142) | P Value |
| Setting | 8 (3–9) | 7 (3–9) | 7 (2–9) | 0.02 | 8 (2–9) | 7 (3–9) | 7 (2–9) | <0.001 |
| Organization | 8 (3–9) | 8 (3–9) | 7 (2–9) | 0.18 | 8 (3–9) | 8 (6–9) | 7 (2–9) | <0.001 |
| Communication | 8 (3–9) | 8 (3–9) | 7 (1–9) | <0.001 | 8 (3–9) | 8 (4–9) | 7 (2–9) | <0.001 |
| Content | 8 (3–9) | 8 (2–9) | 7 (2–9) | <0.001 | N/A | N/A | N/A | N/A |
| Judgment | 8 (4–9) | 8 (3–9) | 7 (3–9) | <0.001 | 8 (3–9) | 8 (4–9) | 7 (3–9) | <0.001 |
| Professionalism | 8 (3–9) | 8 (5–9) | 7 (2–9) | 0.02 | 8 (3–9) | 8 (6–9) | 7 (3–9) | <0.001 |
| Overall | 8 (3–9) | 8 (3–9) | 7 (2–9) | 0.001 | 8 (2–9) | 8 (4–9) | 7 (2–9) | <0.001 |

Spearman rank correlation coefficients among the CEX subdomains for provider scores ranged from 0.71 to 0.86, except for setting (Table 4). Setting was less well correlated with the other subdomains, with correlation coefficients ranging from 0.39 to 0.41. Correlations between individual domains and the overall rating ranged from 0.80 to 0.86, except setting, which had a correlation of 0.55. Every correlation was significant at P<0.001. Correlation coefficients for recipient scores were very similar to those for provider scores (see Supporting Information, Appendix 3, in the online version of this article).

Table 4. Spearman Correlation Coefficients, Provider Evaluations (N=342)

All P values <0.0001.

| | Setting | Organization | Communication | Content | Judgment | Professionalism |
| Setting | 1.00 | 0.40 | 0.40 | 0.39 | 0.39 | 0.41 |
| Organization | 0.40 | 1.00 | 0.80 | 0.71 | 0.77 | 0.73 |
| Communication | 0.40 | 0.80 | 1.00 | 0.79 | 0.82 | 0.77 |
| Content | 0.39 | 0.71 | 0.79 | 1.00 | 0.80 | 0.74 |
| Judgment | 0.39 | 0.77 | 0.82 | 0.80 | 1.00 | 0.78 |
| Professionalism | 0.41 | 0.73 | 0.77 | 0.74 | 0.78 | 1.00 |
| Overall | 0.55 | 0.80 | 0.84 | 0.83 | 0.86 | 0.82 |

We analyzed 343 provider evaluations in the factor analysis; there were 6 missing values. The scree plot of eigenvalues did not support more than 1 factor; however, the rotated factor pattern for standardized regression coefficients for the first factor and the final communality estimates showed the setting component yielding smaller values than did other scale components (see Supporting Information, Appendix 4, in the online version of this article).
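Because Table 4 publishes the full provider correlation matrix, the scree decision can be spot-checked with nothing more than an eigenvalue decomposition. The numpy sketch below uses the rounded published coefficients, so it only approximates the analysis run on the raw evaluations.

```python
import numpy as np

# Provider subdomain correlations from Table 4 (setting, organization,
# communication, content, judgment, professionalism).
corr = np.array([
    [1.00, 0.40, 0.40, 0.39, 0.39, 0.41],
    [0.40, 1.00, 0.80, 0.71, 0.77, 0.73],
    [0.40, 0.80, 1.00, 0.79, 0.82, 0.77],
    [0.39, 0.71, 0.79, 1.00, 0.80, 0.74],
    [0.39, 0.77, 0.82, 0.80, 1.00, 0.78],
    [0.41, 0.73, 0.77, 0.74, 0.78, 1.00],
])

# Eigenvalues of the correlation matrix, largest first; a scree inspection
# keeps factors whose eigenvalues clearly dominate the rest.
eigenvalues = np.linalg.eigvalsh(corr)[::-1]
print(np.round(eigenvalues, 2))  # one dominant eigenvalue -> 1-factor solution
```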

Reliability Testing

Weighted kappa scores for provider evaluations ranged from 0.28 (95% confidence interval [CI]: 0.01, 0.56) for setting to 0.59 (95% CI: 0.38, 0.80) for organization, and were generally higher for resident versus peer comparisons than for external versus peer comparisons. Weighted kappa scores for recipient evaluation were slightly lower for external versus peer evaluations, but agreement was no better than chance for resident versus peer evaluations (Table 5).

Table 5. Weighted Kappa Scores

Abbreviations: CI, confidence interval; N/A, not applicable.

| Domain | Provider: External vs Peer, N=144 (95% CI) | Provider: Resident vs Peer, N=42 (95% CI) | Recipient: External vs Peer, N=134 (95% CI) | Recipient: Resident vs Peer, N=43 (95% CI) |
| Setting | 0.39 (0.24, 0.54) | 0.28 (0.01, 0.56) | 0.34 (0.20, 0.48) | 0.48 (0.27, 0.69) |
| Organization | 0.43 (0.29, 0.58) | 0.59 (0.39, 0.80) | 0.39 (0.22, 0.55) | 0.03 (-0.23, 0.29) |
| Communication | 0.34 (0.19, 0.49) | 0.52 (0.37, 0.68) | 0.36 (0.22, 0.51) | 0.02 (-0.18, 0.23) |
| Content | 0.38 (0.25, 0.51) | 0.53 (0.27, 0.80) | N/A | N/A |
| Judgment | 0.36 (0.22, 0.49) | 0.54 (0.25, 0.83) | 0.28 (0.15, 0.42) | -0.12 (-0.34, 0.09) |
| Professionalism | 0.47 (0.32, 0.63) | 0.47 (0.23, 0.72) | 0.35 (0.18, 0.51) | -0.01 (-0.29, 0.26) |
| Overall | 0.50 (0.36, 0.64) | 0.45 (0.24, 0.67) | 0.31 (0.16, 0.48) | 0.07 (-0.20, 0.34) |
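The weighted kappa reported here gives partial credit for near-misses: with Fleiss-Cohen weights, a disagreement is penalized by its squared distance on the 9-point scale. A self-contained numpy sketch of the calculation follows; the function name and the example ratings are ours, purely for illustration.

```python
import numpy as np

def fleiss_cohen_weighted_kappa(rater_a, rater_b, k=9):
    """Weighted kappa for two raters scoring the same handoffs on a 1..k
    scale, with Fleiss-Cohen (quadratic) disagreement weights."""
    a = np.asarray(rater_a) - 1          # shift scores to 0-based indices
    b = np.asarray(rater_b) - 1
    observed = np.zeros((k, k))
    for i, j in zip(a, b):               # joint distribution of ratings
        observed[i, j] += 1
    observed /= observed.sum()
    # Chance agreement implied by each rater's marginal distribution.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic disagreement weights: 0 on the diagonal, 1 at maximal distance.
    idx = np.arange(k)
    disagreement = (idx[:, None] - idx[None, :]) ** 2 / (k - 1) ** 2
    return 1 - (disagreement * observed).sum() / (disagreement * expected).sum()

# Example: one peer and one external evaluator rating the same 6 handoffs.
peer = [7, 8, 8, 9, 6, 7]
external = [6, 7, 8, 8, 5, 7]
print(round(fleiss_cohen_weighted_kappa(peer, external), 2))
```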

DISCUSSION

In this study we found that an evaluation tool for direct observation of housestaff and hospitalists generated a range of scores and was well validated in the sense of performing similarly across 2 different institutions and among both trainees and attendings, while having high internal consistency. However, external evaluators gave consistently lower marks than peer evaluators at both sites, resulting in low reliability when comparing these 2 groups of raters.

It has traditionally been difficult to conduct direct evaluations of handoffs, because they may occur at haphazard times, in variable locations, and without much advance notice. For this reason, several attempts have been made to incorporate peers in evaluations of handoff practices.[5, 39, 40] Using peers to conduct evaluations also has the advantage that peers are more likely to be familiar with the patients being handed off and might recognize handoff flaws that external evaluators would miss. Nonetheless, peer evaluations have some important liabilities. Peers may be unwilling or unable to provide honest critiques of their colleagues given that they must work closely together for years. Trainee peers may also lack sufficient clinical expertise or experience to accurately assess competence. In our study, we found that peers gave consistently higher marks than did external evaluators, suggesting they may have found it difficult to criticize colleagues with whom they work closely. We conclude that peer evaluation alone is likely an insufficient means of evaluating handoff quality.

Supervising residents gave marks very similar to those of intern peers, suggesting that they too were unwilling to criticize or were insufficiently experienced to evaluate, or alternatively that the peer evaluations were reasonable. We suspect the latter is unlikely given that external evaluator scores were consistently lower than peers'. One would expect the external evaluators to be biased toward higher scores given that they were not familiar with the patients and were not able to comment on inaccuracies or omissions in the sign-out.

The tool appeared to perform less well for recipients than for providers in most cases, with a narrower range of scores and low weighted kappa scores. Although recipients play a key role in ensuring a high-quality sign-out by paying close attention, ensuring it is a bidirectional conversation, asking appropriate questions, and reading back key information, it may be that evaluators were unable to place these activities within the same domains that were used for the provider evaluation. An altogether different recipient evaluation approach may be necessary.[41]

In general, scores were clustered at the top of the range, as is typical for evaluations. One strategy to spread out the scores would be to refine the tool by adding anchors for satisfactory performance, not just at the extremes. A second approach might be to reduce the scale to only 3 points (unsatisfactory, satisfactory, superior) to force more scores to the middle; however, this approach might limit the discriminating ability of the tool.

We have previously studied the use of this tool among nurses. In that study, we also found consistently higher scores from peers than from external evaluators. We did, however, find a positive effect of experience, in which more experienced nurses received higher scores on average. We did not observe a similar training effect in this study, and several explanations are possible. First, the types of handoffs assessed may have played a role: at UCM, some assessed handoffs were from night staff to day staff, which might be lower quality than day-to-night handoffs, whereas at Yale all assessed handoffs were from day to night teams. Average scores at UCM (primarily hospitalists) might therefore have been lowered by the type of handoff provided. Second, given that hospitalist evaluations were conducted exclusively at UCM and housestaff evaluations exclusively at Yale, the lack of difference between hospitalists and housestaff may also reflect differences in evaluation practice or handoff practice at the 2 sites rather than training level. Third, in our experience, attending physicians provide briefer, less comprehensive sign-outs than trainees, particularly when communicating with equally experienced attendings; these sign-outs may appropriately be scored lower on the tool. Fourth, the great majority of the hospitalists at UCM were within 5 years of residency and therefore not much more experienced than the trainees. Finally, it is possible that skills do not improve over time, given the widespread lack of observation and feedback on this important skill during the training years.

The high internal consistency of most of the subdomains and the loading of all subdomains except setting onto 1 factor are evidence of convergent construct validity, but also suggest that evaluators have difficulty distinguishing among components of sign‐out quality. Internal consistency may also reflect a halo effect, in which scores on different domains are all influenced by a common overall judgment.[42] We are currently testing a shorter version of the tool including domains only for content, professionalism, and setting in addition to overall score. The fact that setting did not correlate as well with the other domains suggests that sign‐out practitioners may not have or exercise control over their surroundings. Consequently, it may ultimately be reasonable to drop this domain from the tool, or alternatively, to refocus on the need to ensure a quiet setting during sign‐out skills training.

There are several limitations to this study. External evaluations were conducted by personnel who were not familiar with the patients, and they may therefore have overestimated the quality of sign‐out. Studying different types of physicians at different sites might have limited our ability to identify differences by training level. As is commonly seen in evaluation studies, scores were skewed to the high end, although we did observe some use of the full range of the tool. Finally, we were limited in our ability to test inter‐rater reliability because of the multiple sources of variability in the data (numerous different raters, with different backgrounds at different settings, rating different individuals).

In summary, we developed a handoff evaluation tool that was easily completed by housestaff and attendings without training, that performed similarly in a variety of different settings at 2 institutions, and that can in principle be used either for peer evaluations or for external evaluations, although peer evaluations may be positively biased. Further work will be done to refine and simplify the tool.

ACKNOWLEDGMENTS

Disclosures: Development and evaluation of the sign-out CEX was supported by a grant from the Agency for Healthcare Research and Quality (1R03HS018278-01). Dr. Arora is supported by the National Institute on Aging (K23 AG033763). Dr. Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30AG021342 NIH/NIA). No funding source had any role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality, the National Institute on Aging, the National Institutes of Health, or the American Federation for Aging Research. Dr. Horwitz had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. An earlier version of this work was presented as a poster at the Society of General Internal Medicine Annual Meeting in Orlando, Florida, on May 9, 2012. Dr. Rand is now with the Department of Medicine, University of Vermont College of Medicine, Burlington, Vermont. Mr. Staisiunas is now with the Law School, Marquette University, Milwaukee, Wisconsin. The authors declare they have no conflicts of interest.

Appendix A

PROVIDER HAND-OFF CEX TOOL

[Tool reproduced in the online version of this article (Supporting Information, Appendix 1).]

RECIPIENT HAND-OFF CEX TOOL

[Tool reproduced in the online version of this article (Supporting Information, Appendix 1).]

Appendix B. Handoff CEX Scores by Site of Evaluation

Values are median (range). Abbreviation: N/A, not applicable.

| Domain | Provider: UCM (N=172) | Yale (N=170) | P Value | Recipient: UCM (N=163) | Yale (N=167) | P Value |
| Setting | 7 (2–9) | 7 (3–9) | 0.32 | 7 (2–9) | 7 (3–9) | 0.36 |
| Organization | 8 (2–9) | 7 (3–9) | 0.30 | 7 (2–9) | 8 (5–9) | 0.001 |
| Communication | 7 (1–9) | 7 (3–9) | 0.67 | 7 (2–9) | 8 (4–9) | 0.03 |
| Content | 7 (2–9) | 7 (2–9) | | N/A | N/A | N/A |
| Judgment | 8 (3–9) | 7 (3–9) | 0.60 | 7 (3–9) | 8 (4–9) | 0.001 |
| Professionalism | 8 (2–9) | 8 (3–9) | 0.67 | 8 (3–9) | 8 (4–9) | 0.35 |
| Overall | 7 (2–9) | 7 (3–9) | 0.41 | 7 (2–9) | 8 (4–9) | 0.005 |

Appendix C. Spearman Correlation Coefficients, Recipient Evaluations (N=330)

All P values <0.0001.

| | Setting | Organization | Communication | Judgment | Professionalism |
| Setting | 1.00 | 0.46 | 0.48 | 0.47 | 0.40 |
| Organization | 0.46 | 1.00 | 0.78 | 0.75 | 0.75 |
| Communication | 0.48 | 0.78 | 1.00 | 0.85 | 0.77 |
| Judgment | 0.47 | 0.75 | 0.85 | 1.00 | 0.74 |
| Professionalism | 0.40 | 0.75 | 0.77 | 0.74 | 1.00 |
| Overall | 0.60 | 0.77 | 0.84 | 0.82 | 0.77 |

Appendix D. Factor Analysis Results for Provider Evaluations

Rotated Factor Pattern (Standardized Regression Coefficients), N=336

| Domain | Factor 1 | Factor 2 |
| Organization | 0.64 | 0.27 |
| Communication | 0.79 | 0.16 |
| Content | 0.82 | 0.06 |
| Judgment | 0.86 | 0.06 |
| Professionalism | 0.66 | 0.23 |
| Setting | 0.18 | 0.29 |


References
1. Horwitz LI, Krumholz HM, Green ML, Huot SJ. Transfers of patient care between house staff on internal medicine wards: a national survey. Arch Intern Med. 2006;166(11):1173–1177.
2. Accreditation Council for Graduate Medical Education. Common program requirements. 2011. http://www.acgme-2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed August 23, 2011.
3. Petersen LA, Brennan TA, O'Neil AC, Cook EF, Lee TH. Does housestaff discontinuity of care increase the risk for preventable adverse events? Ann Intern Med. 1994;121(11):866–872.
4. Sutcliffe KM, Lewton E, Rosenthal MM. Communication failures: an insidious contributor to medical mishaps. Acad Med. 2004;79(2):186–194.
5. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign-out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14(6):401–407.
6. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign-out for patient care. Arch Intern Med. 2008;168(16):1755–1760.
7. Borowitz SM, Waggoner-Fountain LA, Bass EJ, Sledd RM. Adequacy of information transferred at resident sign-out (in-hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17(1):6–10.
8. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. What are covering doctors told about their patients? Analysis of sign-out among internal medicine house staff. Qual Saf Health Care. 2009;18(4):248–255.
9. Gakhar B, Spencer AL. Using direct observation, formal evaluation, and an interactive curriculum to improve the sign-out practices of internal medicine interns. Acad Med. 2010;85(7):1182–1188.
10. Raduma-Tomas MA, Flin R, Yule S, Williams D. Doctors' handovers in hospitals: a literature review. Qual Saf Health Care. 2011;20(2):128–133.
11. Bump GM, Jovin F, Destefano L, et al. Resident sign-out and patient hand-offs: opportunities for improvement. Teach Learn Med. 2011;23(2):105–111.
12. Helms AS, Perez TE, Baltz J, et al. Use of an appreciative inquiry approach to improve resident sign-out in an era of multiple shift changes. J Gen Intern Med. 2012;27(3):287–291.
13. Horwitz LI, Dombroski J, Murphy TE, Farnan JM, Johnson JK, Arora VM. Validation of a handoff assessment tool: the Handoff CEX [published online ahead of print June 7, 2012]. J Clin Nurs. doi: 10.1111/j.1365-2702.2012.04131.x.
14. Norcini JJ, Blank LL, Arnold GK, Kimball HR. The mini-CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med. 1995;123(10):795–799.
15. Norcini JJ, Blank LL, Arnold GK, Kimball HR. Examiner differences in the mini-CEX. Adv Health Sci Educ Theory Pract. 1997;2(1):27–33.
16. Durning SJ, Cation LJ, Markert RJ, Pangaro LN. Assessing the reliability and validity of the mini-clinical evaluation exercise for internal medicine residency training. Acad Med. 2002;77(9):900–904.
17. Holmboe ES, Huot S, Chung J, Norcini J, Hawkins RE. Construct validity of the mini-clinical evaluation exercise (mini-CEX). Acad Med. 2003;78(8):826–830.
18. Horwitz LI, Meredith T, Schuur JD, Shah NR, Kulkarni RG, Jenq GY. Dropping the baton: a qualitative analysis of failures during the transition from emergency department to inpatient care. Ann Emerg Med. 2009;53(6):701–710.e4.
19. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign-out skills curriculum. J Gen Intern Med. 2007;22(10):1470–1474.
20. Horwitz LI, Moin T, Wang L, Bradley EH. Mixed methods evaluation of oral sign-out practices. J Gen Intern Med. 2007;22(S1):S114.
21. Horwitz LI, Parwani V, Shah NR, et al. Evaluation of an asynchronous physician voicemail sign-out for emergency department admissions. Ann Emerg Med. 2009;54(3):368–378.
22. Horwitz LI, Schuster KM, Thung SF, et al. An institution-wide handoff task force to standardise and improve physician handoffs. BMJ Qual Saf. 2012;21(10):863–871.
23. Arora V, Johnson J. A model for building a standardized hand-off protocol. Jt Comm J Qual Patient Saf. 2006;32(11):646–655.
24. Arora V, Kao J, Lovinger D, Seiden SC, Meltzer D. Medication discrepancies in resident sign-outs and their potential to harm. J Gen Intern Med. 2007;22(12):1751–1755.
25. Arora VM, Johnson JK, Meltzer DO, Humphrey HJ. A theoretical framework and competency-based approach to improving handoffs. Qual Saf Health Care. 2008;17(1):11–14.
26. Arora VM, Manjarrez E, Dressler DD, Basaviah P, Halasyamani L, Kripalani S. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433–440.
27. Chang VY, Arora VM, Lev-Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand-off communication. Pediatrics. 2010;125(3):491–496.
28. Johnson JK, Arora VM. Improving clinical handovers: creating local solutions for a global problem. Qual Saf Health Care. 2009;18(4):244–245.
29. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign-out. J Hosp Med. 2006;1(4):257–266.
30. Salerno SM, Arnett MV, Domanski JP. Standardized sign-out reduces intern perception of medical errors on the general internal medicine ward. Teach Learn Med. 2009;21(2):121–126.
31. Haig KM, Sutton S, Whittington J. SBAR: a shared mental model for improving communication between clinicians. Jt Comm J Qual Patient Saf. 2006;32(3):167–175.
32. Patterson ES. Structuring flexibility: the potential good, bad and ugly in standardisation of handovers. Qual Saf Health Care. 2008;17(1):4–5.
33. Patterson ES, Roth EM, Woods DD, Chow R, Gomes JO. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125–132.
34. Ratanawongsa N, Bolen S, Howell EE, Kern DE, Sisson SD, Larriviere D. Residents' perceptions of professionalism in training and practice: barriers, promoters, and duty hour requirements. J Gen Intern Med. 2006;21(7):758–763.
35. Coiera E, Tombs V. Communication behaviours in a hospital setting: an observational study. BMJ. 1998;316(7132):673–676.
36. Coiera EW, Jayasuriya RA, Hardy J, Bannan A, Thorpe ME. Communication loads on clinical staff in the emergency department. Med J Aust. 2002;176(9):415–418.
37. Ong MS, Coiera E. A systematic review of failures in handoff communication during intrahospital transfers. Jt Comm J Qual Patient Saf. 2011;37(6):274–284.
38. Farnan JM, Paro JA, Rodriguez RM, et al. Hand-off education and evaluation: piloting the observed simulated hand-off experience (OSHE). J Gen Intern Med. 2010;25(2):129–134.
39. Kitch BT, Cooper JB, Zapol WM, et al. Handoffs causing patient harm: a survey of medical and surgical house staff. Jt Comm J Qual Patient Saf. 2008;34(10):563–570.
40. Li P, Stelfox HT, Ghali WA. A prospective observational study of physician handoff for intensive-care-unit-to-ward patient transfers. Am J Med. 2011;124(9):860–867.
41. Greenstein E, Arora V, Banerjee S, Staisiunas P, Farnan J. Characterizing physician listening behavior during hospitalist handoffs using the HEAR checklist [published online ahead of print December 20, 2012]. BMJ Qual Saf. doi: 10.1136/bmjqs-2012-001138.
42. Thorndike EL. A constant error in psychological ratings. J Appl Psychol. 1920;4(1):25.
  29. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out. J Hosp Med. 2006;1(4):257266.
  30. Salerno SM, Arnett MV, Domanski JP. Standardized sign‐out reduces intern perception of medical errors on the general internal medicine ward. Teach Learn Med. 2009;21(2):121126.
  31. Haig KM, Sutton S, Whittington J. SBAR: a shared mental model for improving communication between clinicians. Jt Comm J Qual Patient Saf. 2006;32(3):167175.
  32. Patterson ES. Structuring flexibility: the potential good, bad and ugly in standardisation of handovers. Qual Saf Health Care. 2008;17(1):45.
  33. Patterson ES, Roth EM, Woods DD, Chow R, Gomes JO. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125132.
  34. Ratanawongsa N, Bolen S, Howell EE, Kern DE, Sisson SD, Larriviere D. Residents' perceptions of professionalism in training and practice: barriers, promoters, and duty hour requirements. J Gen Intern Med. 2006;21(7):758763.
  35. Coiera E, Tombs V. Communication behaviours in a hospital setting: an observational study. BMJ. 1998;316(7132):673676.
  36. Coiera EW, Jayasuriya RA, Hardy J, Bannan A, Thorpe ME. Communication loads on clinical staff in the emergency department. Med J Aust. 2002;176(9):415418.
  37. Ong MS, Coiera E. A systematic review of failures in handoff communication during intrahospital transfers. Jt Comm J Qual Patient Saf. 2011;37(6):274284.
  38. Farnan JM, Paro JA, Rodriguez RM, et al. Hand‐off education and evaluation: piloting the observed simulated hand‐off experience (OSHE). J Gen Intern Med. 2010;25(2):129134.
  39. Kitch BT, Cooper JB, Zapol WM, et al. Handoffs causing patient harm: a survey of medical and surgical house staff. Jt Comm J Qual Patient Saf. 2008;34(10):563570.
  40. Li P, Stelfox HT, Ghali WA. A prospective observational study of physician handoff for intensive‐care‐unit‐to‐ward patient transfers. Am J Med. 2011;124(9):860867.
  41. Greenstein E, Arora V, Banerjee S, Staisiunas P, Farnan J. Characterizing physician listening behavior during hospitalist handoffs using the HEAR checklist (published online ahead of print December 20, 2012]. BMJ Qual Saf. doi:10.1136/bmjqs‐2012‐001138.
  42. Thorndike EL. A constant error in psychological ratings. J Appl Psychol. 1920;4(1):25.
References
  1. Horwitz LI, Krumholz HM, Green ML, Huot SJ. Transfers of patient care between house staff on internal medicine wards: a national survey. Arch Intern Med. 2006;166(11):11731177.
  2. Accreditation Council for Graduate Medical Education. Common program requirements. 2011; http://www.acgme‐2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed August 23, 2011.
  3. Petersen LA, Brennan TA, O'Neil AC, Cook EF, Lee TH. Does housestaff discontinuity of care increase the risk for preventable adverse events? Ann Intern Med. 1994;121(11):866872.
  4. Sutcliffe KM, Lewton E, Rosenthal MM. Communication failures: an insidious contributor to medical mishaps. Acad Med. 2004;79(2):186194.
  5. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign‐out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14(6):401407.
  6. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign‐out for patient care. Arch Intern Med. 2008;168(16):17551760.
  7. Borowitz SM, Waggoner‐Fountain LA, Bass EJ, Sledd RM. Adequacy of information transferred at resident sign‐out (in‐hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17(1):610.
  8. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. What are covering doctors told about their patients? Analysis of sign‐out among internal medicine house staff. Qual Saf Health Care. 2009;18(4):248255.
  9. Gakhar B, Spencer AL. Using direct observation, formal evaluation, and an interactive curriculum to improve the sign‐out practices of internal medicine interns. Acad Med. 2010;85(7):11821188.
  10. Raduma‐Tomas MA, Flin R, Yule S, Williams D. Doctors' handovers in hospitals: a literature review. Qual Saf Health Care. 2011;20(2):128133.
  11. Bump GM, Jovin F, Destefano L, et al. Resident sign‐out and patient hand‐offs: opportunities for improvement. Teach Learn Med. 2011;23(2):105111.
  12. Helms AS, Perez TE, Baltz J, et al. Use of an appreciative inquiry approach to improve resident sign‐out in an era of multiple shift changes. J Gen Intern Med. 2012;27(3):287291.
  13. Horwitz LI, Dombroski J, Murphy TE, Farnan JM, Johnson JK, Arora VM. Validation of a handoff assessment tool: the Handoff CEX [published online ahead of print June 7, 2012]. J Clin Nurs. doi: 10.1111/j.1365–2702.2012.04131.x.
  14. Norcini JJ, Blank LL, Arnold GK, Kimball HR. The mini‐CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med. 1995;123(10):795799.
  15. Norcini JJ, Blank LL, Arnold GK, Kimball HR. Examiner differences in the mini‐CEX. Adv Health Sci Educ Theory Pract. 1997;2(1):2733.
  16. Durning SJ, Cation LJ, Markert RJ, Pangaro LN. Assessing the reliability and validity of the mini‐clinical evaluation exercise for internal medicine residency training. Acad Med. 2002;77(9):900904.
  17. Holmboe ES, Huot S, Chung J, Norcini J, Hawkins RE. Construct validity of the miniclinical evaluation exercise (miniCEX). Acad Med. 2003;78(8):826830.
  18. Horwitz LI, Meredith T, Schuur JD, Shah NR, Kulkarni RG, Jenq GY. Dropping the baton: a qualitative analysis of failures during the transition from emergency department to inpatient care. Ann Emerg Med. 2009;53(6):701710.e4.
  19. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign‐out skills curriculum. J Gen Intern Med. 2007;22(10):14701474.
  20. Horwitz LI, Moin T, Wang L, Bradley EH. Mixed methods evaluation of oral sign‐out practices. J Gen Intern Med. 2007;22(S1):S114.
  21. Horwitz LI, Parwani V, Shah NR, et al. Evaluation of an asynchronous physician voicemail sign‐out for emergency department admissions. Ann Emerg Med. 2009;54(3):368378.
  22. Horwitz LI, Schuster KM, Thung SF, et al. An institution‐wide handoff task force to standardise and improve physician handoffs. BMJ Qual Saf. 2012;21(10):863871.
  23. Arora V, Johnson J. A model for building a standardized hand‐off protocol. Jt Comm J Qual Patient Saf. 2006;32(11):646655.
  24. Arora V, Kao J, Lovinger D, Seiden SC, Meltzer D. Medication discrepancies in resident sign‐outs and their potential to harm. J Gen Intern Med. 2007;22(12):17511755.
  25. Arora VM, Johnson JK, Meltzer DO, Humphrey HJ. A theoretical framework and competency‐based approach to improving handoffs. Qual Saf Health Care. 2008;17(1):1114.
  26. Arora VM, Manjarrez E, Dressler DD, Basaviah P, Halasyamani L, Kripalani S. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433440.
  27. Chang VY, Arora VM, Lev‐Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand‐off communication. Pediatrics. 2010;125(3):491496.
  28. Johnson JK, Arora VM. Improving clinical handovers: creating local solutions for a global problem. Qual Saf Health Care. 2009;18(4):244245.
  29. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out. J Hosp Med. 2006;1(4):257266.
  30. Salerno SM, Arnett MV, Domanski JP. Standardized sign‐out reduces intern perception of medical errors on the general internal medicine ward. Teach Learn Med. 2009;21(2):121126.
  31. Haig KM, Sutton S, Whittington J. SBAR: a shared mental model for improving communication between clinicians. Jt Comm J Qual Patient Saf. 2006;32(3):167175.
  32. Patterson ES. Structuring flexibility: the potential good, bad and ugly in standardisation of handovers. Qual Saf Health Care. 2008;17(1):45.
  33. Patterson ES, Roth EM, Woods DD, Chow R, Gomes JO. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125132.
  34. Ratanawongsa N, Bolen S, Howell EE, Kern DE, Sisson SD, Larriviere D. Residents' perceptions of professionalism in training and practice: barriers, promoters, and duty hour requirements. J Gen Intern Med. 2006;21(7):758763.
  35. Coiera E, Tombs V. Communication behaviours in a hospital setting: an observational study. BMJ. 1998;316(7132):673676.
  36. Coiera EW, Jayasuriya RA, Hardy J, Bannan A, Thorpe ME. Communication loads on clinical staff in the emergency department. Med J Aust. 2002;176(9):415418.
  37. Ong MS, Coiera E. A systematic review of failures in handoff communication during intrahospital transfers. Jt Comm J Qual Patient Saf. 2011;37(6):274284.
  38. Farnan JM, Paro JA, Rodriguez RM, et al. Hand‐off education and evaluation: piloting the observed simulated hand‐off experience (OSHE). J Gen Intern Med. 2010;25(2):129134.
  39. Kitch BT, Cooper JB, Zapol WM, et al. Handoffs causing patient harm: a survey of medical and surgical house staff. Jt Comm J Qual Patient Saf. 2008;34(10):563570.
  40. Li P, Stelfox HT, Ghali WA. A prospective observational study of physician handoff for intensive‐care‐unit‐to‐ward patient transfers. Am J Med. 2011;124(9):860867.
  41. Greenstein E, Arora V, Banerjee S, Staisiunas P, Farnan J. Characterizing physician listening behavior during hospitalist handoffs using the HEAR checklist (published online ahead of print December 20, 2012]. BMJ Qual Saf. doi:10.1136/bmjqs‐2012‐001138.
  42. Thorndike EL. A constant error in psychological ratings. J Appl Psychol. 1920;4(1):25.
Issue
Journal of Hospital Medicine - 8(4)
Page Number
191-200
Display Headline
Development of a handoff evaluation tool for shift‐to‐shift physician handoffs: The handoff CEX

Copyright © 2013 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Leora I. Horwitz, MD, Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, P.O. Box 208093, New Haven, CT 06520-8093; Telephone: 203-688-5678; Fax: 203-737-3306; E-mail: leora.horwitz@yale.edu
Attendings' Perception of Housestaff

Article Type
Changed
Sun, 05/21/2017 - 18:13
Display Headline
How do attendings perceive housestaff autonomy? Attending experience, hospitalists, and trends over time

Clinical supervision in graduate medical education (GME) emphasizes patient safety while promoting development of clinical expertise by allowing trainees progressive independence.[1, 2, 3] The importance of the balance between supervision and autonomy has been recognized by national oversight bodies, namely the Institute of Medicine and the Accreditation Council for Graduate Medical Education (ACGME).[4, 5] However, little is known about best practices in supervision, and the model of progressive independence in clinical training lacks empirical support.[3] Limited evidence suggests that enhanced clinical supervision may have positive effects on patient and education‐related outcomes.[6, 7, 8, 9, 10, 11, 12, 13, 14, 15] However, a more nuanced understanding of potential effects of enhanced supervision on resident autonomy and decision making is still required, particularly as preliminary work on increased on‐site hospitalist supervision has yielded mixed results.[16, 17, 18, 19]

Understanding how trainees are entrusted with autonomy will be integral to the ACGME's Next Accreditation System.[20] Entrustable Professional Activities are benchmarks by which resident readiness to progress through training will be judged.[21] The extent to which trainees are entrusted with autonomy is largely determined by the subjective assessment of immediate supervisors, as autonomy is rarely measured or quantified.[3, 22, 23] This judgment of autonomy, most frequently performed by ward attendings, may be subject to significant variation and influenced by factors other than the resident's competence and clinical abilities.

To that end, it is worth considering what factors may affect attending perception of housestaff autonomy and decision making. Recent changes in the GME environment and policy implementation have altered the landscape of the attending workforce considerably. The growth of the hospitalist movement in teaching hospitals, in part due to duty hours, has led to more residents being supervised by hospitalists, who may perceive trainee autonomy differently than other attendings do.[24] This study aims to examine whether factors such as attending demographics and short‐term and long‐term secular trends influence attending perception of housestaff autonomy and participation in decision making.

METHODS

Study Design

From 2001 to 2008, attending physicians at a single academic institution were surveyed at the end of inpatient general medicine teaching rotations.[25] The University of Chicago general medicine service consists of ward teams of an attending physician (internists, hospitalists, or subspecialists), 1 senior resident, and 1 or 2 interns. Attendings serve for 2‐ or 4‐week rotations. Attendings were consented for participation and received a 40‐item, paper‐based survey at the rotation's end. The institutional review board approved this study.

Data Collection

From the 40 survey items, 2 statements were selected for analysis: "The intern(s) were truly involved in decision making about their patients" and "My resident felt that s/he had sufficient autonomy this month." These items have been used in previous work studying attending‐resident dynamics.[19, 26] Attendings also reported demographic and professional information as well as self‐identified hospitalist status, ascertained by the question "Do you consider yourself to be a hospitalist?" Survey month and year were also recorded. We conducted a secondary data analysis of an inclusive sample of responses to the questions of interest.

Statistical Analysis

Descriptive statistics were used to summarize survey responses and demographics. Survey questions consisted of Likert‐type items. Because the distribution of responses was skewed toward strong agreement for both questions, we collapsed scores into 2 categories ("Strongly Agree" and "Do Not Strongly Agree").[19] Perception of sufficient trainee autonomy was defined as a response of "Strongly Agree." The Pearson χ²(1) test was used to compare proportions, and t tests were used to compare mean years since completion of residency and weeks on service between different groups.
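As a concrete illustration of this dichotomization, the following minimal sketch (hypothetical data, not the study dataset; variable names and the 1–5 scale are assumptions) collapses Likert‐type responses and compares two groups with a Pearson chi‐square test:

```python
# Minimal sketch of the analysis described above, on invented data:
# collapse Likert-type scores into "Strongly Agree" vs. "Do Not Strongly
# Agree", then compare two attending groups with a Pearson chi-square test.
import numpy as np
from scipy.stats import chi2_contingency

likert = np.array([2, 3, 2, 5, 3, 5, 5, 4, 5, 5])       # hypothetical item scores (1-5)
hospitalist = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # hypothetical group flag

strongly_agree = (likert == 5).astype(int)  # 5 = "Strongly Agree"

# 2x2 table: rows = hospitalist vs. nonhospitalist, columns = agree vs. not
table = np.array([
    [np.sum((hospitalist == g) & (strongly_agree == s)) for s in (1, 0)]
    for g in (1, 0)
])
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi-square({dof}) = {chi2:.2f}, P = {p:.3f}")
```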

Multivariate logistic regression with stepwise forward selection was used to model the relationship between perception of trainee autonomy and decision making and the following predictors: attending sex, institutional hospitalist designation, years of experience, implementation of duty‐hours restrictions, and academic season. Academic seasons were defined as summer (July–September), fall (October–December), winter (January–March), and spring (April–June).[26] Years of experience were divided into tertiles of years since residency: 0–4 years, 5–11 years, and >11 years. To account for the possibility that the effect of hospitalist specialty varied by experience, interaction terms were constructed. The interaction term hospitalist*early‐career was used as the reference group.
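A minimal sketch of this modeling step (omitting the stepwise selection, using simulated data, and with invented column names rather than the study's variables) might look like the following; the `*` in the formula expands to main effects plus the hospitalist‐experience interaction:

```python
# Illustrative only: a logistic model with a hospitalist-experience
# interaction, mirroring the structure described above on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "strongly_agree": rng.integers(0, 2, n),  # outcome: perceived sufficient autonomy
    "female": rng.integers(0, 2, n),
    "hospitalist": rng.integers(0, 2, n),
    "experience": rng.choice(["0-4", "5-11", ">11"], n),  # tertiles
    "post2003": rng.integers(0, 2, n),        # after duty-hours restriction
    "spring": rng.integers(0, 2, n),          # spring academic season
})

# "*" yields main effects plus the interaction; Treatment("0-4") makes the
# earliest-career tertile the reference group, as in the study.
model = smf.logit(
    "strongly_agree ~ female + post2003 + spring"
    " + C(experience, Treatment('0-4')) * hospitalist",
    data=df,
).fit(disp=False)
print(np.exp(model.params).round(2))  # exponentiated coefficients = odds ratios
```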

RESULTS

Seven hundred thirty‐eight surveys were distributed to attendings on inpatient general medicine teaching services from 2001 to 2008; 70% (n=514) were included in the analysis. Table 1 provides demographic characteristics of the respondents. Roughly half (47%) were female, and 23% were hospitalists. Experience ranged from 0 to 35 years, with a median of 7 years. Weeks on service per year ranged from 1 to 27, with a median of 6 weeks. Hospitalists represented a less‐experienced group of attendings, as their mean experience was 4.5 years (standard deviation [SD] 4.5) compared with 11.2 years (SD 7.7) for nonhospitalists (P<0.001). Hospitalists attended more frequently, with a mean 14.2 weeks on service (SD 6.5) compared with 5.8 weeks (SD 3.4) for nonhospitalists (P<0.001). Nineteen percent (n=98) of surveys were completed prior to the first ACGME duty‐hours restriction in 2003. Responses were distributed fairly equally across the academic year, with 29% completed in summer, 26% in fall, 24% in winter, and 21% in spring.

Attending Physician Demographic Characteristics

| Characteristic                      | Value       |
|-------------------------------------|-------------|
| Female, n (%)                       | 275 (47)    |
| Hospitalist, n (%)                  | 125 (23)    |
| Years since completion of residency |             |
|   Mean, median, SD                  | 9.3, 7, 7.6 |
|   IQR                               | 3–14        |
|   0–4, n (%)                        | 167 (36)    |
|   5–11, n (%)                       | 146 (32)    |
|   >11, n (%)                        | 149 (32)    |
| Weeks on service per year*          |             |
|   Mean, median, SD                  | 8.1, 6, 5.8 |
|   IQR                               | 4–12        |

NOTE: Abbreviations: IQR, interquartile range; SD, standard deviation. Because of missing data, numbers may not correspond to exact percentages. *Data only available beyond academic year 2003–2004.

Forty‐four percent (n=212) of attendings perceived adequate intern involvement in decision making, and 50% (n=238) perceived sufficient resident autonomy. The correlation coefficient between these 2 measures was 0.66.
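The article does not state how this correlation was computed; for two yes/no measures, a Pearson correlation of the 0/1 indicators (the phi coefficient) is one common choice. A toy sketch with invented response vectors:

```python
# Hypothetical illustration: correlating two binary perception measures.
import numpy as np

intern_involved   = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # invented 0/1 responses
resident_autonomy = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # invented 0/1 responses
print(round(np.corrcoef(intern_involved, resident_autonomy)[0, 1], 2))
```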

Attending Factors Associated With Perception of Trainee Autonomy

In univariate analysis, hospitalists perceived sufficient trainee autonomy less frequently than nonhospitalists; 33% perceived adequate intern involvement in decision making compared with 48% of nonhospitalists (χ²(1)=6.7, P=0.01), and 42% perceived sufficient resident autonomy compared with 54% of nonhospitalists (χ²(1)=3.9, P=0.048) (Table 2).
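This statistic can be approximately checked from Table 2: 29 of roughly 88 hospitalists (33%) versus 163 of roughly 340 nonhospitalists (48%) strongly agreed. A quick sketch follows; the denominators are back‐calculated from the reported percentages, so the result differs slightly from the published χ²(1)=6.7:

```python
# Approximate replication of the hospitalist vs. nonhospitalist comparison;
# row totals are back-calculated from Table 2's percentages, not exact.
from scipy.stats import chi2_contingency

table = [
    [29, 88 - 29],     # hospitalists: agree, do not strongly agree
    [163, 340 - 163],  # nonhospitalists
]
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi-square({dof}) = {chi2:.1f}, P = {p:.3f}")  # ~6.3, P ~ 0.012
```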

Attending Characteristics and Time Trends Associated With Perception of Intern Involvement in Decision Making and Resident Autonomy

| Attending characteristics, n (%)     | Agree With Intern Involvement in Decision Making | Agree With Sufficient Resident Autonomy |
|--------------------------------------|--------------------------------------------------|-----------------------------------------|
| Designation                          |                                                  |                                         |
|   Hospitalist                        | 29 (33)                                          | 37 (42)                                 |
|   Nonhospitalist                     | 163 (48)                                         | 180 (54)                                |
| Years since completion of residency  |                                                  |                                         |
|   0–4                                | 37 (27)                                          | 49 (36)                                 |
|   5–11                               | 77 (53)                                          | 88 (61)                                 |
|   >11                                | 77 (53)                                          | 81 (56)                                 |
| Sex                                  |                                                  |                                         |
|   Female                             | 98 (46)                                          | 100 (47)                                |
|   Male                               | 113 (43)                                         | 138 (53)                                |
| Secular factors, n (%)               |                                                  |                                         |
|   Pre‐2003 duty‐hours restrictions   | 56 (57)                                          | 62 (65)                                 |
|   Post‐2003 duty‐hours restrictions  | 156 (41)                                         | 176 (46)                                |
| Season of survey                     |                                                  |                                         |
|   Summer (July–September)            | 61 (45)                                          | 69 (51)                                 |
|   Fall (October–December)            | 53 (42)                                          | 59 (48)                                 |
|   Winter (January–March)             | 42 (37)                                          | 52 (46)                                 |
|   Spring (April–June)                | 56 (54)                                          | 58 (57)                                 |

NOTE: Because of missing data, numbers may not correspond to exact percentages.

Perception of trainee autonomy increased with experience (Table 2). About 30% of early‐career attendings (0–4 years of experience) perceived sufficient autonomy and involvement in decision making, compared with >50% agreement in the later‐career tertiles (intern decision making: χ²(2)=25.1, P<0.001; resident autonomy: χ²(2)=18.9, P<0.001). Attendings perceiving more intern involvement in decision making had a mean 11 years of experience (SD 7.1), whereas those perceiving less had a mean of 8.8 years (SD 7.8; P=0.003). The pattern was similar for perception of resident autonomy (10.6 years [SD 7.2] vs 8.9 years [SD 7.8], P=0.021).

Sex was not associated with differences in perception of intern decision making (χ²(1)=0.39, P=0.53) or resident autonomy (χ²(1)=1.4, P=0.236) (Table 2).

Secular Factors Associated With Perception of Trainee Autonomy

The implementation of duty‐hour restrictions in 2003 was associated with decreased attending perception of autonomy. Only 41% of attendings perceived adequate intern involvement in decision making following the restrictions, compared with 57% before the restrictions were instituted (χ²(1)=8.2, P=0.004). Similarly, 46% of attendings agreed with sufficient resident autonomy post‐duty hours, compared with 65% prior (χ²(1)=10.1, P=0.001) (Table 2).

Academic season was also associated with differences in perception of autonomy (Table 2). In spring, 54% of attendings perceived adequate intern involvement in decision making, compared with 42% in the other seasons combined (χ²(1)=5.34, P=0.021). Perception of resident autonomy was also higher in spring, though this difference was not statistically significant (57% in spring vs 48% in the other seasons; χ²(1)=2.37, P=0.123).

Multivariate Analyses

Variation in attending perception of housestaff autonomy by attending characteristics persisted in multivariate analysis. Table 3 shows odds ratios (ORs) for perception of adequate intern involvement in decision making and sufficient resident autonomy. Sex was not a significant predictor of agreement with either statement. The odds that an attending would perceive adequate intern involvement in decision making were higher for later‐career attendings than for early‐career attendings (ie, 0–4 years); attendings who completed residency 5–11 years ago were more than twice as likely to perceive adequate involvement (OR: 2.16, 95% CI: 1.17–3.97, P=0.013), as were those >11 years from residency (OR: 2.05, 95% CI: 1.16–3.63, P=0.014). Later‐career attendings also had nonsignificantly higher odds of perceiving sufficient resident autonomy compared with early‐career attendings (5–11 years, OR: 1.73, 95% CI: 0.96–3.14, P=0.07; >11 years, OR: 1.50, 95% CI: 0.86–2.62, P=0.154).

Association Between Agreement With Housestaff Autonomy and Attending Characteristics and Secular Factors

| Covariate                              | Interns Involved With Decision Making: OR (95% CI) | P Value | Resident Had Sufficient Autonomy: OR (95% CI) | P Value |
|----------------------------------------|----------------------------------------------------|---------|------------------------------------------------|---------|
| Attending characteristics              |                                                    |         |                                                |         |
|   0–4 years of experience (referent)   |                                                    |         |                                                |         |
|   5–11 years of experience             | 2.16 (1.17–3.97)                                   | 0.013   | 1.73 (0.96–3.14)                               | 0.07    |
|   >11 years of experience              | 2.05 (1.16–3.63)                                   | 0.014   | 1.50 (0.86–2.62)                               | 0.154   |
|   Hospitalist                          | 0.19 (0.06–0.58)                                   | 0.004   | 0.27 (0.11–0.66)                               | 0.004   |
|   Hospitalist × 0–4 years (referent)   |                                                    |         |                                                |         |
|   Hospitalist × 5–11 years             | 7.36 (1.86–29.1)                                   | 0.004   | 5.85 (1.75–19.6)                               | 0.004   |
|   Hospitalist × >11 years              | 21.2 (1.73–260)                                    | 0.017   | 14.4 (1.31–159)                                | 0.029   |
|   Female sex                           | 1.41 (0.92–2.17)                                   | 0.115   | 0.92 (0.60–1.40)                               | 0.69    |
| Secular factors                        |                                                    |         |                                                |         |
|   Post‐2003 duty hours                 | 0.51 (0.29–0.87)                                   | 0.014   | 0.49 (0.28–0.86)                               | 0.012   |
|   Spring academic season               | 1.94 (1.18–3.19)                                   | 0.009   | 1.59 (0.97–2.60)                               | 0.064   |

NOTE: Abbreviations: CI, confidence interval; OR, odds ratio. Multivariate logistic regression models estimated the association between sex, years of experience, hospitalist specialty, duty hours, academic season, and the hospitalist‐experience interaction and attending agreement with intern involvement in decision making (first model) and with sufficient resident autonomy (second model). Male sex was the reference group. Experience was divided into tertiles of years since completion of residency (0–4, 5–11, and >11 years), with the first tertile as the reference group; hospitalist × 0–4 years of experience was the reference group for the interaction terms. The duty‐hours covariate indicates responses after implementation of the 2003 duty‐hours restriction. Academic season was analyzed as spring (April–June) compared with the other seasons.

Hospitalists were associated with 81% lower odds of perceiving adequate intern involvement in decision making (OR: 0.19, 95% CI: 0.06–0.58, P=0.004) and 73% lower odds of perceiving sufficient resident autonomy compared with nonhospitalists (OR: 0.27, 95% CI: 0.11–0.66, P=0.004). However, there was a significant interaction between hospitalist status and experience; compared with early‐career hospitalists, experienced hospitalists had higher odds of perceiving both adequate intern involvement in decision making (5–11 years, OR: 7.36, 95% CI: 1.86–29.1, P=0.004; >11 years, OR: 21.2, 95% CI: 1.73–260, P=0.017) and sufficient resident autonomy (5–11 years, OR: 5.85, 95% CI: 1.75–19.6, P=0.004; >11 years, OR: 14.4, 95% CI: 1.31–159, P=0.029) (Table 3).
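One way to read these estimates together (a worked illustration under the usual multiplicative structure of logistic regression, not a computation reported in the article): the net odds ratio for, say, a 5–11‐year hospitalist relative to an early‐career nonhospitalist multiplies the two main effects by the interaction term.

```python
# Combining Table 3's intern-involvement estimates (illustration only).
or_hospitalist = 0.19    # hospitalist main effect
or_experience = 2.16     # 5-11 years main effect
or_interaction = 7.36    # hospitalist x 5-11 years interaction

net_or = or_hospitalist * or_experience * or_interaction
print(round(net_or, 2))  # ~3.02: net higher odds despite the 0.19 main effect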

Secular trends also remained associated with differences in perception of housestaff autonomy (Table 3). Attendings had 49% lower odds of perceiving adequate intern involvement in decision making in the years following duty‐hour limits compared with the years prior (OR: 0.51, 95% CI: 0.29–0.87, P=0.014). Similarly, odds of perceiving sufficient resident autonomy were 51% lower post‐duty hours (OR: 0.49, 95% CI: 0.28–0.86, P=0.012). Spring season was associated with 94% higher odds of perceiving adequate intern involvement in decision making compared with other seasons (OR: 1.94, 95% CI: 1.18–3.19, P=0.009). There were also nonsignificantly higher odds of perceiving sufficient resident autonomy in spring (OR: 1.59, 95% CI: 0.97–2.60, P=0.064). To address the possibility that these associations reflected repeated measures of the same attendings, models with attending fixed effects were also fit. Clustering by attending, the associations between duty hours and perceiving sufficient resident autonomy and intern decision making both remained significant, but the association of spring season did not.
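For clarity, the "% lower odds" phrasing used here and above is simple arithmetic on the odds ratios, shown for the duty‐hours estimates:

```python
# Converting odds ratios to the "% lower odds" phrasing used in the text.
for label, odds_ratio in [("intern decision making", 0.51),
                          ("resident autonomy", 0.49)]:
    print(f"{label}: OR {odds_ratio} -> {round((1 - odds_ratio) * 100)}% lower odds")
```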

DISCUSSION

This study highlights that attendings' perception of housestaff autonomy varies by attending characteristics and secular trends. Specifically, early‐career attendings and hospitalists were less likely to perceive sufficient housestaff autonomy and involvement in decision making. However, there was a significant hospitalist‐experience interaction, such that more‐experienced hospitalists were associated with higher odds of perceiving sufficient autonomy than would be expected from the effect of experience alone. With respect to secular trends, attendings perceived more trainee autonomy in the last quarter of the academic year, and less autonomy after implementation of resident duty‐hour restrictions in 2003.

As Entrustable Professional Activities bring a new emphasis on the notion of entrustment, it will be critical to ensure that attending assessment of resident performance is uniform and valid in judging when to entrust autonomy.[27, 28] If, as these findings suggest, perception of autonomy varies with attending characteristics, all faculty may benefit from strategies to standardize assessment and evaluation skills so that trainees progress appropriately through the various milestones toward competence. Our results suggest that faculty development may be particularly important for early‐career attendings, and especially hospitalists.

Early‐career attendings may perceive less housestaff autonomy because of a reluctance to relinquish control over patient‐care duties and decision making when the attending is only a few years removed from residency. Hospitalists are relatively junior at most institutions and may resemble early‐career attendings in that regard. It is noteworthy, however, that experienced hospitalists were associated with even greater perception of autonomy than would be predicted by years of experience alone. Hospitalists may gain experience at a faster rate than nonhospitalists, which could affect how they perceive autonomy and decision making in trainees and make them more comfortable entrusting autonomy to housestaff. Early‐career hospitalists likely represent a heterogeneous group of physicians, including both 1‐year clinical hospitalists and career academic hospitalists, who may approach managing housestaff teams differently. Residents are less likely to fear that hospitalists will limit their autonomy once they have worked with hospitalists as teaching attendings, and our findings may suggest a corollary: hospitalists may be more likely to perceive sufficient autonomy with more exposure to working with housestaff.[19]

Attendings perceived less housestaff autonomy following the 2003 duty‐hour limits. This may be because attendings assumed more responsibilities traditionally performed by residents.[26, 29] This shifting of responsibility may lead to perception of less‐active housestaff decision making and less‐evident autonomy. These findings suggest autonomy may become even more restricted after implementation of the 2011 duty‐hour restrictions, which limited interns to 16‐hour shifts.[5] Further studies are warranted to examine the effect of these new limits. Entrustment of autonomy and allowance for decision making are essential parts of any learning environment that allows residents to develop clinical reasoning skills, and it will be critical to adopt new strategies to encourage professional growth of housestaff in this new era.[30]

Attendings also perceived autonomy differently by academic season. Spring is the season in which housestaff are most experienced and attendings may be most familiar with individual team members. Additionally, there may be a stronger emphasis on supervision and adherence to traditional hierarchy earlier in the academic year, as interns and junior residents are learning their new roles.[30] These findings may have implications for system changes to support development of more functional educational dyads between attendings and trainees, especially early in the academic year.[31]

There are several limitations to our findings. This is a single‐institution study restricted to the general medicine service, so generalizability is limited. Our outcome measures, the survey items of interest, address perception of housestaff autonomy but not the appropriateness of that autonomy, an important construct in entrustment. Additionally, self‐reported answers could be subject to recall bias. Although data were collected over 8 years, the most recent trends in residency training are not reflected. Although there was a significant interaction involving experienced hospitalists, the wide confidence intervals and large standard errors likely reflect the few individuals in this category: despite the large number of overall respondents, the interaction terms included few advanced‐career hospitalists, likely owing to hospital medicine's relative youth as a specialty.

As this study focuses only on perception of autonomy, future work must investigate autonomy from a practical standpoint. It is conceivable that if factors such as attending characteristics and secular trends influence perception, they may also be associated with variation in how attendings entrust autonomy and provide supervision. To what extent perception and practice are linked remains to be studied, but it will be important to determine whether variation due to these factors is also associated with inconsistent and uneven supervisory practices that could adversely affect resident education and patient safety.

Finally, future work must include the viewpoint of the recipients of autonomy: the residents and interns. A significant limitation of the current study is the lack of the resident perspective, as our survey was administered only to attendings. Autonomy is clearly a 2‐way relationship, and attending perception must be corroborated by the resident's experience. It is possible that attendings may perceive that their housestaff have sufficient autonomy, whereas residents may view this autonomy as inappropriate or unavoidable because of an absentee attending who does not adequately supervise.[32] Future work must examine how resident and attending perceptions of autonomy correlate, and whether discordance or concordance in these perceptions influences satisfaction with attending‐resident relationships, education, and patient care.

In conclusion, significant variation existed among attending physicians with respect to perception of housestaff autonomy, an important aspect of entrustment and clinical supervision. This variation was associated with hospitalist status and with level of attending experience, and there was a significant interaction between these 2 factors. Additionally, secular trends were associated with differences in perception of autonomy. As entrustment of residents with progressive levels of autonomy becomes more integrated within the requirements for advancement in residency, a greater understanding of the factors affecting entrustment will be critical in helping faculty develop skills to appropriately assess trainee professional growth and development.

Acknowledgments

The authors thank all members of the Multicenter Hospitalist Project for their assistance with this project.

Disclosures: The authors acknowledge funding from the AHRQ/CERT 5 U18 HS016967‐01. The funder had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve publication of the finished manuscript. Prior presentations of the data include the 2012 Department of Medicine Research Day at the University of Chicago, the 2012 Society of Hospital Medicine Annual Meeting in San Diego, California, and the 2012 Midwest Society of General Medicine Meeting in Chicago, Illinois. All coauthors have seen and agree with the contents of the manuscript. The submission was not under review by any other publication. The authors report no conflicts of interest.

References
  1. Kilminster SM, Jolly BC. Effective supervision in clinical practice settings: a literature review. Med Educ. 2000;34(10):827–840.
  2. Ericsson KA. Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med. 2008;15(11):988–994.
  3. Kennedy TJ, Regehr G, Baker GR, et al. Progressive independence in clinical training: a tradition worth defending? Acad Med. 2005;80(10 suppl):S106–S111.
  4. Committee on Optimizing Graduate Medical Trainee (Resident) Hours and Work Schedules to Improve Patient Safety, Institute of Medicine. Ulmer C, Wolman D, Johns M, eds. Resident Duty Hours: Enhancing Sleep, Supervision, and Safety. Washington, DC: National Academies Press; 2008.
  5. Nasca TJ, Day SH, Amis ES; ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363(2):e3.
  6. Haun SE. Positive impact of pediatric critical care fellows on mortality: is it merely a function of resident supervision? Crit Care Med. 1997;25(10):1622–1623.
  7. Sox CM, Burstin HR, Orav EJ, et al. The effect of supervision of residents on quality of care in five university‐affiliated emergency departments. Acad Med. 1998;73(7):776–782.
  8. Phy MP, Offord KP, Manning DM, et al. Increased faculty presence on inpatient teaching services. Mayo Clin Proc. 2004;79(3):332–336.
  9. Busari JO, Weggelaar NM, Knottnerus AC, et al. How medical residents perceive the quality of supervision provided by attending doctors in the clinical setting. Med Educ. 2005;39(7):696–703.
  10. Fallon WF, Wears RL, Tepas JJ. Resident supervision in the operating room: does this impact on outcome? J Trauma. 1993;35(4):556–560.
  11. Schmidt UH, Kumwilaisak K, Bittner E, et al. Effects of supervision by attending anesthesiologists on complications of emergency tracheal intubation. Anesthesiology. 2008;109(6):973–977.
  12. Velmahos GC, Fili C, Vassiliu P, et al. Around‐the‐clock attending radiology coverage is essential to avoid mistakes in the care of trauma patients. Am Surg. 2001;67(12):1175–1177.
  13. Gennis VM, Gennis MA. Supervision in the outpatient clinic: effects on teaching and patient care. J Gen Intern Med. 1993;8(7):378–380.
  14. Paukert JL, Richards BF. How medical students and residents describe the roles and characteristics of their influential clinical teachers. Acad Med. 2000;75(8):843–845.
  15. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87(4):428–442.
  16. Farnan JM, Burger A, Boonayasai RT, et al; for the SGIM Housestaff Oversight Subcommittee. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7(7):521–523.
  17. Haber LA, Lau CY, Sharpe B, et al. Effects of increased overnight supervision on resident education, decision‐making, and autonomy. J Hosp Med. 2012;7(8):606–610.
  18. Trowbridge RL, Almeder L, Jacquet M, et al. The effect of overnight in‐house attending coverage on perceptions of care and education on a general medical service. J Grad Med Educ. 2010;2(1):53–56.
  19. Chung P, Morrison J, Jin L, et al. Resident satisfaction on an academic hospitalist service: time to teach. Am J Med. 2002;112(7):597–601.
  20. Nasca TJ, Philibert I, Brigham T, et al. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366(11):1051–1056.
  21. Ten Cate O, Scheele F. Competency‐based postgraduate training: can we bridge the gap between theory and clinical practice? Acad Med. 2007;82(6):542–547.
  22. Ten Cate O. Trust, competence, and the supervisor's role in postgraduate training. BMJ. 2006;333(7571):748–751.
  23. Kashner TM, Byrne JM, Chang BK, et al. Measuring progressive independence with the resident supervision index: empirical approach. J Grad Med Educ. 2010;2(1):17–30.
  24. Wachter RM, Goldman L. The emerging role of "hospitalists" in the American health care system. N Engl J Med. 1996;335(7):514–517.
  25. Arora V, Meltzer D. Effect of ACGME duty hours on attending physician teaching and satisfaction. Arch Intern Med. 2008;168(11):1226–1227.
  26. Arora VM, Georgitis E, Siddique J, et al. Association of workload of on‐call interns with on‐call sleep duration, shift duration, and participation in educational activities. JAMA. 2008;300(10):1146–1153.
  27. Ten Cate O. Entrustability of professional activities and competency‐based training. Med Educ. 2005;39:1176–1177.
  28. Sterkenburg A, Barach P, Kalkman C, et al. When do supervising physicians decide to entrust residents with unsupervised tasks? Acad Med. 2010;85(9):1399–1400.
  29. Reed D, Levine R, et al. Effect of residency duty‐hour limits. Arch Intern Med. 2007;167(14):1487–1492.
  30. Wilkerson L, Irby DM. Strategies for improving teaching practices: a comprehensive approach to faculty development. Acad Med. 1998;73:387–396.
  31. Kilminster S, Jolly B, van der Vleuten CP. A framework for effective training for supervisors. Med Teach. 2002;24:385–389.
  32. Farnan JM, Johnson JK, Meltzer DO, et al. On‐call supervision and resident autonomy: from micromanager to absentee attending. Am J Med. 2009;122(8):784–788.
Issue
Journal of Hospital Medicine - 8(6)
Page Number
292-297


There are several limitations to our findings. This is a single‐institution study restricted to the general‐medicine service; thus generalizability is limited. Our outcome measures, the survey items of interest, question perception of housestaff autonomy but do not query the appropriateness of that autonomy, an important construct in entrustment. Additionally, self‐reported answers could be subject to recall bias. Although data were collected over 8 years, the most recent trends of residency training are not reflected. Although there was a significant interaction involving experienced hospitalists, wide confidence intervals and large standard errors likely reflect the relatively few individuals in this category. Though there was a large number of overall respondents, our interaction terms included few advanced‐career hospitalists, likely secondary to hospital medicine's relative youth as a specialty.

As this study focuses only on perception of autonomy, future work must investigate autonomy from a practical standpoint. It is conceivable that if factors such as attending characteristics and secular trends influence perception, they may also be associated with variation in how attendings entrust autonomy and provide supervision. To what extent perception and practice are linked remains to be studied, but it will be important to determine if variation due to these factors may also be associated with inconsistent and uneven supervisory practices that would adversely affect resident education and patient safety.

Finally, future work must include the viewpoint of the recipients of autonomy: the residents and interns. A significant limitation of the current study is the lack of the resident perspective, as our survey was only administered to attendings. Autonomy is clearly a 2‐way relationship, and attending perception must be corroborated by the resident's experience. It is possible attendings may perceive that their housestaff have sufficient autonomy, but residents may view this autonomy as inappropriate or unavoidable due an absentee attending who does not adequately supervise.[32] Future work must examine how resident and attending perceptions of autonomy correlate, and whether discordance or concordance in these perceptions influence satisfaction with attending‐resident relationships, education, and patient care.

In conclusion, significant variation existed among attending physicians with respect to perception of housestaff autonomy, an important aspect of entrustment and clinical supervision. This variation was present for hospitalists, among different levels of attending experience, and a significant interaction was found between these 2 factors. Additionally, secular trends were associated with differences in perception of autonomy. As entrustment of residents with progressive levels of autonomy becomes more integrated within the requirements for advancement in residency, a greater understanding of factors affecting entrustment will be critical in helping faculty develop skills to appropriately assess trainee professional growth and development.

Acknowledgments

The authors thank all members of the Multicenter Hospitalist Project for their assistance with this project.

Disclosures: The authors acknowledge funding from the AHRQ/CERT 5 U18 HS016967‐01. The funder had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve publication of the finished manuscript. Prior presentations of the data include the 2012 Department of Medicine Research Day at the University of Chicago, the 2012 Society of Hospital Medicine Annual Meeting in San Diego, California, and the 2012 Midwest Society of General Medicine Meeting in Chicago, Illinois. All coauthors have seen and agree with the contents of the manuscript. The submission was not under review by any other publication. The authors report no conflicts of interest.

Clinical supervision in graduate medical education (GME) emphasizes patient safety while promoting development of clinical expertise by allowing trainees progressive independence.[1, 2, 3] The importance of the balance between supervision and autonomy has been recognized by accreditation organizations, namely the Institute of Medicine and the Accreditation Council for Graduate Medical Education (ACGME).[4, 5] However, little is known of best practices in supervision, and the model of progressive independence in clinical training lacks empirical support.[3] Limited evidence suggests that enhanced clinical supervision may have positive effects on patient and education‐related outcomes.[6, 7, 8, 9, 10, 11, 12, 13, 14, 15] However, a more nuanced understanding of potential effects of enhanced supervision on resident autonomy and decision making is still required, particularly as preliminary work on increased on‐site hospitalist supervision has yielded mixed results.[16, 17, 18, 19]

Understanding how trainees are entrusted with autonomy will be integral to the ACGME's Next Accreditation System.[20] Entrustable Professional Activities are benchmarks by which resident readiness to progress through training will be judged.[21] The extent to which trainees are entrusted with autonomy is largely determined by the subjective assessment of immediate supervisors, as autonomy is rarely measured or quantified.[3, 22, 23] This judgment of autonomy, most frequently performed by ward attendings, may be subject to significant variation and influenced by factors other than the resident's competence and clinical abilities.

To that end, it is worth considering what factors may affect attending perception of housestaff autonomy and decision making. Recent changes in the GME environment and policy implementation have altered the landscape of the attending workforce considerably. The growth of the hospitalist movement in teaching hospitals, in part due to duty-hour limits, has led to more residents being supervised by hospitalists, who may perceive trainee autonomy differently than other attendings do.[24] This study aims to examine whether factors such as attending demographics and short‐term and long‐term secular trends influence attending perception of housestaff autonomy and participation in decision making.

METHODS

Study Design

From 2001 to 2008, attending physicians at a single academic institution were surveyed at the end of inpatient general medicine teaching rotations.[25] The University of Chicago general medicine service consists of ward teams of an attending physician (an internist, hospitalist, or subspecialist), 1 senior resident, and 1 or 2 interns. Attendings serve for 2‐ or 4‐week rotations. Attendings consented to participation and received a 40‐item, paper‐based survey at the rotation's end. The institutional review board approved this study.

Data Collection

From the 40 survey items, 2 statements were selected for analysis: "The intern(s) were truly involved in decision making about their patients" and "My resident felt that s/he had sufficient autonomy this month." These items have been used in previous work studying attending‐resident dynamics.[19, 26] Attendings also reported demographic and professional information as well as self‐identified hospitalist status, ascertained by the question "Do you consider yourself to be a hospitalist?" Survey month and year were also recorded. We conducted a secondary data analysis of an inclusive sample of responses to the questions of interest.

Statistical Analysis

Descriptive statistics were used to summarize survey responses and demographics. Survey questions consisted of Likert‐type items. Because the distribution of responses was skewed toward strong agreement for both questions, we collapsed scores into 2 categories ("Strongly Agree" and "Do Not Strongly Agree").[19] Perception of sufficient trainee autonomy was defined as a response of "Strongly Agree." The Pearson χ2 test was used to compare proportions, and t tests were used to compare mean years since completion of residency and weeks on service between different groups.
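For illustration, the following is a minimal sketch (not the authors' code) of this dichotomization and proportion comparison. The DataFrame and its columns (`intern_decisions`, `resident_autonomy`, `hospitalist`) are hypothetical stand-ins for the survey fields, with the top response category representing "Strongly Agree."

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical toy data: Likert responses (5 = "Strongly Agree") and a
# hospitalist indicator for each survey.
surveys = pd.DataFrame({
    "intern_decisions": [5, 4, 5, 3, 5, 2, 4, 5],
    "resident_autonomy": [5, 5, 4, 3, 5, 3, 4, 5],
    "hospitalist": [1, 0, 0, 1, 0, 1, 0, 0],
})

# Collapse skewed Likert responses into "Strongly Agree" vs "Do Not Strongly Agree".
surveys["perceives_involvement"] = (surveys["intern_decisions"] == 5).astype(int)
surveys["perceives_autonomy"] = (surveys["resident_autonomy"] == 5).astype(int)

# Pearson chi-square test comparing proportions between hospitalists and others.
table = pd.crosstab(surveys["hospitalist"], surveys["perceives_involvement"])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, P = {p:.3f}")
```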

Multivariate logistic regression with stepwise forward regression was used to model the relationship between attending sex, institutional hospitalist designation, years of experience, implementation of duty‐hours restrictions, and academic season and perception of trainee autonomy and decision making. Academic seasons were defined as summer (July–September), fall (October–December), winter (January–March), and spring (April–June).[26] Years of experience were divided into tertiles of years since residency: 0–4 years, 5–11 years, and >11 years. To account for the possibility that the effect of hospitalist specialty varied by experience, interaction terms were constructed; the interaction term hospitalist × early‐career was used as the reference group.
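A hedged sketch of a model of this form is below, using the statsmodels formula API. It omits the stepwise selection step, and all column names (`strong_agree`, `years_since_residency`, `hospitalist`, `post2003`, `spring`, `female`) are hypothetical stand-ins for the survey fields, not the authors' variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("attending_surveys.csv")  # hypothetical deidentified extract

# Tertiles of years since residency, with 0-4 years as the reference group.
df["tertile"] = pd.cut(df["years_since_residency"],
                       bins=[-1, 4, 11, 100], labels=["0-4", "5-11", ">11"])

# hospitalist:C(tertile) lets the hospitalist effect vary with experience,
# mirroring the interaction terms described above.
model = smf.logit(
    "strong_agree ~ female + C(tertile) + hospitalist"
    " + hospitalist:C(tertile) + post2003 + spring",
    data=df,
).fit(disp=False)

# Odds ratios and 95% confidence intervals, in the spirit of Table 3.
or_ci = np.exp(model.conf_int())
or_ci["OR"] = np.exp(model.params)
print(or_ci)
```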

RESULTS

Seven hundred thirty‐eight surveys were distributed to attendings on inpatient general medicine teaching services from 2001 to 2008; 70% (n=514) were included in the analysis. Table 1 provides demographic characteristics of the respondents. Roughly half (47%) were female, and 23% were hospitalists. Experience ranged from 0 to 35 years, with a median of 7 years. Weeks on service per year ranged from 1 to 27, with a median of 6 weeks. Hospitalists represented a less‐experienced group of attendings, as their mean experience was 4.5 years (standard deviation [SD] 4.5) compared with 11.2 years (SD 7.7) for nonhospitalists (P<0.001). Hospitalists attended more frequently, with a mean 14.2 weeks on service (SD 6.5) compared with 5.8 weeks (SD 3.4) for nonhospitalists (P<0.001). Nineteen percent (n=98) of surveys were completed prior to the first ACGME duty‐hours restriction in 2003. Responses were distributed fairly equally across the academic year, with 29% completed in summer, 26% in fall, 24% in winter, and 21% in spring.

Table 1. Attending Physician Demographic Characteristics

Characteristic | Value
Female, n (%) | 275 (47)
Hospitalist, n (%) | 125 (23)
Years since completion of residency |
  Mean, median, SD | 9.3, 7, 7.6
  IQR | 3–14
  0–4 years, n (%) | 167 (36)
  5–11 years, n (%) | 146 (32)
  >11 years, n (%) | 149 (32)
Weeks on service per year* |
  Mean, median, SD | 8.1, 6, 5.8
  IQR | 4–12

NOTE: Abbreviations: IQR, interquartile range; SD, standard deviation. Because of missing data, numbers may not correspond to exact percentages. *Data only available beyond academic year 2003–2004.

Forty‐four percent (n=212) of attendings perceived adequate intern involvement in decision making, and 50% (n=238) perceived sufficient resident autonomy. The correlation coefficient between these 2 measures was 0.66.

Attending Factors Associated With Perception of Trainee Autonomy

In univariate analysis, hospitalists perceived sufficient trainee autonomy less frequently than nonhospitalists; 33% perceived adequate intern involvement in decision making compared with 48% of nonhospitalists (χ2(1) = 6.7, P = 0.01), and 42% perceived sufficient resident autonomy compared with 54% of nonhospitalists (χ2(1) = 3.9, P = 0.048) (Table 2).

Table 2. Attending Characteristics and Time Trends Associated With Perception of Intern Involvement in Decision Making and Resident Autonomy

Characteristic, n (%) | Agree With Intern Involvement in Decision Making | Agree With Sufficient Resident Autonomy
Designation |  |
  Hospitalist | 29 (33) | 37 (42)
  Nonhospitalist | 163 (48) | 180 (54)
Years since completion of residency |  |
  0–4 | 37 (27) | 49 (36)
  5–11 | 77 (53) | 88 (61)
  >11 | 77 (53) | 81 (56)
Sex |  |
  F | 98 (46) | 100 (47)
  M | 113 (43) | 138 (53)
Secular factors |  |
  Pre‐2003 duty‐hours restrictions | 56 (57) | 62 (65)
  Post‐2003 duty‐hours restrictions | 156 (41) | 176 (46)
Season of survey |  |
  Summer (July–September) | 61 (45) | 69 (51)
  Fall (October–December) | 53 (42) | 59 (48)
  Winter (January–March) | 42 (37) | 52 (46)
  Spring (April–June) | 56 (54) | 58 (57)

NOTE: Abbreviations: F, female; M, male. Because of missing data, numbers may not correspond to exact percentages.

Perception of trainee autonomy increased with experience (Table 2). About 30% of early‐career attendings (0–4 years of experience) perceived sufficient autonomy and involvement in decision making, compared with more than 50% agreement in the later‐career tertiles (intern decision making: χ2(2) = 25.1, P < 0.001; resident autonomy: χ2(2) = 18.9, P < 0.001). Attendings perceiving more intern involvement in decision making had a mean 11 years of experience (SD 7.1), whereas those perceiving less had a mean of 8.8 years (SD 7.8; P = 0.003). A similar pattern held for perception of resident autonomy (10.6 years [SD 7.2] vs 8.9 years [SD 7.8], P = 0.021).

Sex was not associated with differences in perception of intern decision making (χ2(1) = 0.39, P = 0.53) or resident autonomy (χ2(1) = 1.4, P = 0.236) (Table 2).

Secular Factors Associated With Perception of Trainee Autonomy

The implementation of duty‐hour restrictions in 2003 was associated with decreased attending perception of autonomy. Only 41% of attendings perceived adequate intern involvement in decision making following the restrictions, compared with 57% before the restrictions were instituted (χ2(1) = 8.2, P = 0.004). Similarly, 46% of attendings agreed with sufficient resident autonomy post‐duty hours, compared with 65% prior (χ2(1) = 10.1, P = 0.001) (Table 2).

Academic season was also associated with differences in perception of autonomy (Table 2). In spring, 54% of attendings perceived adequate intern involvement in decision making, compared with 42% in the other seasons combined (χ2(1) = 5.34, P = 0.021). Perception of resident autonomy was also higher in spring, though this difference was not statistically significant (57% in spring vs 48% in the other seasons; χ2(1) = 2.37, P = 0.123).

Multivariate Analyses

Variation in attending perception of housestaff autonomy by attending characteristics persisted in multivariate analysis. Table 3 shows ORs for perception of adequate intern involvement in decision making and sufficient resident autonomy. Sex was not a significant predictor of agreement with either statement. The odds that an attending would perceive adequate intern involvement in decision making were higher for later‐career attendings than for early‐career attendings (ie, 0–4 years); attendings who completed residency 5–11 years ago had 2.16 times the odds of perceiving adequate involvement (OR: 2.16, 95% CI: 1.17–3.97, P = 0.013), and those >11 years from residency had 2.05 times the odds (OR: 2.05, 95% CI: 1.16–3.63, P = 0.014). Later‐career attendings also had nonsignificantly higher odds of perceiving sufficient resident autonomy compared with early‐career attendings (5–11 years: OR: 1.73, 95% CI: 0.96–3.14, P = 0.07; >11 years: OR: 1.50, 95% CI: 0.86–2.62, P = 0.154).

Table 3. Association Between Agreement With Housestaff Autonomy and Attending Characteristics and Secular Factors

Covariate | Interns Involved With Decision Making, OR (95% CI) | P Value | Resident Had Sufficient Autonomy, OR (95% CI) | P Value
Attending characteristics |  |  |  |
0–4 years of experience | Reference |  | Reference |
5–11 years of experience | 2.16 (1.17–3.97) | 0.013 | 1.73 (0.96–3.14) | 0.07
>11 years of experience | 2.05 (1.16–3.63) | 0.014 | 1.50 (0.86–2.62) | 0.154
Hospitalist | 0.19 (0.06–0.58) | 0.004 | 0.27 (0.11–0.66) | 0.004
Hospitalist × 0–4 years of experience* | Reference |  | Reference |
Hospitalist × 5–11 years of experience* | 7.36 (1.86–29.1) | 0.004 | 5.85 (1.75–19.6) | 0.004
Hospitalist × >11 years of experience* | 21.2 (1.73–260) | 0.017 | 14.4 (1.31–159) | 0.029
Female sex | 1.41 (0.92–2.17) | 0.115 | 0.92 (0.60–1.40) | 0.69
Secular factors |  |  |  |
Post‐2003 duty hours | 0.51 (0.29–0.87) | 0.014 | 0.49 (0.28–0.86) | 0.012
Spring academic season | 1.94 (1.18–3.19) | 0.009 | 1.59 (0.97–2.60) | 0.064

NOTE: Abbreviations: CI, confidence interval; OR, odds ratio. Two multivariate logistic regression models were fit: the first estimates the association of sex, years of experience, hospitalist specialty, duty hours, academic season, and the hospitalist × experience interaction with attending agreement that interns were involved in decision making; the second estimates the association of the same factors with attending agreement that the resident had sufficient autonomy. Male sex was the reference group. Experience was divided into tertiles of years since completion of residency (0–4, 5–11, and >11 years), with the first tertile as the reference group. The duty‐hours covariate indicates responses after implementation of the 2003 duty‐hours restriction. Academic season was studied as spring (April–June) compared with the other seasons. *Interaction between hospitalist specialty and experience; hospitalist × 0–4 years of experience was the reference group.

Hospitalists were associated with 81% lower odds of perceiving adequate intern involvement in decision making (OR: 0.19, 95% CI: 0.06–0.58, P = 0.004) and 73% lower odds of perceiving sufficient resident autonomy compared with nonhospitalists (OR: 0.27, 95% CI: 0.11–0.66, P = 0.004). However, there was a significant interaction between hospitalists and experience; compared with early‐career hospitalists, experienced hospitalists had higher odds of perceiving both adequate intern involvement in decision making (5–11 years: OR: 7.36, 95% CI: 1.86–29.1, P = 0.004; >11 years: OR: 21.2, 95% CI: 1.73–260, P = 0.017) and sufficient resident autonomy (5–11 years: OR: 5.85, 95% CI: 1.75–19.6, P = 0.004; >11 years: OR: 14.4, 95% CI: 1.31–159, P = 0.029) (Table 3).

Secular trends also remained associated with differences in perception of housestaff autonomy (Table 3). Attendings had 49% lower odds of perceiving adequate intern involvement in decision making in the years following duty‐hour limits compared with the years prior (OR: 0.51, 95% CI: 0.29–0.87, P = 0.014). Similarly, odds of perceiving sufficient resident autonomy were 51% lower post‐duty hours (OR: 0.49, 95% CI: 0.28–0.86, P = 0.012). Spring season was associated with 94% higher odds of perceiving adequate intern involvement in decision making compared with other seasons (OR: 1.94, 95% CI: 1.18–3.19, P = 0.009). There were also nonsignificant higher odds of perception of sufficient resident autonomy in spring (OR: 1.59, 95% CI: 0.97–2.60, P = 0.064). To address the possibility of associations due to secular trends resulting from repeated measures of attendings, models using attending fixed effects were also used. Clustering by attending, the associations between duty hours and perceiving sufficient resident autonomy and intern decision making both remained significant, but the association of spring season did not.
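A minimal sketch of a fixed-effects sensitivity check of this kind is below; it is not the authors' code, and the file and column names (`attending_surveys.csv`, `strong_agree`, `attending_id`, etc.) are hypothetical. It absorbs repeated measures of the same attending with attending indicator variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("attending_surveys.csv")  # hypothetical deidentified extract

# C(attending_id) adds one indicator per attending, so the secular-trend
# coefficients are estimated net of attending-level effects.
fe_model = smf.logit(
    "strong_agree ~ post2003 + spring + C(attending_id)", data=df
).fit(disp=False)

# Duty-hours and season coefficients after accounting for repeated raters.
print(fe_model.params[["post2003", "spring"]])
```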

DISCUSSION

This study highlights that attendings' perception of housestaff autonomy varies by attending characteristics and secular trends. Specifically, early‐career attendings and hospitalists were less likely to perceive sufficient housestaff autonomy and involvement in decision making. However, there was a significant hospitalist‐experience interaction, such that more‐experienced hospitalists were associated with higher odds of perceiving sufficient autonomy than would be expected from the effect of experience alone. With respect to secular trends, attendings perceived more trainee autonomy in the last quarter of the academic year, and less autonomy after implementation of resident duty‐hour restrictions in 2003.

As Entrustable Professional Activities bring a new emphasis to the notion of entrustment, it will be critical to ensure that attending assessment of resident performance is uniform and provides a valid basis for judging when to entrust autonomy.[27, 28] If, as these findings suggest, perception of autonomy varies with attending characteristics, all faculty may benefit from strategies to standardize assessment and evaluation skills so that trainees progress appropriately through milestones toward competence. Our results suggest that faculty development may be particularly important for early‐career attendings and especially hospitalists.

Early‐career attendings may perceive less housestaff autonomy because of a reluctance to relinquish control over patient‐care duties and decision making when they are only a few years removed from residency. Hospitalists are relatively junior in most institutions and may resemble early‐career attendings in that regard. It is noteworthy, however, that experienced hospitalists were associated with even greater perception of autonomy than would be predicted by years of experience alone. Hospitalists may gain experience at a faster rate than nonhospitalists, which could affect how they perceive autonomy and decision making in trainees and make them more comfortable entrusting autonomy to housestaff. Early‐career hospitalists are likely a heterogeneous group, including both 1‐year clinical hospitalists and academic‐career hospitalists, who may differ in how they manage housestaff teams. Residents are less likely to fear hospitalists limiting their autonomy after exposure to working with hospitalists as teaching attendings, and our findings may suggest a corollary: hospitalists may be more likely to perceive sufficient autonomy with more exposure to working with housestaff.[19]

Attendings perceived less housestaff autonomy following the 2003 duty‐hour limits. This may be due to attendings assuming more responsibilities that were traditionally performed by residents.[26, 29] This shifting of responsibility may lead to perception of less‐active housestaff decision making and less‐evident autonomy. These findings suggest autonomy may become even more restricted after implementation of the 2011 duty‐hour restrictions, which included 16‐hour shifts for interns.[5] Further studies examining the effect of these new limits are warranted. Entrustment of autonomy and allowance for decision making are essential parts of any learning environment that allows residents to develop clinical reasoning skills, and it will be critical to adopt new strategies to encourage professional growth of housestaff in this new era.[30]

Attendings also perceived autonomy differently by academic season. Spring is the season by which housestaff are most experienced and attendings may be most familiar with individual team members. Additionally, there may be a stronger emphasis on supervision and adherence to traditional hierarchy earlier in the academic year, as interns and junior residents are learning their new roles.[30] These findings may have implications for system changes to support development of more functional educational dyads between attendings and trainees, especially early in the academic year.[31]

There are several limitations to our findings. This is a single‐institution study restricted to the general medicine service; thus, generalizability is limited. Our outcome measures, the survey items of interest, address perception of housestaff autonomy but not the appropriateness of that autonomy, an important construct in entrustment. Additionally, self‐reported answers could be subject to recall bias. Although data were collected over 8 years, the most recent trends in residency training are not reflected. Although there was a significant interaction involving experienced hospitalists, wide confidence intervals and large standard errors likely reflect the relatively few individuals in this category. Though the overall number of respondents was large, our interaction terms included few advanced‐career hospitalists, likely secondary to hospital medicine's relative youth as a specialty.

As this study focuses only on perception of autonomy, future work must investigate autonomy from a practical standpoint. It is conceivable that if factors such as attending characteristics and secular trends influence perception, they may also be associated with variation in how attendings entrust autonomy and provide supervision. To what extent perception and practice are linked remains to be studied, but it will be important to determine if variation due to these factors may also be associated with inconsistent and uneven supervisory practices that would adversely affect resident education and patient safety.

Finally, future work must include the viewpoint of the recipients of autonomy: the residents and interns. A significant limitation of the current study is the lack of the resident perspective, as our survey was administered only to attendings. Autonomy is clearly a 2‐way relationship, and attending perception must be corroborated by the resident's experience. It is possible that attendings perceive their housestaff to have sufficient autonomy while residents view that autonomy as inappropriate or unavoidable, reflecting an absentee attending who does not adequately supervise.[32] Future work must examine how resident and attending perceptions of autonomy correlate, and whether concordance or discordance in these perceptions influences satisfaction with attending‐resident relationships, education, and patient care.

In conclusion, significant variation existed among attending physicians with respect to perception of housestaff autonomy, an important aspect of entrustment and clinical supervision. This variation was present for hospitalists, among different levels of attending experience, and a significant interaction was found between these 2 factors. Additionally, secular trends were associated with differences in perception of autonomy. As entrustment of residents with progressive levels of autonomy becomes more integrated within the requirements for advancement in residency, a greater understanding of factors affecting entrustment will be critical in helping faculty develop skills to appropriately assess trainee professional growth and development.

Acknowledgments

The authors thank all members of the Multicenter Hospitalist Project for their assistance with this project.

Disclosures: The authors acknowledge funding from the AHRQ/CERT 5 U18 HS016967‐01. The funder had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve publication of the finished manuscript. Prior presentations of the data include the 2012 Department of Medicine Research Day at the University of Chicago, the 2012 Society of Hospital Medicine Annual Meeting in San Diego, California, and the 2012 Midwest Society of General Medicine Meeting in Chicago, Illinois. All coauthors have seen and agree with the contents of the manuscript. The submission was not under review by any other publication. The authors report no conflicts of interest.

References
  1. Kilminster SM, Jolly BC. Effective supervision in clinical practice settings: a literature review. Med Educ. 2000;34(10):827–840.
  2. Ericsson KA. Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med. 2008;15(11):988–994.
  3. Kennedy TJ, Regehr G, Baker GR, et al. Progressive independence in clinical training: a tradition worth defending? Acad Med. 2005;80(10 suppl):S106–S111.
  4. Committee on Optimizing Graduate Medical Trainee (Resident) Hours and Work Schedules to Improve Patient Safety, Institute of Medicine. Ulmer C, Wolman D, Johns M, eds. Resident Duty Hours: Enhancing Sleep, Supervision, and Safety. Washington, DC: National Academies Press; 2008.
  5. Nasca TJ, Day SH, Amis ES; ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363(2):e3.
  6. Haun SE. Positive impact of pediatric critical care fellows on mortality: is it merely a function of resident supervision? Crit Care Med. 1997;25(10):1622–1623.
  7. Sox CM, Burstin HR, Orav EJ, et al. The effect of supervision of residents on quality of care in five university‐affiliated emergency departments. Acad Med. 1998;73(7):776–782.
  8. Phy MP, Offord KP, Manning DM, et al. Increased faculty presence on inpatient teaching services. Mayo Clin Proc. 2004;79(3):332–336.
  9. Busari JO, Weggelaar NM, Knottnerus AC, et al. How medical residents perceive the quality of supervision provided by attending doctors in the clinical setting. Med Educ. 2005;39(7):696–703.
  10. Fallon WF, Wears RL, Tepas JJ. Resident supervision in the operating room: does this impact on outcome? J Trauma. 1993;35(4):556–560.
  11. Schmidt UH, Kumwilaisak K, Bittner E, et al. Effects of supervision by attending anesthesiologists on complications of emergency tracheal intubation. Anesthesiology. 2008;109(6):973–977.
  12. Velmahos GC, Fili C, Vassiliu P, et al. Around‐the‐clock attending radiology coverage is essential to avoid mistakes in the care of trauma patients. Am Surg. 2001;67(12):1175–1177.
  13. Gennis VM, Gennis MA. Supervision in the outpatient clinic: effects on teaching and patient care. J Gen Intern Med. 1993;8(7):378–380.
  14. Paukert JL, Richards BF. How medical students and residents describe the roles and characteristics of their influential clinical teachers. Acad Med. 2000;75(8):843–845.
  15. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87(4):428–442.
  16. Farnan JM, Burger A, Boonyasai RT, et al; for the SGIM Housestaff Oversight Subcommittee. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7(7):521–523.
  17. Haber LA, Lau CY, Sharpe B, et al. Effects of increased overnight supervision on resident education, decision‐making, and autonomy. J Hosp Med. 2012;7(8):606–610.
  18. Trowbridge RL, Almeder L, Jacquet M, et al. The effect of overnight in‐house attending coverage on perceptions of care and education on a general medical service. J Grad Med Educ. 2010;2(1):53–56.
  19. Chung P, Morrison J, Jin L, et al. Resident satisfaction on an academic hospitalist service: time to teach. Am J Med. 2002;112(7):597–601.
  20. Nasca TJ, Philibert I, Brigham T, et al. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366(11):1051–1056.
  21. Ten Cate O, Scheele F. Competency‐based postgraduate training: can we bridge the gap between theory and clinical practice? Acad Med. 2007;82(6):542–547.
  22. Ten Cate O. Trust, competence, and the supervisor's role in postgraduate training. BMJ. 2006;333(7571):748–751.
  23. Kashner TM, Byrne JM, Chang BK, et al. Measuring progressive independence with the resident supervision index: empirical approach. J Grad Med Educ. 2010;2(1):17–30.
  24. Wachter RM, Goldman L. The emerging role of "hospitalists" in the American health care system. N Engl J Med. 1996;335(7):514–517.
  25. Arora V, Meltzer D. Effect of ACGME duty hours on attending physician teaching and satisfaction. Arch Intern Med. 2008;168(11):1226–1227.
  26. Arora VM, Georgitis E, Siddique J, et al. Association of workload of on‐call interns with on‐call sleep duration, shift duration, and participation in educational activities. JAMA. 2008;300(10):1146–1153.
  27. Ten Cate O. Entrustability of professional activities and competency‐based training. Med Educ. 2005;39(12):1176–1177.
  28. Sterkenburg A, Barach P, Kalkman C, et al. When do supervising physicians decide to entrust residents with unsupervised tasks? Acad Med. 2010;85(9):1399–1400.
  29. Reed D, Levine R, et al. Effect of residency duty‐hour limits. Arch Intern Med. 2007;167(14):1487–1492.
  30. Wilkerson L, Irby DM. Strategies for improving teaching practices: a comprehensive approach to faculty development. Acad Med. 1998;73(4):387–396.
  31. Kilminster S, Jolly B, van der Vleuten CP. A framework for effective training for supervisors. Med Teach. 2002;24(4):385–389.
  32. Farnan JM, Johnson JK, Meltzer DO, et al. On‐call supervision and resident autonomy: from micromanager to absentee attending. Am J Med. 2009;122(8):784–788.
Issue
Journal of Hospital Medicine - 8(6)
Page Number
292-297
Display Headline
How do attendings perceive housestaff autonomy? Attending experience, hospitalists, and trends over time
Article Source

Copyright © 2013 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Shannon Martin, MD, 5841 S. Maryland Ave., MC 5000, W307, Chicago, IL 60637; Telephone: 773‐702‐2604; Fax: 773‐795‐7398; E‐mail: smartin1@medicine.bsd.uchicago.edu

Implementing Peer Evaluation of Handoffs

Article Type
Changed
Mon, 05/22/2017 - 18:18
Display Headline
Implementing Peer Evaluation of Handoffs: Associations With Experience and Workload

The advent of restricted residency duty hours has thrust the safety risks of handoffs into the spotlight. More recently, the Accreditation Council for Graduate Medical Education (ACGME) has restricted hours even further, to a maximum of 16 hours for first‐year residents and up to 28 hours for residents beyond their first year.[1] Although the focus of these mandates has been scheduling and staffing in residency programs, handoff education and evaluation also demand attention. The ACGME Common Program Requirements state that all residency programs should ensure that residents are competent in handoff communications and that programs should monitor handoffs to ensure that they are safe.[2] Moreover, recent efforts have defined milestones for handoffs, specifically that by 12 months, residents should be able to "effectively communicate with other caregivers to maintain continuity during transitions of care."[3] Although more detailed handoff‐specific milestones have yet to be fleshed out, evaluation instruments to assess such milestones are critically needed. In addition, handoffs continue to represent a vulnerable time for patients in many specialties, such as surgery and pediatrics.[4, 5]

Evaluating handoffs poses specific challenges for internal medicine residency programs because handoffs are often conducted "on the fly" or wherever convenient, not always at a dedicated time and place.[6] Even when evaluations could be conducted at a dedicated time and place, program faculty and leadership may not be comfortable evaluating handoffs in real time, owing to a lack of faculty development and recent experience with handoffs. Although supervising faculty may be in the best position to evaluate handoffs, given their intimate knowledge of the patients and their ability to judge trainees' clinical judgment, competing pressures of supervision and direct patient care may prevent their attendance at the time of the handoff. For these reasons, the peers to whom residents frequently hand off may be well placed to evaluate handoff quality. Because handoffs are also conceptualized as an interactive dialogue between sender and receiver, an ideal handoff performance evaluation would capture both of these roles.[7] Peer evaluation has been shown to be an effective method of rating the performance of medical students,[8] practicing physicians,[9] and residents,[10] and it is now a required feature in assessing internal medicine resident performance.[11] Although enthusiasm for peer evaluation has grown in residency training, its use can still be limited by a variety of problems, such as reluctance to rate peers poorly, difficulty obtaining evaluations, and questions about the utility of such evaluations. It is therefore important to understand whether peer evaluation of handoffs is feasible. The aim of this study was to assess the feasibility of an online peer evaluation survey tool for handoffs in an internal medicine residency, to characterize performance over time, and to examine associations between workload and performance.

METHODS

From July 2009 to March 2010, all interns on the general medicine inpatient service at 2 hospitals were asked to complete an end‐of‐month anonymous peer evaluation that included 14 items addressing all core competencies. The evaluation tool was administered electronically using New Innovations (New Innovations, Inc., Uniontown, OH). Interns signed out to each other in a cross‐cover circuit that included 3 other interns on an every‐fourth‐night call cycle.[12] Call teams included 1 resident and 1 intern, who worked from 7 am on the on‐call day to noon on the postcall day. Therefore, postcall interns were expected to hand off to the next on‐call intern before noon. Although attendings and senior residents were not required to formally supervise the handoff, supervising senior residents were often present during postcall intern sign‐out to facilitate departure of the team. When interns were not postcall, they were expected to sign out before they went to the clinic in the afternoon or when their foreseeable work was complete. The interns were provided with a 45‐minute lecture on handoffs and introduced to the peer evaluation tool in July 2009 at an intern orientation. They were also prompted to complete the tool to the best of their ability after their general medicine rotation. We chose the general medicine rotation because each intern completed approximately 2 months of general medicine in their first year. This would provide ratings over time without overburdening interns to complete 3 additional evaluations after every inpatient rotation.

The peer evaluation was constructed to correspond to specific ACGME core competencies and was also linked to specific handoff behaviors that were known to be effective. The questions were adapted from prior items used in a validated direct‐observation tool previously developed by the authors (the Handoff Clinical Evaluation Exercise), which was based on literature review as well as expert opinion.[13, 14] For example, under the core competency of communication, interns were asked to rate each other on communication skills using the anchors "No questions, no acknowledgement of to‐do tasks, transfer of information face to face is not a priority" for low unsatisfactory (1) and "Appropriate use of questions, acknowledgement and read‐back of to‐do and priority tasks, face‐to‐face communication a priority" for high superior (9). Items that referred to behaviors related to both giving handoff and receiving handoff were used to capture the interactive dialogue between senders and receivers that characterizes ideal handoffs. In addition, specific items referring to written sign‐out and verbal sign‐out were developed to capture the specific differences. For instance, for the patient care competency in written sign‐out, low unsatisfactory (1) was defined as "Incomplete written content; to‐do's omitted or requested with no rationale or plan, or with inadequate preparation (ie, request to transfuse but consent not obtained)," and high superior (9) was defined as "Content is complete with to‐do's accompanied by clear plan of action and rationale." Pilot testing with trainees was conducted, including residents not involved in the study and clinical students. The tool was also reviewed by the residency program leadership, and in an effort to standardize the reporting of the items with our other evaluation forms, each item was mapped to the core competency to which it was most related. Debriefing of the instrument experience following usage was performed with 3 residents who had an interest in medical education and handoff performance.

The tool was deployed to interns following a brief educational session in which it was previewed and reviewed. Interns were counseled to use the form as a global performance assessment over the course of the month, in contrast to an episodic evaluation. This was also intended to avoid negative event bias, in which a rater allows a single negative event to influence the perception of a person's performance long after the event has passed.

To analyze the data, descriptive statistics were used to summarize mean performance across domains. To assess whether intern performance improved over time, we split the academic year into 3 time periods of 3 months each, as used in our earlier studies assessing intern experience.[15] Prior to analysis, postcall interns were identified from the intern monthly call schedule in the AMiON software program (Norwich, VT), and their evaluations were labeled accordingly. Then, all names were removed and replaced with a unique identifier for the evaluator and the evaluatee. In addition, each evaluation was categorized as having come from either the main teaching hospital or the community hospital affiliate; a sketch of this preparation step follows.
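The following is a minimal sketch (assumptions noted, not the authors' code) of this preparation: flagging the postcall intern on each evaluation from a monthly call schedule, then replacing names with opaque identifiers before analysis. The file names and columns (`raw_peer_evals.csv`, `monthly_call_schedule.csv`, `evaluator`, `evaluatee`, `date`) are hypothetical.

```python
import hashlib
import pandas as pd

evals = pd.read_csv("raw_peer_evals.csv")            # hypothetical export
schedule = pd.read_csv("monthly_call_schedule.csv")  # hypothetical schedule export

# Mark evaluations of interns who were postcall on the evaluation date.
evals = evals.merge(schedule[["date", "postcall_intern"]], on="date", how="left")
evals["postcall"] = (evals["evaluatee"] == evals["postcall_intern"]).astype(int)

def deidentify(name: str) -> str:
    # Stable one-way identifier so repeated evaluations of the same person
    # remain linkable without retaining the name itself.
    return hashlib.sha256(name.encode()).hexdigest()[:8]

for col in ["evaluator", "evaluatee"]:
    evals[col] = evals[col].map(deidentify)
evals = evals.drop(columns=["postcall_intern"])
```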

Multivariate random‐effects linear regression models, controlling for evaluator, evaluatee, and hospital, were used to assess the associations of time (using indicator variables for season) and postcall status with intern performance. In addition, because of the skewness in the ratings, we undertook additional analysis by transforming our data into dichotomous variables reflecting superior performance. After conducting conditional ordinal logistic regression, the main findings did not change. We also investigated within‐subject and between‐subject variation using intraclass correlation coefficients. Within‐subject intraclass correlation enabled assessment of inter‐rater reliability. Between‐subject intraclass correlation enabled assessment of evaluator effects, which can take a variety of forms of rater bias, such as leniency (rating individuals uniformly positively), severity (avoiding positive ratings), or the halo effect (allowing a single positive attribute to override the rating of the behavior in question). All analyses were completed using Stata 10.0 (StataCorp, College Station, TX), with statistical significance defined as P < 0.05. This study was deemed exempt from institutional review board review after all data were deidentified prior to analysis.
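A hedged sketch of a random-effects model of this kind, with an intraclass correlation computed from its variance components, is below. It is not the authors' code; the file and column names (`handoff_peer_ratings.csv`, `overall_score`, `season2`, `season3`, `postcall`, `community_site`, `evaluatee_id`) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("handoff_peer_ratings.csv")  # hypothetical long-format data

# Random intercept for the evaluatee; season, postcall status, and site
# enter as covariates, mirroring the model described above.
mixed = smf.mixedlm(
    "overall_score ~ season2 + season3 + postcall + community_site",
    data=ratings,
    groups=ratings["evaluatee_id"],
).fit()
print(mixed.summary())

# Between-subject ICC: share of total variance attributable to the evaluatee
# random intercept (an evaluator-effects ICC can be computed analogously by
# grouping on an evaluator identifier instead).
var_between = mixed.cov_re.iloc[0, 0]
var_within = mixed.scale
icc = var_between / (var_between + var_within)
print(f"ICC (evaluatee) = {icc:.2f}")
```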

RESULTS

From July 2009 to March 2010, 31 interns (78%) returned 60% (172/288) of the peer evaluations distributed to them. Almost all (39/40, 98%) interns were evaluated at least once, with a median of 4 ratings per intern (range, 1–9). Thirty‐five percent of ratings occurred when an intern was rotating at the community hospital. Ratings were very high on all domains (mean, 8.3–8.6). Overall sign‐out performance was rated as 8.4 (95% confidence interval [CI], 8.3–8.5), with over 55% rating peers as 9 (maximal score). The lowest score given was 5. Individual items ranged from a low of 8.34 (95% CI, 8.21–8.47) for updating written sign‐outs to a high of 8.60 (95% CI, 8.50–8.69) for collegiality (Table 1). The internal consistency of the instrument, calculated using all items, was very high (Cronbach α = 0.98).
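For reference, a minimal sketch of the internal-consistency calculation reported above (Cronbach's alpha) is below; it is not the authors' code, and `peer_eval_items.csv` is a hypothetical deidentified export with one column per item (Q1–Q13) and one row per returned evaluation.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

items = pd.read_csv("peer_eval_items.csv")  # hypothetical item-level export
print(f"Cronbach alpha = {cronbach_alpha(items):.2f}")
```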

Table 1. Mean Intern Ratings on Sign‐out Peer Evaluation by Item and Competency

ACGME Core Competency | Role | Item | No. | Mean | 95% CI | Range | % Receiving 9 as Rating
Patient care | Sender | Written sign‐out | Q1 | 8.34 | 8.25 to 8.48 | 6–9 | 53.2
Patient care | Sender | Updated content | Q2 | 8.35 | 8.22 to 8.47 | 5–9 | 54.4
Patient care | Receiver | Documentation of overnight events | Q6 | 8.41 | 8.30 to 8.52 | 6–9 | 56.3
Medical knowledge | Sender | Anticipatory guidance | Q3 | 8.40 | 8.28 to 8.51 | 6–9 | 56.3
Medical knowledge | Receiver | Clinical decision making during cross‐cover | Q7 | 8.45 | 8.35 to 8.55 | 6–9 | 56.0
Professionalism | Sender | Collegiality | Q4 | 8.60 | 8.51 to 8.68 | 6–9 | 65.7
Professionalism | Receiver | Acknowledgement of professional responsibility | Q10 | 8.53 | 8.43 to 8.62 | 6–9 | 62.4
Professionalism | Receiver | Timeliness/responsiveness | Q11 | 8.50 | 8.39 to 8.60 | 6–9 | 61.9
Interpersonal and communication skills | Receiver | Listening behavior when receiving sign‐out | Q8 | 8.52 | 8.42 to 8.62 | 6–9 | 63.6
Interpersonal and communication skills | Receiver | Communication when receiving sign‐out | Q9 | 8.52 | 8.43 to 8.62 | 6–9 | 63.0
Systems‐based practice | Receiver | Resource use | Q12 | 8.45 | 8.35 to 8.55 | 6–9 | 55.6
Practice‐based learning and improvement | Sender | Accepting of feedback | Q5 | 8.45 | 8.34 to 8.55 | 6–9 | 58.7
Overall | Both | Overall sign‐out quality | Q13 | 8.44 | 8.34 to 8.54 | 6–9 | 55.3

NOTE: Abbreviations: ACGME, Accreditation Council for Graduate Medical Education; CI, confidence interval.

Mean ratings for each item increased in seasons 2 and 3, and these increases were statistically significant using a test for trend across ordered groups. However, in multivariate regression models, improvements remained statistically significant for only 4 items (Figure 1): 1) communication skills, 2) listening behavior, 3) accepting professional responsibility, and 4) accessing the system (Table 2). Specifically, compared with season 1, improvements in communication skill were seen in season 2 (+0.34 [95% CI, 0.08–0.60], P = 0.009) and were sustained in season 3 (+0.34 [95% CI, 0.06–0.61], P = 0.018). A similar pattern was observed for listening behavior, with improvements of similar magnitude as intern experience increased (season 2: +0.29 [95% CI, 0.04–0.55], P = 0.025 compared with season 1). Accessing‐the‐system scores showed a similar pattern of improvement in season 2, though the magnitude of change was smaller (season 2: +0.21 [95% CI, 0.03–0.39], P = 0.023). Interestingly, ratings for accepting professional responsibility rose during season 2, but the difference did not reach statistical significance until season 3 (+0.37 [95% CI, 0.08–0.65], P = 0.012 compared with season 1).

Figure 1. Improvements over time in domains of sign‐out performance by season, where season 1 is July–September, season 2 is October–December, and season 3 is January–March. Results are from random‐effects linear regression models controlling for evaluator, evaluatee, postcall status, and site (community vs tertiary).
Table 2. Increasing Scores on Peer Handoff Evaluation by Season

Values are coefficients (95% CI) for each outcome.

Predictor | Communication Skills | Listening Behavior | Professional Responsibility | Accessing the System | Written Sign‐out Quality
Season 1 | Reference | Reference | Reference | Reference | Reference
Season 2 | 0.29 (0.04 to 0.55)* | 0.34 (0.08 to 0.60)* | 0.24 (−0.03 to 0.51) | 0.21 (0.03 to 0.39)* | −0.05 (−0.25 to 0.15)
Season 3 | 0.29 (0.02 to 0.56)* | 0.34 (0.06 to 0.61)* | 0.37 (0.08 to 0.65)* | 0.18 (0.01 to 0.36)* | 0.08 (−0.13 to 0.30)
Community hospital | 0.18 (0.00 to 0.37) | 0.23 (0.04 to 0.43)* | 0.06 (−0.13 to 0.26) | 0.13 (0.00 to 0.25) | 0.24 (0.08 to 0.39)*
Postcall | −0.10 (−0.25 to 0.05) | −0.04 (−0.21 to 0.13) | −0.02 (−0.18 to 0.13) | −0.05 (−0.16 to 0.05) | −0.18 (−0.31 to −0.05)*
Constant | 7.04 (6.51 to 7.58) | 6.81 (6.23 to 7.38) | 7.04 (6.50 to 7.60) | 7.02 (6.59 to 7.45) | 6.49 (6.04 to 6.94)

NOTE: Results are from multivariable linear regression models examining the association between season, community hospital, and postcall status, controlling for subject (evaluatee) random effects and evaluator fixed effects (evaluator and evaluatee effects not shown). Abbreviations: CI, confidence interval. *P < 0.05.

In addition to increasing experience, postcall interns were rated significantly lower than nonpostcall interns in 2 items: 1) written sign‐out quality (8.21 vs 8.39, P = 0.008) and 2) accepting feedback (practice‐based learning and improvement) (8.25 vs 8.42, P = 0.006). Interestingly, when interns were at the community hospital general medicine rotation, where overall census was much lower than at the teaching hospital, peer ratings were significantly higher for overall handoff performance and 7 (written sign‐out, update content, collegiality, accepting feedback, documentation of overnight events, clinical decision making during cross‐cover, and listening behavior) of the remaining 12 specific handoff domains (P < 0.05 for all, data not shown).

Last, significant evaluator effects were observed, which contributed to the variance in ratings given. For example, using intraclass correlation coefficients (ICC), we found that there was greater within‐intern variation than between‐intern variation, highlighting that evaluator scores tended to be strongly correlated with each other (eg, ICC overall performance = 0.64) and more so than scores of multiple evaluations of the same intern (eg, ICC overall performance = 0.18).

Because ratings of handoff performance were skewed, we also conducted a sensitivity analysis using ordinal logistic regression to ascertain if our findings remained significant. Using ordinal logistic regression models, significant improvements were seen in season 3 for 3 of the above‐listed behaviors, specifically listening behavior, professional responsibility, and accessing the system. Although there was no improvement in communication, there was an improvement observed in collegiality scores that were significant in season 3.

DISCUSSION

Using an end‐of‐rotation online peer assessment of handoff skills, it is feasible to obtain ratings of intern handoff performance from peers. Although there is evidence of rater bias toward leniency and low inter‐rater reliability, peer ratings of intern performance did increase over time. In addition, peer ratings were lower for interns who were handing off their postcall service. Working on a rotation at a community affiliate with a lower census was associated with higher peer ratings of handoffs.

It is worth considering the mechanism of these findings. First, the leniency observed in peer ratings likely reflects peers unwilling to critique each other due to a desire for an esprit de corps among their classmates. The low intraclass correlation coefficient for ratings of the same intern highlight that peers do not easily converge on their ratings of the same intern. Nevertheless, the ratings on the peer evaluation did demonstrate improvements over time. This improvement could easily reflect on‐the‐job learning, as interns become more acquainted with their roles and efficient and competent in their tasks. Together, these data provide a foundation for developing milestone handoffs that reflect the natural progression of intern competence in handoffs. For example, communication appeared to improve at 3 months, whereas transfer of professional responsibility improved at 6 months after beginning internship. However, alternative explanations are also important to consider. Although it is easy and somewhat reassuring to assume that increases over time reflect a learning effect, it is also possible that interns are unwilling to critique their peers as familiarity with them increases.

There are several reasons why postcall interns could have been universally rated lower than nonpostcall interns. First, postcall interns likely had the sickest patients with the most to‐do tasks or work associated with their sign‐out because they were handing off newly admitted patients. Because the postcall sign‐out is associated with the highest workload, it may be that interns perceive that a good handoff is nothing to do, and handoffs associated with more work are not highly rated. It is also important to note that postcall interns, who in this study were at the end of a 30‐hour duty shift, were also most fatigued and overworked, which may have also affected the handoff, especially in the 2 domains of interest. Due to the time pressure to leave coupled with fatigue, they may have had less time to invest in written sign‐out quality and may not have been receptive to feedback on their performance. Likewise, performance on handoffs was rated higher when at the community hospital, which could be due to several reasons. The most plausible explanation is that the workload associated with that sign‐out is less due to lower patient census and lower patient acuity. In the community hospital, fewer residents were also geographically co‐located on a quieter ward and work room area, which may contribute to higher ratings across domains.

This study also has implications for future efforts to improve and evaluate handoff performance in residency trainees. For example, our findings suggest the importance of enhancing supervision and training for handoffs during high workload rotations or certain times of the year. In addition, evaluation systems for handoff performance that rely solely on peer evaluation will not likely yield an accurate picture of handoff performance, difficulty obtaining peer evaluations, the halo effect, and other forms of evaluator bias in ratings. Accurate handoff evaluation may require direct observation of verbal communication and faculty audit of written sign‐outs.[16, 17] Moreover, methods such as appreciative inquiry can help identify the peers with the best practices to emulate.[18] Future efforts to validate peer assessment of handoffs against these other assessment methods, such as direct observation by service attendings, are needed.

There are limitations to this study. First, although we have limited our findings to 1 residency program with 1 type of rotation, we have already expanded to a community residency program that used a float system and have disseminated our tool to several other institutions. In addition, we have a small number of participants, and our 60% return rate on monthly peer evaluations raises concerns of nonresponse bias. For example, a peer who perceived the handoff performance of an intern to be poor may be less likely to return the evaluation. Because our dataset has been deidentified per institutional review board request, we do not have any information to differentiate systematic reasons for not responding to the evaluation. Anecdotally, a critique of the tool is that it is lengthy, especially in light of the fact that 1 intern completes 3 additional handoff evaluations. It is worth understanding why the instrument had such a high internal consistency. Although the items were designed to address different competencies initially, peers may make a global assessment about someone's ability to perform a handoff and then fill out the evaluation accordingly. This speaks to the difficulty in evaluating the subcomponents of various actions related to the handoff. Because of the high internal consistency, we were able to shorten the survey to a 5‐item instrument with a Cronbach of 0.93, which we are currently using in our program and have disseminated to other programs. Although it is currently unclear if the ratings of performance on the longer peer evaluation are valid, we are investigating concurrent validity of the shorter tool by comparing peer evaluations to other measures of handoff quality as part of our current work. Last, we are only able to test associations and not make causal inferences.

CONCLUSION

Peer assessment of handoff skills is feasible via an electronic competency‐based tool. Although there is evidence of score inflation, intern performance does increase over time and is associated with various aspects of workload, such as postcall status or working on a rotation at a community affiliate with a lower census. Together, these data can provide a foundation for developing milestones handoffs that reflect the natural progression of intern competence in handoffs.

Acknowledgments

The authors thank the University of Chicago Medicine residents and chief residents, the members of the Curriculum and Housestaff Evaluation Committee, Tyrece Hunter and Amy Ice‐Gibson, and Meryl Prochaska and Laura Ruth Venable for assistance with manuscript preparation.

Disclosures

This study was funded by the University of Chicago Department of Medicine Clinical Excellence and Medical Education Award and AHRQ R03 5R03HS018278‐02 Development of and Validation of a Tool to Evaluate Hand‐off Quality.

Files
References
  1. Nasca TJ, Day SH, Amis ES; the ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010; 363.
  2. Common program requirements. Available at: http://acgme‐2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed December 10, 2012.
  3. Green ML, Aagaard EM, Caverzagie KJ, et al. Charting the road to competence: developmental milestones for internal medicine residency training. J Grad Med Educ. 2009;1(1):520.
  4. Greenberg CC, Regenbogen SE, Studdert DM, et al. Patterns of communication breakdowns resulting in injury to surgical patients. J Am Coll Surg. 2007;204(4):533540.
  5. McSweeney ME, Lightdale JR, Vinci RJ, Moses J. Patient handoffs: pediatric resident experiences and lessons learned. Clin Pediatr (Phila). 2011;50(1):5763.
  6. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out. J Hosp Med. 2006;1(4):257266.
  7. Gibson SC, Ham JJ, Apker J, Mallak LA, Johnson NA. Communication, communication, communication: the art of the handoff. Ann Emerg Med. 2010;55(2):181183.
  8. Arnold L, Willouby L, Calkins V, Gammon L, Eberhardt G. Use of peer evaluation in the assessment of medical students. J Med Educ. 1981;56:3542.
  9. Ramsey PG, Wenrich MD, Carline JD, Inui TS, Larson EB, LoGerfo JP. Use of peer ratings to evaluate physician performance. JAMA. 1993;269:16551660.
  10. Thomas PA, Gebo KA, Hellmann DB. A pilot study of peer review in residency training. J Gen Intern Med. 1999;14(9):551554.
  11. ACGME Program Requirements for Graduate Medical Education in Internal Medicine Effective July 1, 2009. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/ProgramRequirements/140_internal_medicine_07012009.pdf. Accessed December 10, 2012.
  12. Arora V, Dunphy C, Chang VY, Ahmad F, Humphrey HJ, Meltzer D. The effects of on‐duty napping on intern sleep time and fatigue. Ann Intern Med. 2006;144(11):792798.
  13. Farnan JM, Paro JA, Rodriguez RM, et al. Hand‐off education and evaluation: piloting the observed simulated hand‐off experience (OSHE). J Gen Intern Med. 2010;25(2):129134.
  14. Horwitz LI, Dombroski J, Murphy TE, Farnan JM, Johnson JK, Arora VM. Validation of a handoff assessment tool: the Handoff CEX [published online ahead of print June 7, 2012]. J Clin Nurs. doi: 10.1111/j.1365‐2702.2012.04131.x.
  15. Arora VM, Georgitis E, Siddique J, et al. Association of workload of on‐call medical interns with on‐call sleep duration, shift duration, and participation in educational activities. JAMA. 2008;300(10):11461153.
  16. Gakhar B, Spencer AL. Using direct observation, formal evaluation, and an interactive curriculum to improve the sign‐out practices of internal medicine interns. Acad Med. 2010;85(7):11821188.
  17. Bump GM, Bost JE, Buranosky R, Elnicki M. Faculty member review and feedback using a sign‐out checklist: improving intern written sign‐out. Acad Med. 2012;87(8):11251131.
  18. Helms AS, Perez TE, Baltz J, et al. Use of an appreciative inquiry approach to improve resident sign‐out in an era of multiple shift changes. J Gen Intern Med. 2012;27(3):287291.
Article PDF
Issue
Journal of Hospital Medicine - 8(3)
Publications
Page Number
132-136
Sections
Files
Files
Article PDF
Article PDF

The advent of restricted residency duty hours has thrust the safety risks of handoffs into the spotlight. More recently, the Accreditation Council for Graduate Medical Education (ACGME) has restricted hours even further, to a maximum of 16 hours for first‐year residents and up to 28 hours for residents beyond their first year.[1] Although the focus of these mandates has been scheduling and staffing in residency programs, another important area of attention is handoff education and evaluation. The ACGME Common Program Requirements state that all residency programs should ensure that residents are competent in handoff communications and that programs should monitor handoffs to ensure that they are safe.[2] Moreover, recent efforts have defined milestones for handoffs, specifically that by 12 months, residents should be able to effectively communicate with other caregivers to maintain continuity during transitions of care.[3] Although more detailed handoff‐specific milestones have yet to be fleshed out, the need for evaluation instruments to assess such milestones is critical. In addition, handoffs continue to represent a vulnerable time for patients in many specialties, such as surgery and pediatrics.[4, 5]

Evaluating handoffs poses specific challenges for internal medicine residency programs because handoffs are often conducted on the fly or wherever convenient, and not always at a dedicated time and place.[6] Even when evaluations could be conducted at a dedicated time and place, program faculty and leadership may not be comfortable evaluating handoffs in real time due to lack of faculty development and recent experience with handoffs. Although supervising faculty may be in the best position to evaluate handoffs, given their intimate knowledge of the patient and their ability to judge the clinical reasoning of trainees, they may face additional pressures of supervision and direct patient care that prevent their attendance at the time of the handoff. For these reasons, the peers to whom residents frequently hand off may be the natural evaluators of resident handoff quality. Because handoffs are conceptualized as an interactive dialogue between sender and receiver, an ideal handoff performance evaluation would capture both of these roles.[7] Peer evaluation may therefore be a viable modality to assist programs in evaluating handoffs. Peer evaluation has been shown to be an effective method of rating the performance of medical students,[8] practicing physicians,[9] and residents.[10] Moreover, peer evaluation is now a required feature in assessing internal medicine resident performance.[11] Although enthusiasm for peer evaluation has grown in residency training, its use can still be limited by a variety of problems, such as reluctance to rate peers poorly, difficulty obtaining evaluations, and questions about the utility of such evaluations. It is therefore important to understand whether peer evaluation of handoffs is feasible. The aim of this study was to assess the feasibility of an online peer evaluation survey tool for handoffs in an internal medicine residency and to characterize performance over time, as well as associations between workload and performance.

METHODS

From July 2009 to March 2010, all interns on the general medicine inpatient service at 2 hospitals were asked to complete an end‐of‐month anonymous peer evaluation that included 14 items addressing all core competencies. The evaluation tool was administered electronically using New Innovations (New Innovations, Inc., Uniontown, OH). Interns signed out to each other in a cross‐cover circuit that included 3 other interns on an every‐fourth‐night call cycle.[12] Call teams included 1 resident and 1 intern who worked from 7 am on the on‐call day to noon on the postcall day. Therefore, postcall interns were expected to hand off to the next on‐call intern before noon. Although attendings and senior residents were not required to formally supervise the handoff, supervising senior residents were often present during postcall intern sign‐out to facilitate departure of the team. When interns were not postcall, they were expected to sign out before they went to clinic in the afternoon or when their foreseeable work was complete. The interns were given a 45‐minute lecture on handoffs and introduced to the peer evaluation tool in July 2009 at an intern orientation. They were also prompted to complete the tool to the best of their ability after their general medicine rotation. We chose the general medicine rotation because each intern completed approximately 2 months of general medicine in their first year. This provided ratings over time without overburdening interns, who would otherwise have had to complete 3 additional evaluations after every inpatient rotation.

The peer evaluation was constructed to correspond to specific ACGME core competencies and was also linked to specific handoff behaviors known to be effective. The questions were adapted from items used in a validated direct‐observation tool previously developed by the authors (the Handoff Clinical Evaluation Exercise), which was based on literature review as well as expert opinion.[13, 14] For example, under the core competency of communication, interns were asked to rate each other on communication skills using the anchors "No questions, no acknowledgement of to‐do tasks, transfer of information face to face is not a priority" for low unsatisfactory (1) and "Appropriate use of questions, acknowledgement and read‐back of to‐do and priority tasks, face‐to‐face communication a priority" for high superior (9). Items referring to behaviors involved in both giving and receiving handoff were used to capture the interactive dialogue between senders and receivers that characterizes ideal handoffs. In addition, specific items referring to written sign‐out and verbal sign‐out were developed to capture their specific differences. For instance, for the patient care competency in written sign‐out, low unsatisfactory (1) was defined as "Incomplete written content; to‐do's omitted or requested with no rationale or plan, or with inadequate preparation (ie, request to transfuse but consent not obtained)," and high superior (9) was defined as "Content is complete with to‐do's accompanied by clear plan of action and rationale." Pilot testing was conducted with trainees, including residents not involved in the study and clinical students. The tool was also reviewed by the residency program leadership, and in an effort to standardize the reporting of the items with our other evaluation forms, each item was mapped to the core competency to which it was most related. After the tool was used, a debriefing on the experience was conducted with 3 residents who had an interest in medical education and handoff performance.
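To make this item structure concrete, the sketch below shows one illustrative way to represent the instrument in code; it is not the authors' implementation, and the anchor text is abridged.

```python
# Each item maps to one ACGME competency and to a sender or receiver role,
# and is rated on a 1-9 scale with behavioral anchors at each end.
from dataclasses import dataclass

@dataclass
class HandoffItem:
    number: int
    competency: str   # ACGME core competency the item maps to
    role: str         # "sender", "receiver", or "both"
    label: str
    anchor_low: str   # behavior anchoring a rating of 1, "unsatisfactory"
    anchor_high: str  # behavior anchoring a rating of 9, "superior"

ITEMS = [
    HandoffItem(
        1, "Patient care", "sender", "Written sign-out",
        "Incomplete written content; to-do's omitted or lack rationale/plan",
        "Complete content; to-do's with clear plan of action and rationale",
    ),
    HandoffItem(
        9, "Interpersonal and communication skills", "receiver",
        "Communication when receiving sign-out",
        "No questions, no acknowledgement of to-do tasks",
        "Appropriate questions, read-back of to-do and priority tasks",
    ),
    # ...the remaining 12 items follow the same pattern.
]
```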

The tool was deployed following a brief educational session in which it was previewed and reviewed with interns. Interns were counseled to use the form as a global assessment of performance over the course of the month rather than as an episodic evaluation. This guidance was also intended to mitigate negative event bias, in which a rater allows a single negative event to color the perception of a person's performance long after the event has passed.

To analyze the data, descriptive statistics were used to summarize mean performance across domains. To assess whether intern performance improved over time, we split the academic year into 3 time periods of 3 months each, a division we have used in earlier studies assessing intern experience.[15] Prior to analysis, postcall interns were identified using the monthly intern call schedule in the AMiON scheduling program (Norwich, VT), and their evaluations were labeled accordingly. Then, all names were removed and replaced with a unique identifier for the evaluator and the evaluatee. In addition, each evaluation was categorized as having come from either the main teaching hospital or the community hospital affiliate.
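As an illustration of the labeling and deidentification steps, the sketch below assumes invented file layouts (an evaluation export and an AMiON schedule export); none of the file or column names come from the paper.

```python
# Hypothetical inputs: one row per evaluation, and one row per on-call date.
import pandas as pd

evals = pd.read_csv("peer_evals.csv")      # evaluator, evaluatee, date, site, q1..q14
oncall = pd.read_csv("call_schedule.csv")  # intern, on_call_date

# An intern is postcall on the day after their on-call date.
oncall["postcall_date"] = pd.to_datetime(oncall["on_call_date"]) + pd.Timedelta(days=1)
postcall_pairs = set(zip(oncall["intern"], oncall["postcall_date"].dt.date))
evals["postcall"] = [
    (who, when) in postcall_pairs
    for who, when in zip(evals["evaluatee"], pd.to_datetime(evals["date"]).dt.date)
]

# Replace names with opaque identifiers before any analysis.
for col in ["evaluator", "evaluatee"]:
    ids = {name: f"{col}_{i:03d}" for i, name in enumerate(sorted(evals[col].unique()))}
    evals[col] = evals[col].map(ids)
```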

Multivariate random effects linear regression models, controlling for evaluator, evaluatee, and hospital, were used to assess the associations of time (using indicator variables for season) and postcall status with intern performance. Because of skewness in the ratings, we also undertook an additional analysis in which the data were transformed into dichotomous variables reflecting superior performance; the main findings did not change when conditional ordinal logistic regression was conducted. We also investigated within‐subject and between‐subject variation using intraclass correlation coefficients. Within‐subject intraclass correlation enabled assessment of inter‐rater reliability, and between‐subject intraclass correlation enabled assessment of evaluator effects. Evaluator effects can encompass a variety of forms of rater bias, such as leniency (the evaluator rates individuals uniformly positively), severity (the evaluator largely avoids positive ratings), or the halo effect (a single strongly positive attribute of the individual being evaluated overrides the specific behavior being evaluated). All analyses were completed using Stata 10.0 (StataCorp, College Station, TX), with statistical significance defined as P < 0.05. This study was deemed exempt from institutional review board review after all data were deidentified prior to analysis.
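The models were fit in Stata; as a rough illustration, a comparable random‐intercept model can be specified in Python with statsmodels. This is a minimal sketch, not the authors' code, and all column names are hypothetical stand‐ins for the study's variables.

```python
# Sketch of one outcome model from Table 2: season indicators, community
# hospital, and postcall status as predictors, evaluator entered as fixed
# effects, and a random intercept for the evaluatee.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("peer_evals_deidentified.csv")

model = smf.mixedlm(
    "communication ~ C(season) + community + postcall + C(evaluator)",
    data=df,
    groups=df["evaluatee"],  # random intercept per intern being rated
)
result = model.fit()
print(result.summary())  # C(season)[T.2] and [T.3] are seasons 2 and 3 vs season 1
```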

RESULTS

From July 2009 to March 2010, 31 interns (78%) returned 60% (172/288) of the peer evaluations they received. Almost all interns (39/40, 98%) were evaluated at least once, with a median of 4 ratings per intern (range, 1–9). Thirty‐five percent of ratings occurred when an intern was rotating at the community hospital. Ratings were very high in all domains (mean, 8.3–8.6). Overall sign‐out performance was rated 8.4 (95% confidence interval [CI], 8.3‐8.5), with over 55% rating peers as 9 (the maximal score). The lowest score given was 5. Individual items ranged from a low of 8.34 (95% CI, 8.21‐8.47) for updating written sign‐outs to a high of 8.60 (95% CI, 8.50‐8.69) for collegiality (Table 1). The internal consistency of the instrument, calculated using all items, was very high (Cronbach's α = 0.98).
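The internal‐consistency figure is straightforward to recompute from the item responses. A minimal sketch follows, with a hypothetical file name and hypothetical column names (q1 through q14 for the 14 items).

```python
# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of a scale whose items are the DataFrame columns."""
    items = items.dropna()
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

df = pd.read_csv("peer_evals_deidentified.csv")
print(round(cronbach_alpha(df[[f"q{i}" for i in range(1, 15)]]), 2))  # 0.98 reported
```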

Mean Intern Ratings on Sign‐out Peer Evaluation by Item and Competency

| ACGME Core Competency | Role | Item | Item No. | Mean | 95% CI | Range | % Receiving 9 as Rating |
|---|---|---|---|---|---|---|---|
| Patient care | Sender | Written sign‐out | Q1 | 8.34 | 8.25 to 8.48 | 6–9 | 53.2 |
| Patient care | Sender | Updated content | Q2 | 8.35 | 8.22 to 8.47 | 5–9 | 54.4 |
| Patient care | Receiver | Documentation of overnight events | Q6 | 8.41 | 8.30 to 8.52 | 6–9 | 56.3 |
| Medical knowledge | Sender | Anticipatory guidance | Q3 | 8.40 | 8.28 to 8.51 | 6–9 | 56.3 |
| Medical knowledge | Receiver | Clinical decision making during cross‐cover | Q7 | 8.45 | 8.35 to 8.55 | 6–9 | 56.0 |
| Professionalism | Sender | Collegiality | Q4 | 8.60 | 8.51 to 8.68 | 6–9 | 65.7 |
| Professionalism | Receiver | Acknowledgement of professional responsibility | Q10 | 8.53 | 8.43 to 8.62 | 6–9 | 62.4 |
| Professionalism | Receiver | Timeliness/responsiveness | Q11 | 8.50 | 8.39 to 8.60 | 6–9 | 61.9 |
| Interpersonal and communication skills | Receiver | Listening behavior when receiving sign‐out | Q8 | 8.52 | 8.42 to 8.62 | 6–9 | 63.6 |
| Interpersonal and communication skills | Receiver | Communication when receiving sign‐out | Q9 | 8.52 | 8.43 to 8.62 | 6–9 | 63.0 |
| Systems‐based practice | Receiver | Resource use | Q12 | 8.45 | 8.35 to 8.55 | 6–9 | 55.6 |
| Practice‐based learning and improvement | Sender | Accepting of feedback | Q5 | 8.45 | 8.34 to 8.55 | 6–9 | 58.7 |
| Overall | Both | Overall sign‐out quality | Q13 | 8.44 | 8.34 to 8.54 | 6–9 | 55.3 |

NOTE: Abbreviations: ACGME, Accreditation Council for Graduate Medical Education; CI, confidence interval.

Mean ratings for each item increased in seasons 2 and 3, and these increases were statistically significant using a test for trend across ordered groups. However, in multivariate regression models, improvements remained statistically significant for only 4 items (Figure 1): 1) communication skills, 2) listening behavior, 3) accepting professional responsibility, and 4) accessing the system (Table 2). Specifically, when compared to season 1, improvements in communication skill were seen in season 2 (+0.34 [95% CI, 0.08‐0.60], P = 0.009) and were sustained in season 3 (+0.34 [95% CI, 0.06‐0.61], P = 0.018). A similar pattern was observed for listening behavior, with improvements of similar magnitude as intern experience increased (season 2, +0.29 [95% CI, 0.04‐0.55], P = 0.025 compared to season 1). Although accessing‐the‐system scores showed a similar pattern of improvement, with an increase in season 2 compared to season 1, the magnitude of this change was smaller (season 2, +0.21 [95% CI, 0.03‐0.39], P = 0.023). Interestingly, ratings for accepting professional responsibility rose during season 2, but the difference did not reach statistical significance until season 3 (+0.37 [95% CI, 0.08‐0.65], P = 0.012 compared to season 1).

Figure 1
Improvements over time in domains of sign‐out performance by season, where season 1 is July to September, season 2 is October to December, and season 3 is January to March. Results are obtained from random effects linear regression models controlling for evaluator, evaluatee, postcall status, and site (community vs tertiary).
Increasing Scores on Peer Handoff Evaluation by Season

| Predictor | Communication Skills | Listening Behavior | Professional Responsibility | Accessing the System | Written Sign‐out Quality |
|---|---|---|---|---|---|
| Season 1 | Ref | Ref | Ref | Ref | Ref |
| Season 2 | 0.29 (0.04 to 0.55)* | 0.34 (0.08 to 0.60)* | 0.24 (−0.03 to 0.51) | 0.21 (0.03 to 0.39)* | −0.05 (−0.25 to 0.15) |
| Season 3 | 0.29 (0.02 to 0.56)* | 0.34 (0.06 to 0.61)* | 0.37 (0.08 to 0.65)* | 0.18 (0.01 to 0.36)* | 0.08 (−0.13 to 0.30) |
| Community hospital | 0.18 (0.00 to 0.37) | 0.23 (0.04 to 0.43)* | 0.06 (−0.13 to 0.26) | 0.13 (0.00 to 0.25) | 0.24 (0.08 to 0.39)* |
| Postcall | −0.10 (−0.25 to 0.05) | −0.04 (−0.21 to 0.13) | −0.02 (−0.18 to 0.13) | −0.05 (−0.16 to 0.05) | −0.18 (−0.31 to −0.05)* |
| Constant | 7.04 (6.51 to 7.58) | 6.81 (6.23 to 7.38) | 7.04 (6.50 to 7.60) | 7.02 (6.59 to 7.45) | 6.49 (6.04 to 6.94) |

NOTE: Values are coefficients (95% CI) from multivariable linear regression models examining the association of season, community hospital, and postcall status with each outcome, controlling for subject (evaluatee) random effects and evaluator fixed effects (evaluator and evaluatee effects not shown). Abbreviations: CI, confidence interval. *P < 0.05.

In addition to the effects of increasing experience, postcall interns were rated significantly lower than nonpostcall interns on 2 items: 1) written sign‐out quality (8.21 vs 8.39, P = 0.008) and 2) accepting feedback (practice‐based learning and improvement) (8.25 vs 8.42, P = 0.006). Interestingly, when interns were on the community hospital general medicine rotation, where the overall census was much lower than at the teaching hospital, peer ratings were significantly higher for overall handoff performance and for 7 of the remaining 12 specific handoff domains (written sign‐out, updated content, collegiality, accepting feedback, documentation of overnight events, clinical decision making during cross‐cover, and listening behavior; P < 0.05 for all, data not shown).

Last, significant evaluator effects were observed, which contributed to the variance in ratings. Using intraclass correlation coefficients (ICCs), we found greater agreement within evaluators than within evaluatees: scores given by the same evaluator tended to be strongly correlated with each other (eg, evaluator ICC for overall performance = 0.64), much more so than multiple evaluations of the same intern (eg, evaluatee ICC for overall performance = 0.18).
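For readers who want to reproduce these two ICCs, one standard approach is a variance‐partition calculation under a one‐way random‐intercept model, sketched below; the file and column names are hypothetical.

```python
# Each ICC is the share of total variance attributable to the grouping factor:
# ICC = var(between groups) / (var(between groups) + var(residual)).
import pandas as pd
import statsmodels.formula.api as smf

def icc(df: pd.DataFrame, score: str, group: str) -> float:
    m = smf.mixedlm(f"{score} ~ 1", data=df, groups=df[group]).fit()
    var_between = float(m.cov_re.iloc[0, 0])       # variance between groups
    return var_between / (var_between + m.scale)   # m.scale = residual variance

df = pd.read_csv("peer_evals_deidentified.csv")
print("evaluator ICC:", icc(df, "overall", "evaluator"))  # 0.64 reported above
print("evaluatee ICC:", icc(df, "overall", "evaluatee"))  # 0.18 reported above
```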

Because ratings of handoff performance were skewed, we conducted a sensitivity analysis using ordinal logistic regression to ascertain whether our findings remained significant. In these models, significant improvements were seen in season 3 for 3 of the behaviors listed above: listening behavior, professional responsibility, and accessing the system. Although the improvement in communication was no longer significant, an improvement in collegiality scores reached significance in season 3.
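The paper's sensitivity analysis used conditional ordinal logistic regression; as a rough illustration, the sketch below fits a plain proportional‐odds (ordered logit) model instead, which does not condition on rater. Column and file names are hypothetical.

```python
# Ordered logit on one skewed outcome; season 1 is the omitted reference level.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("peer_evals_deidentified.csv")
X = (
    pd.get_dummies(df["season"], prefix="season", drop_first=True)
    .join(df[["community", "postcall"]])
    .astype(float)
)
res = OrderedModel(df["listening"], X, distr="logit").fit(method="bfgs")
print(res.summary())
```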

DISCUSSION

An end‐of‐rotation online peer assessment of handoff skills proved feasible for obtaining ratings of intern handoff performance. Although there was evidence of rater bias toward leniency and low inter‐rater reliability, peer ratings of intern performance did increase over time. In addition, peer ratings were lower for interns who were handing off their postcall service, and working on a rotation at a community affiliate with a lower census was associated with higher peer ratings of handoffs.

It is worth considering the mechanisms behind these findings. First, the leniency observed in peer ratings likely reflects peers' unwillingness to critique each other, driven by a desire for esprit de corps among classmates. The low intraclass correlation coefficient for ratings of the same intern highlights that peers do not easily converge in their ratings of the same intern. Nevertheless, ratings on the peer evaluation did demonstrate improvement over time. This improvement may reflect on‐the‐job learning as interns become more acquainted with their roles and more efficient and competent in their tasks. Together, these data provide a foundation for developing handoff milestones that reflect the natural progression of intern competence: for example, communication appeared to improve at 3 months, whereas transfer of professional responsibility improved at 6 months after beginning internship. However, alternative explanations are also important to consider. Although it is easy and somewhat reassuring to assume that increases over time reflect a learning effect, it is also possible that interns become less willing to critique their peers as familiarity with them increases.

There are several reasons why postcall interns could have been rated lower than nonpostcall interns. First, postcall interns likely had the sickest patients, with the most to‐do tasks and work associated with their sign‐out, because they were handing off newly admitted patients. Because the postcall sign‐out is associated with the highest workload, interns may perceive that a good handoff is one with "nothing to do," so handoffs associated with more work are not rated highly. It is also important to note that postcall interns, who in this study were at the end of a 30‐hour duty shift, were the most fatigued and overworked, which may also have affected the handoff, especially in the 2 domains of interest: given the time pressure to leave coupled with fatigue, they may have had less time to invest in written sign‐out quality and may not have been receptive to feedback on their performance. Likewise, handoff performance was rated higher at the community hospital, which could be due to several factors. The most plausible explanation is that the workload associated with that sign‐out is smaller, given the lower patient census and acuity. At the community hospital, fewer residents were also geographically co‐located in a quieter ward and work room area, which may have contributed to higher ratings across domains.

This study also has implications for future efforts to improve and evaluate handoff performance in residency trainees. For example, our findings suggest the importance of enhancing supervision and training for handoffs during high‐workload rotations or certain times of the year. In addition, evaluation systems that rely solely on peer evaluation are unlikely to yield an accurate picture of handoff performance, given rater leniency, difficulty obtaining peer evaluations, the halo effect, and other forms of evaluator bias. Accurate handoff evaluation may require direct observation of verbal communication and faculty audit of written sign‐outs.[16, 17] Moreover, methods such as appreciative inquiry can help identify the peers with the best practices to emulate.[18] Future efforts are needed to validate peer assessment of handoffs against these other assessment methods, such as direct observation by service attendings.

There are limitations to this study. First, our findings are limited to 1 residency program and 1 type of rotation, although we have since expanded to a community residency program that uses a float system and have disseminated our tool to several other institutions. In addition, the number of participants was small, and our 60% return rate on monthly peer evaluations raises concern for nonresponse bias: a peer who perceived an intern's handoff performance to be poor may have been less likely to return the evaluation. Because our dataset was deidentified per institutional review board request, we have no information with which to differentiate systematic reasons for not responding. Anecdotally, one critique of the tool is that it is lengthy, especially because each intern completes 3 additional handoff evaluations. It is also worth asking why the instrument had such high internal consistency. Although the items were designed to address different competencies, peers may make a global assessment of someone's ability to perform a handoff and then fill out the evaluation accordingly; this speaks to the difficulty of evaluating the subcomponents of the various actions that make up a handoff. Because of the high internal consistency, we were able to shorten the survey to a 5‐item instrument with a Cronbach's α of 0.93, which we are currently using in our program and have disseminated to other programs. Although it is currently unclear whether ratings on the longer peer evaluation are valid, we are investigating the concurrent validity of the shorter tool by comparing peer evaluations to other measures of handoff quality as part of our current work. Last, we are only able to test associations, not make causal inferences.
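The paper does not describe how the 5 retained items were chosen. One plausible approach, sketched below purely as an illustration under that assumption, is a greedy reduction that repeatedly drops the item whose removal costs the least internal consistency; all file and column names are hypothetical.

```python
# Greedy item reduction: drop, one at a time, the item whose removal leaves
# the highest Cronbach's alpha, stopping at a 5-item scale. This is an
# illustrative method, not the authors' documented procedure.
import pandas as pd

def alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

df = pd.read_csv("peer_evals_deidentified.csv")
cols = [f"q{i}" for i in range(1, 15)]  # hypothetical names for the 14 items
while len(cols) > 5:
    drop = max(cols, key=lambda c: alpha(df[[x for x in cols if x != c]]))
    cols.remove(drop)
print(cols, round(alpha(df[cols]), 2))  # the study reports alpha = 0.93 for 5 items
```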

CONCLUSION

Peer assessment of handoff skills is feasible via an electronic competency‐based tool. Although there is evidence of score inflation, intern performance does increase over time and is associated with aspects of workload, such as postcall status or working on a rotation at a community affiliate with a lower census. Together, these data can provide a foundation for developing handoff milestones that reflect the natural progression of intern competence in handoffs.

Acknowledgments

The authors thank the University of Chicago Medicine residents and chief residents, the members of the Curriculum and Housestaff Evaluation Committee, Tyrece Hunter and Amy Ice‐Gibson, and Meryl Prochaska and Laura Ruth Venable for assistance with manuscript preparation.

Disclosures

This study was funded by the University of Chicago Department of Medicine Clinical Excellence and Medical Education Award and AHRQ R03 5R03HS018278‐02 Development of and Validation of a Tool to Evaluate Hand‐off Quality.

References
References
  1. Nasca TJ, Day SH, Amis ES; the ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010; 363.
  2. Common program requirements. Available at: http://acgme‐2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed December 10, 2012.
  3. Green ML, Aagaard EM, Caverzagie KJ, et al. Charting the road to competence: developmental milestones for internal medicine residency training. J Grad Med Educ. 2009;1(1):520.
  4. Greenberg CC, Regenbogen SE, Studdert DM, et al. Patterns of communication breakdowns resulting in injury to surgical patients. J Am Coll Surg. 2007;204(4):533540.
  5. McSweeney ME, Lightdale JR, Vinci RJ, Moses J. Patient handoffs: pediatric resident experiences and lessons learned. Clin Pediatr (Phila). 2011;50(1):5763.
  6. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out. J Hosp Med. 2006;1(4):257266.
  7. Gibson SC, Ham JJ, Apker J, Mallak LA, Johnson NA. Communication, communication, communication: the art of the handoff. Ann Emerg Med. 2010;55(2):181183.
  8. Arnold L, Willouby L, Calkins V, Gammon L, Eberhardt G. Use of peer evaluation in the assessment of medical students. J Med Educ. 1981;56:3542.
  9. Ramsey PG, Wenrich MD, Carline JD, Inui TS, Larson EB, LoGerfo JP. Use of peer ratings to evaluate physician performance. JAMA. 1993;269:16551660.
  10. Thomas PA, Gebo KA, Hellmann DB. A pilot study of peer review in residency training. J Gen Intern Med. 1999;14(9):551554.
  11. ACGME Program Requirements for Graduate Medical Education in Internal Medicine Effective July 1, 2009. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/ProgramRequirements/140_internal_medicine_07012009.pdf. Accessed December 10, 2012.
  12. Arora V, Dunphy C, Chang VY, Ahmad F, Humphrey HJ, Meltzer D. The effects of on‐duty napping on intern sleep time and fatigue. Ann Intern Med. 2006;144(11):792798.
  13. Farnan JM, Paro JA, Rodriguez RM, et al. Hand‐off education and evaluation: piloting the observed simulated hand‐off experience (OSHE). J Gen Intern Med. 2010;25(2):129134.
  14. Horwitz LI, Dombroski J, Murphy TE, Farnan JM, Johnson JK, Arora VM. Validation of a handoff assessment tool: the Handoff CEX [published online ahead of print June 7, 2012]. J Clin Nurs. doi: 10.1111/j.1365‐2702.2012.04131.x.
  15. Arora VM, Georgitis E, Siddique J, et al. Association of workload of on‐call medical interns with on‐call sleep duration, shift duration, and participation in educational activities. JAMA. 2008;300(10):11461153.
  16. Gakhar B, Spencer AL. Using direct observation, formal evaluation, and an interactive curriculum to improve the sign‐out practices of internal medicine interns. Acad Med. 2010;85(7):11821188.
  17. Bump GM, Bost JE, Buranosky R, Elnicki M. Faculty member review and feedback using a sign‐out checklist: improving intern written sign‐out. Acad Med. 2012;87(8):11251131.
  18. Helms AS, Perez TE, Baltz J, et al. Use of an appreciative inquiry approach to improve resident sign‐out in an era of multiple shift changes. J Gen Intern Med. 2012;27(3):287291.
Issue
Journal of Hospital Medicine - 8(3)
Page Number
132-136
Display Headline
Implementing Peer Evaluation of Handoffs: Associations With Experience and Workload
Article Source
Copyright © 2012 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Vineet Arora MD, University of Chicago, 5841 S Maryland Ave., MC 2007 AMB W216, Chicago, IL 60637; Tel.: (773) 702-8157, Fax: (773) 834-2238; E-mail: varora@medicine.bsd.uchicago.edu

Survey of Hospitalist Supervision

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Survey of overnight academic hospitalist supervision of trainees

In 2003, the Accreditation Council for Graduate Medical Education (ACGME) announced the first in a series of guidelines related to the regulation and oversight of residency training.1 The initial iteration focused specifically on the total and consecutive number of duty hours worked by trainees. These limitations began a new era of shift work in internal medicine residency training. With decreases in housestaff admitting capacity, clinical work has frequently been offloaded to non-teaching or attending-only services, increasing the demand for hospitalists to fill the void in physician-staffed care in the hospital.2, 3 Since the implementation of the 2003 ACGME guidelines and a growing focus on patient safety, there has been increased study of, and call for, oversight of trainees in medicine; among these was the 2008 Institute of Medicine report,4 which called for 24/7 attending-level supervision. The updated ACGME requirements,5 effective July 1, 2011, mandate enhanced on-site supervision of trainee physicians. These new regulations not only define varying levels of supervision for trainees, including direct supervision with the physical presence of a supervisor and the degree of availability of that supervisor, but also describe ensuring the quality of the supervision provided.5 While continuous attending-level supervision is not yet mandated, many residency programs look to their academic hospitalists to fill the supervisory void, particularly at night. However, the specific roles hospitalists play in the nighttime supervision of trainees, and the impact of this supervision, remain unclear. To date, no study has examined a broad sample of hospitalist programs in teaching hospitals and the types of resident oversight they provide. We aimed to describe the current role of academic hospitalists in the clinical supervision of housestaff, specifically during the overnight period, and hospitalist perceptions of how the new ACGME requirements would impact trainee-hospitalist interactions.

METHODS

The Housestaff Oversight Subcommittee, a working group of the Society of General Internal Medicine (SGIM) Academic Hospitalist Task Force, surveyed a sample of academic hospitalist program leaders to assess the current status of trainee supervision performed by hospitalists. Programs were considered academic if they were located in the primary hospital of a residency that participates in the National Resident Matching Program for Internal Medicine. To obtain a broad geographic spectrum of academic hospitalist programs, all programs, both university and community-based, in 4 states and 2 metropolitan regions were sampled: Washington, Oregon, Texas, Maryland, and the Philadelphia and Chicago metropolitan areas. Hospitalist program leaders were identified by members of the Task Force using individual program websites and by querying departmental leadership at eligible teaching hospitals. Respondents were contacted by e-mail for participation. None of the authors of the manuscript participated in the survey.

The survey was developed by consensus of the working group after a review of the salient literature and included questions previously posed to internal medicine program directors.6 The 19-item SurveyMonkey instrument included questions about hospitalists' roles in trainee education and evaluation. A Likert-type scale was used to assess perceptions regarding the impact of on-site hospitalist supervision on trainee autonomy and hospitalist workload (1 = strongly disagree to 5 = strongly agree). Descriptive statistics were calculated and, where appropriate, t tests and Fisher's exact tests were used to identify associations between program characteristics and perceptions. Stata SE (StataCorp, College Station, TX) was used for all statistical analyses.
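As an illustration of this analytic approach, the sketch below applies a two-sample t test and Fisher's exact test with scipy. The Likert ratings, group split, and 2x2 counts are hypothetical stand-ins, not our survey data.

```python
import numpy as np
from scipy import stats

# Hypothetical Likert ratings (1 = strongly disagree ... 5 = strongly agree) on
# "formal overnight supervision would increase hospitalist workload," split by
# whether the program already defines a formal nighttime supervisory role.
formal_role = np.array([4, 3, 4, 4, 3, 4, 4, 4])           # invented values
informal_role = np.array([5, 4, 5, 4, 5, 4, 5, 5, 4, 5])   # invented values

# Two-sample t test comparing mean agreement between the two groups.
t_stat, p_val = stats.ttest_ind(formal_role, informal_role)
print(f"formal mean={formal_role.mean():.2f}, "
      f"informal mean={informal_role.mean():.2f}, p={p_val:.3f}")

# Fisher's exact test on a 2x2 table of program characteristics, e.g.,
# university vs community programs by presence of on-site overnight coverage
# (counts are illustrative only).
table = [[14, 6],  # university programs: coverage yes / no
         [7, 7]]   # community programs: coverage yes / no
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact: OR={odds_ratio:.2f}, p={p_fisher:.3f}")
```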

RESULTS

The survey was sent to 47 individuals identified as likely hospitalist program leaders and was completed by 41 (87%). However, 7 respondents turned out not to be program leaders and were therefore excluded, resulting in a final survey response rate of 72% (34/47).

The programs for which we did not obtain responses were similar to respondent programs, and did not include a larger proportion of community-based programs or overrepresent a specific geographic region. Twenty-five (73%) of the 34 hospitalist program leaders were male, with an average age of 44.3 years and an average of 12 years of post-residency training (range, 5-30 years). They reported leading groups with an average of 18 full-time equivalent (FTE) faculty (range, 3-50 persons).

Relationship of Hospitalist Programs With the Residency Program

The majority (32/34, 94%) of respondents described their programs as having traditional housestaff-hospitalist interactions on an attending-covered housestaff teaching service. Other clinical roles for hospitalists included: attending on uncovered (non-housestaff) services (29/34, 85%); nighttime coverage (24/34, 70%); and attending on consult services with housestaff (24/34, 70%). All respondents reported that hospitalist faculty are expected to participate in housestaff teaching or to fulfill other educational roles within the residency training program, such as participating in didactics or educational conferences and serving as advisors. Additionally, the faculty of 30 (88%) programs have a formal evaluative role over the housestaff they supervise on teaching services (eg, membership on a formal housestaff evaluation committee). Finally, 28 (82%) programs have faculty who play administrative roles in the residency program, such as involvement in program leadership or recruitment. Although 63% of the corresponding internal medicine residency programs have a formal housestaff supervision policy, only 43% of program leaders stated that their hospitalists receive formal faculty development on how to provide this supervision to resident trainees. Instead, the majority of hospitalist programs were described as having teaching expectations in the absence of a formal policy.

Twenty-one programs (21/34, 61%) described having an attending hospitalist physician on-site overnight to provide ongoing patient care or admit new patients. Of those with on-site attending coverage, a minority of programs (8/21, 38%) reported a formally defined supervisory role for hospitalists over housestaff trainees during the overnight period. In these 8 programs, this defined role included a requirement for housestaff to present newly admitted patients or to contact hospitalists with questions regarding patient management. Twenty-four percent (5/21) of the programs with nighttime coverage stated that the role of the nocturnal attending was only to cover the non-teaching services, without housestaff interaction or supervision. The remaining programs (8/21, 38%) described only informal interactions between housestaff and hospitalist faculty, without clearly defined expectations for supervision.

Perceptions of New Regulations and Night Work

Hospitalist leaders viewed increased supervision of housestaff both positively and negatively. Leaders were asked their level of agreement with the potential impacts of increased hospitalist nighttime supervision. Of respondents, 85% (27/32) agreed that formal overnight supervision by an attending hospitalist would improve patient safety, and 60% (20/33) agreed that formal overnight supervision would improve trainee-hospitalist relationships. In addition, 60% (20/33) of respondents felt that nighttime supervision of housestaff by faculty hospitalists would improve resident education. However, approximately 40% (13/33) expressed concern that increased on-site hospitalist supervision would hamper resident decision-making autonomy, and 75% (25/33) agreed that a formal housestaff supervisory role would increase hospitalist workload. The perception of increased workload was influenced by a hospitalist program's current supervisory role. Hospitalist programs already providing formal nighttime supervision for housestaff, compared with those with informal or poorly defined faculty roles, were less likely to perceive the new regulations as increasing hospitalist workload (3.72 vs 4.42; P = 0.02). In addition, hospitalist programs with a formal nighttime role were more likely to identify the lack of specific parameters for attending-level contact as a barrier that keeps residents from contacting their supervisors during the overnight period (2.54 vs 3.54; P = 0.03). No differences in perception of the regulations were noted for hospitalist programs that had existing faculty development on clinical supervision.

DISCUSSION

This study provides important information about how academic hospitalists currently contribute to the supervision of internal medicine residents. While academic hospitalist groups frequently have faculty providing clinical care on-site at night, and hospitalists often provide overnight supervision of internal medicine trainees, formal supervision of trainees is not uniform, and few hospitalist groups have a mechanism to provide training or faculty development on how to effectively supervise resident trainees. Hospitalist leaders expressed concern that creating additional formal overnight supervisory responsibilities may add to an already burdened overnight hospitalist. Formalizing this supervisory role, including explicit role definitions and faculty training for trainee supervision, is necessary.

Though our sample size is small, we captured a diverse geographic range of both university and community‐based academic hospitalist programs by surveying group leaders in several distinct regions. We are unable to comment on differences between responding and non‐responding hospitalist programs, but there does not appear to be a systematic difference between these groups.

Our findings are consistent with work describing a lack of structured conceptual frameworks for effectively supervising trainees,7, 8 and, at times, nebulous expectations for hospitalist faculty. We found that the existence of a formal supervisory policy within the associated residency program, as well as defined roles for hospitalists, increases the likelihood of positive perceptions of the new ACGME supervisory recommendations. However, the existence of these requirements does not mean that all programs are capable of following them. While additional discussion is required to best delineate a formal overnight hospitalist role in trainee supervision, clearly defining expectations for both faculty and trainees, and for their interactions, may alleviate the struggles that exist in programs with ill-defined roles for hospitalist faculty supervision. Although faculty duty-hour standards do not exist, the additional duty of nighttime coverage for hospitalists suggests that close attention should be paid to burnout.9 Faculty development on nighttime supervision and teaching may help maximize both learning and patient care efficiency, and provide a framework for this often unstructured educational time.

Acknowledgements

The research reported here was supported by the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service (REA 05‐129, CDA 07‐022). The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs.

References
  1. Philibert I, Friedman P, Williams WT. New requirements for resident duty hours. JAMA. 2002;288:1112-1114.
  2. Nuckols T, Bhattacharya J, Wolman DM, Ulmer C, Escarce J. Cost implications of reduced work hours and workloads for resident physicians. N Engl J Med. 2009;360:2202-2215.
  3. Horwitz L. Why have working hour restrictions apparently not improved patient safety? BMJ. 2011;342:d1200.
  4. Ulmer C, Wolman DM, Johns MME, eds. Resident Duty Hours: Enhancing Sleep, Supervision, and Safety. Washington, DC: National Academies Press; 2008.
  5. Nasca TJ, Day SH, Amis ES; for the ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363.
  6. Association of Program Directors in Internal Medicine (APDIM) Survey 2009. Available at: http://www.im.org/toolbox/surveys/SurveyDataandReports/APDIMSurveyData/Documents/2009_APDIM_summary_web.pdf. Accessed July 30, 2012.
  7. Kennedy TJ, Lingard L, Baker GR, Kitchen L, Regehr G. Clinical oversight: conceptualizing the relationship between supervision and safety. J Gen Intern Med. 2007;22(8):1080-1085.
  8. Farnan JM, Johnson JK, Meltzer DO, et al. Strategies for effective on-call supervision for internal medicine residents: the SUPERB/SAFETY model. J Grad Med Educ. 2010;2(1):46-52.
  9. Glasheen J, Misky G, Reid M, Harrison R, Sharpe B, Auerbach A. Career satisfaction and burn-out in academic hospital medicine. Arch Intern Med. 2011;171(8):782-785.
Issue
Journal of Hospital Medicine - 7(7)
Page Number
521-523
Article Source
Copyright © 2012 Society of Hospital Medicine
Correspondence Location
Department of Medicine and Pritzker School of Medicine, The University of Chicago, 5841 S Maryland Ave, MC 2007, AMB W216, Chicago, IL 60637