Portable Ultrasound Device Usage and Learning Outcomes Among Internal Medicine Trainees: A Parallel-Group Randomized Trial
Point-of-care ultrasonography (POCUS) can transform healthcare delivery through its diagnostic and therapeutic expediency.1 POCUS has been shown to bolster diagnostic accuracy, reduce procedural complications, decrease inpatient length of stay, and improve patient satisfaction by encouraging the physician to be present at the bedside.2-8
POCUS has become widespread across a variety of clinical settings as more investigations have demonstrated its positive impact on patient care.1,9-12 This includes trainees, who now use the technology as part of their patient assessments.13,14 However, trainees may be performing these examinations with minimal oversight, and outside of emergency medicine, there are few guidelines on how to effectively teach POCUS or measure competency.13,14 While POCUS is rapidly becoming part of inpatient care, teaching physicians may have little ultrasound experience or the expertise to adequately supervise trainees.14 There is a growing need to study what trainees can learn and how this knowledge is acquired.
Previous investigations have demonstrated that inexperienced users can be taught to use POCUS to identify a variety of pathological states.2,3,15-23 Most of these curricula used a single lecture series as their pedagogical vehicle, and they variably included junior medical trainees. More importantly, the investigations did not explore whether personal access to handheld ultrasound devices (HUDs) improved learning. In theory, improved access to POCUS devices increases opportunities for authentic and deliberate practice, which may be needed to improve trainee skill with POCUS beyond the classroom setting.14
This study aimed to address several ongoing gaps in knowledge related to learning POCUS. First, we hypothesized that personal HUD access would improve trainees’ POCUS-related knowledge and interpretive ability as a result of increased practice opportunities. Second, we hypothesized that trainees who receive personal access to HUDs would be more likely to perform POCUS examinations and feel more confident in their interpretations. Finally, we hypothesized that repeated exposure to POCUS-related lectures would result in greater improvements in knowledge as compared with a single lecture series.
METHODS
Participants and Setting
The 2017 intern class (n = 47) at an academic internal medicine residency program participated in the study. Control data were obtained from the 2016 intern class (historical control; n = 50) and the 2018 intern class (contemporaneous control; n = 52). The Stanford University Institutional Review Board approved this study.
Study Design
The 2017 intern class (n = 47) received POCUS didactics from June 2017 to June 2018. To evaluate whether increased access to HUDs improved learning outcomes, the 2017 interns were randomized 1:1 to receive a personal HUD that could be used for patient care and/or self-directed learning (n = 24) vs no-HUD (n = 23; Figure). Learning outcomes were assessed over the course of 1 year (see “Outcomes” below) and were compared with the 2016 and 2018 controls. The 2016 intern class had completed a year of training but had not received formalized POCUS didactics (historical control), whereas the 2018 intern class was assessed at the beginning of their year (contemporaneous control; Figure). To compare groups at equivalent levels of training, baseline data for the 2017 intern class were compared with the 2018 intern class, whereas end-of-study data for the 2017 interns were compared with the 2016 interns.
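For illustration, an allocation of this kind can be generated in a few lines of R (the language used for this study’s analyses; see “Statistical Analysis”). This is a minimal sketch with a hypothetical seed and anonymized identifiers, not the study’s actual randomization procedure.

```r
# Minimal sketch of a 1:1 randomization of 47 interns to HUD vs no-HUD.
# The seed and identifiers are hypothetical assumptions for illustration.
set.seed(2017)                                    # hypothetical seed
interns <- sprintf("intern_%02d", 1:47)           # anonymized IDs (assumed)
arm <- sample(rep(c("HUD", "no-HUD"), length.out = 47))
allocation <- data.frame(id = interns, arm = arm)
table(allocation$arm)                             # 24 HUD vs 23 no-HUD, as reported
```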
Outcomes
The primary outcome was the difference in assessment scores at the end of the study period between interns randomized to receive a HUD and those who were not. Secondary outcomes included differences in HUD usage rates, lecture attendance, and assessment scores. To assess whether repeated lecture exposure resulted in greater learning, we evaluated assessment score improvements after each lecture block. Finally, trainee attitudes toward POCUS and their confidence in their interpretive ability were measured at the beginning and end of the study period.
Curriculum Implementation
The lectures were administered as once-weekly, 1-hour didactics to interns rotating on the inpatient wards rotation. This rotation is 4 weeks long, and each intern completes it two to four times per year. Each lecture contained two parts: (1) 20-30 minutes of didactics via Microsoft PowerPoint™ and (2) 30-40 minutes of supervised practice using HUDs on standardized patients. Four lectures were given each month: (1) introduction to POCUS and ultrasound physics, (2) thoracic/lung ultrasound, (3) echocardiography, and (4) abdominal POCUS. The lectures consisted of contrasting cases of normal/abnormal videos and clinical vignettes. These four lectures were repeated each month as new interns rotated on service. Some interns therefore encountered the same content multiple times; this was intentional, allowing assessment of their rates of learning over time. Lecture contents were based on previously published guidelines and expert consensus for teaching POCUS in internal medicine.13,24-26 Content from the Accreditation Council for Graduate Medical Education (ACGME) and the American College of Emergency Physicians (ACEP) was also incorporated because these organizations had published relevant guidelines for teaching POCUS.13,26 The lectures were developed further through review of previously described POCUS-relevant curricula.27-32
Handheld Ultrasound Devices
This study used the Philips Lumify™, a United States Food and Drug Administration–approved device. Interns randomized to HUDs received their own device at the start of the rotation; use of the device outside the course was at their discretion. All devices were approved for patient use and were encrypted in compliance with our information security office. For privacy reasons, saved patient images were not reviewed by the researchers. Interns were encouraged to share their findings with supervising physicians during rounds, but actual oversight was not measured. Interns not randomized to HUDs could access a single community device shared among all residents and fellows in the hospital. Interns reported the average number of POCUS examinations performed each week via a survey sent during the last week of the rotation.
Assessment Design and Implementation
Assessments evaluating trainee knowledge were administered before, during, and after the study period (Figure). For the 2017 cohort, assessments were also administered at the start and end of the ward month to track knowledge acquisition. Assessment contents were selected from POCUS guidelines for internal medicine and adapted from the ACGME and ACEP guidelines.13,24,26 Additional content was obtained from major society POCUS tutorials and deidentified images collected by the study authors.13,24,33 In keeping with previously described methodology, each image was shown for approximately 12 seconds, followed by 5 additional seconds for the learner to answer the question.32 Final assessment contents were determined by the authors using the Delphi method.34 A sample assessment can be found in the Appendix Material.
Surveys
Surveys were administered alongside the assessments to the 2016-2018 intern classes. These surveys assessed trainee attitudes toward POCUS and were based on previously validated assessments.27,28,30 Attitudes were measured using 5-point Likert scales.
Statistical Analysis
For the primary outcome, we performed generalized binomial mixed-effects regressions with the survey period, randomization group, and their interaction as independent variables, adjusting for lecture attendance and controlling for intra-intern correlation. A bivariate unadjusted analysis was performed to display the distribution of overall correctness on the assessments. The Wilcoxon signed-rank test was used to compare dependent (paired) scores. Analyses were performed in R (R Foundation for Statistical Computing, Vienna, Austria).
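A minimal sketch of the primary model described above, assuming the lme4 package and a hypothetical long-format data frame (responses) with one row per assessment item; the variable names are illustrative assumptions, not the authors’ actual code.

```r
library(lme4)

# 'responses' (hypothetical): correct (0/1), period (survey period),
# group (HUD vs no-HUD), attendance (lecture attendance), intern (ID).
fit <- glmer(
  correct ~ period * group + attendance + (1 | intern),  # random intercept captures intra-intern correlation
  data = responses,
  family = binomial
)
summary(fit)

# Paired comparisons of dependent scores (eg, preblock vs postblock):
# wilcox.test(score_pre, score_post, paired = TRUE)
```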
RESULTS
Baseline Characteristics
A total of 149 interns participated in this study (Figure). Assessment/survey completion rates were as follows: 2016 control, 68.0%; 2017 preintervention, 97.9%; 2017 postintervention, 89.4%; and 2018 control, 100%. The two randomized arms of the 2017 class reported similar amounts of prior POCUS exposure in medical school (Table 1).
Primary Outcome: Assessment Scores (HUD vs no HUD)
There were no significant differences in assessment scores at the end of the study between interns randomized to personal HUD access and those randomized to no-HUD access (Table 1). HUD interns reported performing POCUS assessments on patients a mean of 6.8 (standard deviation [SD], 2.2) times per week vs 6.4 (SD, 2.9) times per week in the no-HUD arm (P = .66). Mean lecture attendance was 75.0% and did not significantly differ between the two arms (Table 1).
Secondary Outcomes
Impact of Repeating Lectures
The 2017 interns demonstrated significant increases in preblock vs postblock assessment scores after first-time exposure to the lectures (median preblock score 0.61 [interquartile range (IQR), 0.53-0.70] vs postblock score 0.81 [IQR, 0.72-0.86]; P < .001; Table 2). However, intern performance on the preblock vs postblock assessments after second-time exposure to the curriculum failed to improve (median second preblock score 0.78 [IQR, 0.69-0.83] vs postblock score 0.81 [IQR, 0.64-0.89]; P = .94). Intern performance on individual domains of knowledge for each block is listed in Appendix Table 1.
Intervention Performance vs Controls
The 2016 historical control had significantly higher scores compared with the 2017 preintervention group (P < .001; Appendix Table 2). The year-long lecture series resulted in significant increases in median scores for the 2017 group (median preintervention score 0.55 [IQR, 0.41-0.61] vs median postintervention score 0.84 [IQR, 0.71-0.90]; P = .006; Appendix Table 1). At the end of the study, the 2017 postintervention scores were significantly higher across multiple knowledge domains compared with the 2016 historical control (Appendix Table 2).
Survey Results
Notably, at the end of the intervention, the 2017 intern class did not have significantly different assessment scores for several disease-specific domains compared with the 2016 control (Appendix Table 2). Nonetheless, the 2017 intern class reported higher levels of confidence in these same domains despite similar scores (Supplementary Figure). The HUD group cited a lack of confidence in their abilities as a barrier to performing POCUS examinations far less often (17.6%) than the no-HUD group (50.0%), despite nearly identical assessment scores between the two groups (Table 1).
DISCUSSION
Previous guidelines have recommended increased HUD access for learners,13,24,35,36 but few investigations have evaluated the impact of such access on learning POCUS. One previous investigation found that hospitalists who carried HUDs were more likely to identify heart failure on bedside examination.37 In contrast, our study found no improvement in interpretive ability when randomizing interns to carry HUDs for patient care. Notably, interns did not perform more POCUS examinations when given HUDs. We offer several explanations for this finding. First, time-motion studies have demonstrated that internal medicine interns spend less than 15% of their time on direct patient care.38 It is possible that the demands of being an intern impeded their ability to perform more POCUS examinations on their patients, regardless of HUD access. Alternatively, the interns randomized to no personal access may have used the community device more frequently as a result of the lecture series. Given the cost of HUDs, further studies are needed to assess the degree to which HUD access improves trainee interpretive ability, especially as more training programs consider creating ultrasound curricula.10,11,24,39,40
This study was unique because it followed interns over a year-long course that repeated the same material to assess rates of learning with repeated exposure. Learners improved their scores after the first, but not second, block. Furthermore, the median scores were nearly identical between the first postblock assessment and second preblock assessment (0.81 vs 0.78), suggesting that knowledge was retained between blocks. Together, these findings suggest there may be limitations of traditional lectures that use standardized patient models for practice. Supplementary pedagogies, such as in-the-moment feedback with actual patients, may be needed to promote mastery.14,35
Despite having no formal curriculum, the 2016 intern class (historical control) had learned POCUS to some degree, as evidenced by their higher assessment scores compared with the 2017 intern class during the preintervention period. Such learning may be informal, and yet trainees may feel confident in making clinical decisions without formalized training, accreditation, or oversight. As suggested by this study, adding regular didactics or giving trainees HUDs may not immediately solve this issue. For assessment items on which the 2017 interns did not significantly differ from the controls, they nonetheless reported higher confidence in their abilities. Similarly, interns randomized to HUDs less frequently cited a lack of confidence in their abilities, despite scores similar to the no-HUD group. Such confidence may be incongruent with their actual knowledge or ability to safely use POCUS. This phenomenon of misplaced confidence is known as the Dunning-Kruger effect, and it may be common in ultrasound learning.41 While confidence can be part of a holistic definition of competency,14 these results raise the concern that trainees may have difficulty assessing their own competency with POCUS.35
There are several limitations to this study. It was performed at a single institution with a limited sample size. It examined only intern physicians because of funding constraints, which limits the generalizability of these findings to other medical trainees. Assessments of hands-on technical ability (ie, acquiring and interpreting one’s own images) were not included. We were unable to track the timing or location of device usage, and the interns’ self-reported usage rates may be subject to recall bias. To our knowledge, there were no significant lapses in device availability or functionality. Intern physicians in the HUD arm did not receive formal feedback on personally acquired patient images, which may have limited the intervention’s impact.
In conclusion, internal medicine interns who received personal HUDs were no better at recognizing normal/abnormal findings on image assessments, and they did not report performing more POCUS examinations. Because only a minority of a trainee’s time is spent on direct patient care, offering trainees HUDs without substantial guidance may not be enough to promote mastery. Notably, trainees who received HUDs felt more confident in their abilities despite no objective increase in their actual skill. Finally, interns who received POCUS-related lectures experienced significant benefit upon first exposure to the material, while repeated exposures did not improve performance. Future investigations should stringently track trainee POCUS usage rates with HUDs and assess whether image acquisition ability improves as a result of personal access.
1. Moore CL, Copel JA. Point-of-care ultrasonography. N Engl J Med. 2011;364(8):749-757. https://doi.org/10.1056/NEJMra0909487.
2. Akkaya A, Yesilaras M, Aksay E, Sever M, Atilla OD. The interrater reliability of ultrasound imaging of the inferior vena cava performed by emergency residents. Am J Emerg Med. 2013;31(10):1509-1511. https://doi.org/10.1016/j.ajem.2013.07.006.
3. Razi R, Estrada JR, Doll J, Spencer KT. Bedside hand-carried ultrasound by internal medicine residents versus traditional clinical assessment for the identification of systolic dysfunction in patients admitted with decompensated heart failure. J Am Soc Echocardiogr. 2011;24(12):1319-1324. https://doi.org/10.1016/j.echo.2011.07.013.
4. Dodge KL, Lynch CA, Moore CL, Biroscak BJ, Evans LV. Use of ultrasound guidance improves central venous catheter insertion success rates among junior residents. J Ultrasound Med. 2012;31(10):1519-1526. https://doi.org/10.7863/jum.2012.31.10.1519.
5. Cavanna L, Mordenti P, Bertè R, et al. Ultrasound guidance reduces pneumothorax rate and improves safety of thoracentesis in malignant pleural effusion: Report on 445 consecutive patients with advanced cancer. World J Surg Oncol. 2014;12:139. https://doi.org/10.1186/1477-7819-12-139.
6. Testa A, Francesconi A, Giannuzzi R, Berardi S, Sbraccia P. Economic analysis of bedside ultrasonography (US) implementation in an Internal Medicine department. Intern Emerg Med. 2015;10(8):1015-1024. https://doi.org/10.1007/s11739-015-1320-7.
7. Howard ZD, Noble VE, Marill KA, et al. Bedside ultrasound maximizes patient satisfaction. J Emerg Med. 2014;46(1):46-53. https://doi.org/10.1016/j.jemermed.2013.05.044.
8. Park YH, Jung RB, Lee YG, et al. Does the use of bedside ultrasonography reduce emergency department length of stay for patients with renal colic? A pilot study. Clin Exp Emerg Med. 2016;3(4):197-203. https://doi.org/10.15441/ceem.15.109.
9. Glomb N, D’Amico B, Rus M, Chen C. Point-of-care ultrasound in resource-limited settings. Clin Pediatr Emerg Med. 2015;16(4):256-261. https://doi.org/10.1016/j.cpem.2015.10.001.
10. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. medical schools: results of a national survey. Acad Med. 2014;89(12):1681-1686. https://doi.org/10.1097/ACM.0000000000000414.
11. Hall JWW, Holman H, Bornemann P, et al. Point of care ultrasound in family medicine residency programs: A CERA study. Fam Med. 2015;47(9):706-711.
12. Schnobrich DJ, Gladding S, Olson APJ, Duran-Nelson A. Point-of-care ultrasound in internal medicine: A national survey of educational leadership. J Grad Med Educ. 2013;5(3):498-502. https://doi.org/10.4300/JGME-D-12-00215.1.
13. Stolz LA, Stolz U, Fields JM, et al. Emergency medicine resident assessment of the emergency ultrasound milestones and current training recommendations. Acad Emerg Med. 2017;24(3):353-361. https://doi.org/10.1111/acem.13113.
14. Kumar A, Jensen T, Kugler J. Evaluation of trainee competency with point-of-care ultrasonography (POCUS): A conceptual framework and review of existing assessments. J Gen Intern Med. 2019;34(6):1025-1031. https://doi.org/10.1007/s11606-019-04945-4.
15. Levitov A, Frankel HL, Blaivas M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients—part ii: Cardiac ultrasonography. Crit Care Med. 2016;44(6):1206-1227. https://doi.org/10.1097/CCM.0000000000001847.
16. Kobal SL, Trento L, Baharami S, et al. Comparison of effectiveness of hand-carried ultrasound to bedside cardiovascular physical examination. Am J Cardiol. 2005;96(7):1002-1006. https://doi.org/10.1016/j.amjcard.2005.05.060.
17. Ceriani E, Cogliati C. Update on bedside ultrasound diagnosis of pericardial effusion. Intern Emerg Med. 2016;11(3):477-480. https://doi.org/10.1007/s11739-015-1372-8.
18. Labovitz AJ, Noble VE, Bierig M, et al. Focused cardiac ultrasound in the emergent setting: A consensus statement of the American Society of Echocardiography and American College of Emergency Physicians. J Am Soc Echocardiogr. 2010;23(12):1225-1230. https://doi.org/10.1016/j.echo.2010.10.005.
19. Keil-Ríos D, Terrazas-Solís H, González-Garay A, Sánchez-Ávila JF, García-Juárez I. Pocket ultrasound device as a complement to physical examination for ascites evaluation and guided paracentesis. Intern Emerg Med. 2016;11(3):461-466. https://doi.org/10.1007/s11739-016-1406-x.
20. Riddell J, Case A, Wopat R, et al. Sensitivity of emergency bedside ultrasound to detect hydronephrosis in patients with computed tomography–proven stones. West J Emerg Med. 2014;15(1):96-100. https://doi.org/10.5811/westjem.2013.9.15874.
21. Dalziel PJ, Noble VE. Bedside ultrasound and the assessment of renal colic: A review. Emerg Med J. 2013;30(1):3-8. https://doi.org/10.1136/emermed-2012-201375.
22. Whitson MR, Mayo PH. Ultrasonography in the emergency department. Crit Care. 2016;20(1):227. https://doi.org/10.1186/s13054-016-1399-x.
23. Kumar A, Liu G, Chi J, Kugler J. The role of technology in the bedside encounter. Med Clin North Am. 2018;102(3):443-451. https://doi.org/10.1016/j.mcna.2017.12.006.
24. Ma IWY, Arishenkoff S, Wiseman J, et al. Internal medicine point-of-care ultrasound curriculum: Consensus recommendations from the Canadian Internal Medicine Ultrasound (CIMUS) Group. J Gen Intern Med. 2017;32(9):1052-1057. https://doi.org/10.1007/s11606-017-4071-5.
25. Sabath BF, Singh G. Point-of-care ultrasonography as a training milestone for internal medicine residents: The time is now. J Community Hosp Intern Med Perspect. 2016;6(5):33094. https://doi.org/10.3402/jchimp.v6.33094.
26. American College of Emergency Physicians. Ultrasound guidelines: emergency, point-of-care and clinical ultrasound guidelines in medicine. Ann Emerg Med. 2017;69(5):e27-e54. https://doi.org/10.1016/j.annemergmed.2016.08.457.
27. Ramsingh D, Rinehart J, Kain Z, et al. Impact assessment of perioperative point-of-care ultrasound training on anesthesiology residents. Anesthesiology. 2015;123(3):670-682. https://doi.org/10.1097/ALN.0000000000000776.
28. Keddis MT, Cullen MW, Reed DA, et al. Effectiveness of an ultrasound training module for internal medicine residents. BMC Med Educ. 2011;11:75. https://doi.org/10.1186/1472-6920-11-75.
29. Townsend NT, Kendall J, Barnett C, Robinson T. An effective curriculum for focused assessment diagnostic echocardiography: Establishing the learning curve in surgical residents. J Surg Educ. 2016;73(2):190-196. https://doi.org/10.1016/j.jsurg.2015.10.009.
30. Hoppmann RA, Rao VV, Bell F, et al. The evolution of an integrated ultrasound curriculum (iUSC) for medical students: 9-year experience. Crit Ultrasound J. 2015;7(1):18. https://doi.org/10.1186/s13089-015-0035-3.
31. Skalski JH, Elrashidi M, Reed DA, McDonald FS, Bhagra A. Using standardized patients to teach point-of-care ultrasound–guided physical examination skills to internal medicine residents. J Grad Med Educ. 2015;7(1):95-97. https://doi.org/10.4300/JGME-D-14-00178.1.
32. Chisholm CB, Dodge WR, Balise RR, Williams SR, Gharahbaghian L, Beraud A-S. Focused cardiac ultrasound training: How much is enough? J Emerg Med. 2013;44(4):818-822. https://doi.org/10.1016/j.jemermed.2012.07.092.
33. Schmidt GA, Schraufnagel D. Introduction to ATS seminars: Intensive care ultrasound. Ann Am Thorac Soc. 2013;10(5):538-539. https://doi.org/10.1513/AnnalsATS.201306-203ED.
34. Skaarup SH, Laursen CB, Bjerrum AS, Hilberg O. Objective and structured assessment of lung ultrasound competence. A multispecialty Delphi consensus and construct validity study. Ann Am Thorac Soc. 2017;14(4):555-560. https://doi.org/10.1513/AnnalsATS.201611-894OC.
35. Lucas BP, Tierney DM, Jensen TP, et al. Credentialing of hospitalists in ultrasound-guided bedside procedures: A position statement of the Society of Hospital Medicine. J Hosp Med. 2018;13(2):117-125. https://doi.org/10.12788/jhm.2917.
36. Frankel HL, Kirkpatrick AW, Elbarbary M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients-part i: General ultrasonography. Crit Care Med. 2015;43(11):2479-2502. https://doi.org/10.1097/CCM.0000000000001216.
37. Martin LD, Howell EE, Ziegelstein RC, et al. Hand-carried ultrasound performed by hospitalists: Does it improve the cardiac physical examination? Am J Med. 2009;122(1):35-41. https://doi.org/10.1016/j.amjmed.2008.07.022.
38. Desai SV, Asch DA, Bellini LM, et al. Education outcomes in a duty-hour flexibility trial in internal medicine. N Engl J Med. 2018;378(16):1494-1508. https://doi.org/10.1056/NEJMoa1800965.
39. Baltarowich OH, Di Salvo DN, Scoutt LM, et al. National ultrasound curriculum for medical students. Ultrasound Q. 2014;30(1):13-19. https://doi.org/10.1097/RUQ.0000000000000066.
40. Beal EW, Sigmond BR, Sage-Silski L, Lahey S, Nguyen V, Bahner DP. Point-of-care ultrasound in general surgery residency training: A proposal for milestones in graduate medical education ultrasound. J Ultrasound Med. 2017;36(12):2577-2584. https://doi.org/10.1002/jum.14298.
41. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol. 1999;77(6):1121-1134. https://doi.org/10.1037//0022-3514.77.6.1121.
Point-of-care ultrasonography (POCUS) can transform healthcare delivery through its diagnostic and therapeutic expediency.1 POCUS has been shown to bolster diagnostic accuracy, reduce procedural complications, decrease inpatient length of stay, and improve patient satisfaction by encouraging the physician to be present at the bedside.2-8
POCUS has become widespread across a variety of clinical settings as more investigations have demonstrated its positive impact on patient care.1,9-12 This includes the use of POCUS by trainees, who are now utilizing this technology as part of their assessments of patients.13,14 However, trainees may be performing these examinations with minimal oversight, and outside of emergency medicine, there are few guidelines on how to effectively teach POCUS or measure competency.13,14 While POCUS is rapidly becoming a part of inpatient care, teaching physicians may have little experience in ultrasound or the expertise to adequately supervise trainees.14 There is a growing need to study what trainees can learn and how this knowledge is acquired.
Previous investigations have demonstrated that inexperienced users can be taught to use POCUS to identify a variety of pathological states.2,3,15-23 Most of these curricula used a single lecture series as their pedagogical vehicle, and they variably included junior medical trainees. More importantly, the investigations did not explore whether personal access to handheld ultrasound devices (HUDs) improved learning. In theory, improved access to POCUS devices increases opportunities for authentic and deliberate practice, which may be needed to improve trainee skill with POCUS beyond the classroom setting.14
This study aimed to address several ongoing gaps in knowledge related to learning POCUS. First, we hypothesized that personal HUD access would improve trainees’ POCUS-related knowledge and interpretive ability as a result of increased practice opportunities. Second, we hypothesized that trainees who receive personal access to HUDs would be more likely to perform POCUS examinations and feel more confident in their interpretations. Finally, we hypothesized that repeated exposure to POCUS-related lectures would result in greater improvements in knowledge as compared with a single lecture series.
METHODS
Participants and Setting
The 2017 intern class (n = 47) at an academic internal medicine residency program participated in the study. Control data were obtained from the 2016 intern class (historical control; n = 50) and the 2018 intern class (contemporaneous control; n = 52). The Stanford University Institutional Review Board approved this study.
Study Design
The 2017 intern class (n = 47) received POCUS didactics from June 2017 to June 2018. To evaluate if increased access to HUDs improved learning outcomes, the 2017 interns were randomized 1:1 to receive their own personal HUD that could be used for patient care and/or self-directed learning (n = 24) vs no-HUD (n = 23; Figure). Learning outcomes were assessed over the course of 1 year (see “Outcomes” below) and were compared with the 2016 and 2018 controls. The 2016 intern class had completed a year of training but had not received formalized POCUS didactics (historical control), whereas the 2018 intern class was assessed at the beginning of their year (contemporaneous control; Figure). In order to make comparisons based on intern experience, baseline data for the 2017 intern class were compared with the 2018 intern class, whereas end-of-study data for 2017 interns were compared with 2016 interns.
Outcomes
The primary outcome was the difference in assessment scores at the end of the study period between interns randomized to receive a HUD and those who were not. Secondary outcomes included differences in HUD usage rates, lecture attendance, and assessment scores. To assess whether repeated lecture exposure resulted in greater amounts of learning, this study evaluated for assessment score improvements after each lecture block. Finally, trainee attitudes toward POCUS and their confidence in their interpretative ability were measured at the beginning and end of the study period.
Curriculum Implementation
The lectures were administered as once-weekly didactics of 1-hour duration to interns rotating on the inpatient wards rotation. This rotation is 4 weeks long, and each intern will experience the rotation two to four times per year. Each lecture contained two parts: (1) 20-30 minutes of didactics via Microsoft PowerPointTM and (2) 30-40 minutes of supervised practice using HUDs on standardized patients. Four lectures were given each month: (1) introduction to POCUS and ultrasound physics, (2) thoracic/lung ultrasound, (3) echocardiography, and (4) abdominal POCUS. The lectures consisted of contrasting cases of normal/abnormal videos and clinical vignettes. These four lectures were repeated each month as new interns rotated on service. Some interns experienced the same content multiple times, which was intentional in order to assess their rates of learning over time. Lecture contents were based on previously published guidelines and expert consensus for teaching POCUS in internal medicine.13, 24-26 Content from the Accreditation Council for Graduate Medical Education (ACGME) and the American College of Emergency Physicians (ACEP) was also incorporated because these organizations had published relevant guidelines for teaching POCUS.13,26 Further development of the lectures occurred through review of previously described POCUS-relevant curricula.27-32
Handheld Ultrasound Devices
This study used the Philips LumifyTM, a United States Food and Drug Administration–approved device. Interns randomized to HUDs received their own device at the start of the rotation. It was at their discretion to use the device outside of the course. All devices were approved for patient use and were encrypted in compliance with our information security office. For privacy reasons, any saved patient images were not reviewed by the researchers. Interns were encouraged to share their findings with supervising physicians during rounds, but actual oversight was not measured. Interns not randomized to HUDs could access a single community device that was shared among all residents and fellows in the hospital. Interns reported the average number of POCUS examinations performed each week via a survey sent during the last week of the rotation.
Assessment Design and Implementation
Assessments evaluating trainee knowledge were administered before, during, and after the study period (Figure). For the 2017 cohort, assessments were also administered at the start and end of the ward month to track knowledge acquisition. Assessment contents were selected from POCUS guidelines for internal medicine and adaptation of the ACGME and ACEP guidelines.13,24,26 Additional content was obtained from major society POCUS tutorials and deidentified images collected by the study authors.13,24,33 In keeping with previously described methodology, the images were shown for approximately 12 seconds, followed by five additional seconds to allow the learner to answer the question.32 Final assessment contents were determined by the authors using the Delphi method.34 A sample assessment can be found in the Appendix Material.
Surveys
Surveys were administered alongside the assessments to the 2016-2018 intern classes. These surveys assessed trainee attitudes toward POCUS and were based on previously validated assessments.27,28,30 Attitudes were measured using 5-point Likert scales.
Statistical Analysis
For the primary outcome, we performed generalized binomial mixed-effect regressions using the survey periods, randomization group, and the interaction of the two as independent variables after adjusting for attendance and controlling of intra-intern correlations. The bivariate unadjusted analysis was performed to display the distribution of overall correctness on the assessments. Wilcoxon signed rank test was used to determine score significance for dependent score variables (R-Statistical Programming Language, Vienna, Austria).
RESULTS
Baseline Characteristics
There were 149 interns who participated in this study (Figure). Assessment/survey completion rates were as follows: 2016 control: 68.0%; 2017 preintervention: 97.9%; 2017 postintervention: 89.4%; and 2018 control: 100%. The 2017 interns reported similar amounts of prior POCUS exposure in medical school (Table 1).
Primary Outcome: Assessment Scores (HUD vs no HUD)
There were no significant differences in assessment scores at the end of the study between interns randomized to personal HUD access vs those to no-HUD access (Table 1). HUD interns reported performing POCUS assessments on patients a mean 6.8 (standard deviation [SD] 2.2) times per week vs 6.4 (SD 2.9) times per week in the no-HUD arm (P = .66). The mean lecture attendance was 75.0% and did not significantly differ between the HUD arms (Table 1).
Secondary Outcomes
Impact of Repeating Lectures
The 2017 interns demonstrated significant increases in preblock vs postblock assessment scores after first-time exposure to the lectures (median preblock score 0.61 [interquartile range (IQR), 0.53-0.70] vs postblock score 0.81 [IQR, 0.72-0.86]; P < .001; Table 2). However, intern performance on the preblock vs postblock assessments after second-time exposure to the curriculum failed to improve (median second preblock score 0.78 [IQR, 0.69-0.83] vs postblock score 0.81 [IQR, 0.64-0.89]; P = .94). Intern performance on individual domains of knowledge for each block is listed in Appendix Table 1.
Intervention Performance vs Controls
The 2016 historical control had significantly higher scores compared with the 2017 preintervention group (P < .001; Appendix Table 2). The year-long lecture series resulted in significant increases in median scores for the 2017 group (median preintervention score 0.55 [0.41-0.61] vs median postintervention score 0.84 [0.71-0.90]; P = .006; Appendix Table 1). At the end of the study, the 2017 postintervention scores were significantly higher across multiple knowledge domains compared with the 2016 historical control (Appendix Table 2).
Survey Results
Notably, the 2017 intern class at the end of the intervention did not have significantly different assessment scores for several disease-specific domains, compared with the 2016 control (Appendix Table 2). Nonetheless, the 2017 intern class reported higher levels of confidence in these same domains despite similar scores (Supplementary Figure). The HUD group seldomly cited a lack of confidence in their abilities as a barrier to performing POCUS examinations (17.6%), compared with the no-HUD group (50.0%), despite nearly identical assessment scores between the two groups (Table 1).
DISCUSSION
Previous guidelines have recommended increased HUD access for learners,13,24,35,36 but there have been few investigations that have evaluated the impact of such access on learning POCUS. One previous investigation found that hospitalists who carried HUDs were more likely to identify heart failure on bedside examination.37 In contrast, our study found no improvement in interpretative ability when randomizing interns to carry HUDs for patient care. Notably, interns did not perform more POCUS examinations when given HUDs. We offer several explanations for this finding. First, time-motion studies have demonstrated that internal medicine interns spend less than 15% of their time toward direct patient care.38 It is possible that the demands of being an intern impeded their ability to perform more POCUS examinations on their patients, regardless of HUD access. Alternatively, the interns randomized to no personal access may have used the community device more frequently as a result of the lecture series. Given the cost of HUDs, further studies are needed to assess the degree to which HUD access will improve trainee interpretive ability, especially as more training programs consider the creation of ultrasound curricula.10,11,24,39,40
This study was unique because it followed interns over a year-long course that repeated the same material to assess rates of learning with repeated exposure. Learners improved their scores after the first, but not second, block. Furthermore, the median scores were nearly identical between the first postblock assessment and second preblock assessment (0.81 vs 0.78), suggesting that knowledge was retained between blocks. Together, these findings suggest there may be limitations of traditional lectures that use standardized patient models for practice. Supplementary pedagogies, such as in-the-moment feedback with actual patients, may be needed to promote mastery.14,35
Despite no formal curriculum, the 2016 intern class (historical control) had learned POCUS to some degree based on their higher assessment scores compared with the 2017 intern class during the preintervention period. Such learning may be informal, and yet, trainees may feel confident in making clinical decisions without formalized training, accreditation, or oversight. As suggested by this study, adding regular didactics or giving trainees HUDs may not immediately solve this issue. For assessment items in which the 2017 interns did not significantly differ from the controls, they nonetheless reported higher confidence in their abilities. Similarly, interns randomized to HUDs less frequently cited a lack of confidence in their abilities, despite similar scores to the no-HUD group. Such confidence may be incongruent with their actual knowledge or ability to safely use POCUS. This phenomenon of misplaced confidence is known as the Dunning–Kruger effect, and it may be common with ultrasound learning.41 While confidence can be part of a holistic definition of competency,14 these results raise the concern that trainees may have difficulty assessing their own competency level with POCUS.35
There are several limitations to this study. It was performed at a single institution with limited sample size. It examined only intern physicians because of funding constraints, which limits the generalizability of these findings among medical trainees. Technical ability assessments (including obtaining and interpreting images) were not included. We were unable to track the timing or location of the devices’ usage, and the interns’ self-reported usage rates may be subject to recall bias. To our knowledge, there were no significant lapses in device availability/functionality. Intern physicians in the HUD arm did not receive formal feedback on personally acquired patient images, which may have limited the intervention’s impact.
In conclusion, internal medicine interns who received personal HUDs were not better at recognizing normal/abnormal findings on image assessments, and they did not report performing more POCUS examinations. Since the minority of a trainee’s time is spent toward direct patient care, offering trainees HUDs without substantial guidance may not be enough to promote mastery. Notably, trainees who received HUDs felt more confident in their abilities, despite no objective increase in their actual skill. Finally, interns who received POCUS-related lectures experienced significant benefit upon first exposure to the material, while repeated exposures did not improve performance. Future investigations should stringently track trainee POCUS usage rates with HUDs and assess whether image acquisition ability improves as a result of personal access.
Point-of-care ultrasonography (POCUS) can transform healthcare delivery through its diagnostic and therapeutic expediency.1 POCUS has been shown to bolster diagnostic accuracy, reduce procedural complications, decrease inpatient length of stay, and improve patient satisfaction by encouraging the physician to be present at the bedside.2-8
POCUS has become widespread across a variety of clinical settings as more investigations have demonstrated its positive impact on patient care.1,9-12 This includes the use of POCUS by trainees, who are now utilizing this technology as part of their assessments of patients.13,14 However, trainees may be performing these examinations with minimal oversight, and outside of emergency medicine, there are few guidelines on how to effectively teach POCUS or measure competency.13,14 While POCUS is rapidly becoming a part of inpatient care, teaching physicians may have little experience in ultrasound or the expertise to adequately supervise trainees.14 There is a growing need to study what trainees can learn and how this knowledge is acquired.
Previous investigations have demonstrated that inexperienced users can be taught to use POCUS to identify a variety of pathological states.2,3,15-23 Most of these curricula used a single lecture series as their pedagogical vehicle, and they variably included junior medical trainees. More importantly, the investigations did not explore whether personal access to handheld ultrasound devices (HUDs) improved learning. In theory, improved access to POCUS devices increases opportunities for authentic and deliberate practice, which may be needed to improve trainee skill with POCUS beyond the classroom setting.14
This study aimed to address several ongoing gaps in knowledge related to learning POCUS. First, we hypothesized that personal HUD access would improve trainees’ POCUS-related knowledge and interpretive ability as a result of increased practice opportunities. Second, we hypothesized that trainees who receive personal access to HUDs would be more likely to perform POCUS examinations and feel more confident in their interpretations. Finally, we hypothesized that repeated exposure to POCUS-related lectures would result in greater improvements in knowledge as compared with a single lecture series.
METHODS
Participants and Setting
The 2017 intern class (n = 47) at an academic internal medicine residency program participated in the study. Control data were obtained from the 2016 intern class (historical control; n = 50) and the 2018 intern class (contemporaneous control; n = 52). The Stanford University Institutional Review Board approved this study.
Study Design
The 2017 intern class (n = 47) received POCUS didactics from June 2017 to June 2018. To evaluate if increased access to HUDs improved learning outcomes, the 2017 interns were randomized 1:1 to receive their own personal HUD that could be used for patient care and/or self-directed learning (n = 24) vs no-HUD (n = 23; Figure). Learning outcomes were assessed over the course of 1 year (see “Outcomes” below) and were compared with the 2016 and 2018 controls. The 2016 intern class had completed a year of training but had not received formalized POCUS didactics (historical control), whereas the 2018 intern class was assessed at the beginning of their year (contemporaneous control; Figure). In order to make comparisons based on intern experience, baseline data for the 2017 intern class were compared with the 2018 intern class, whereas end-of-study data for 2017 interns were compared with 2016 interns.
Outcomes
The primary outcome was the difference in assessment scores at the end of the study period between interns randomized to receive a HUD and those who were not. Secondary outcomes included differences in HUD usage rates, lecture attendance, and assessment scores. To assess whether repeated lecture exposure resulted in greater amounts of learning, this study evaluated for assessment score improvements after each lecture block. Finally, trainee attitudes toward POCUS and their confidence in their interpretative ability were measured at the beginning and end of the study period.
Curriculum Implementation
The lectures were administered as once-weekly didactics of 1-hour duration to interns rotating on the inpatient wards rotation. This rotation is 4 weeks long, and each intern will experience the rotation two to four times per year. Each lecture contained two parts: (1) 20-30 minutes of didactics via Microsoft PowerPointTM and (2) 30-40 minutes of supervised practice using HUDs on standardized patients. Four lectures were given each month: (1) introduction to POCUS and ultrasound physics, (2) thoracic/lung ultrasound, (3) echocardiography, and (4) abdominal POCUS. The lectures consisted of contrasting cases of normal/abnormal videos and clinical vignettes. These four lectures were repeated each month as new interns rotated on service. Some interns experienced the same content multiple times, which was intentional in order to assess their rates of learning over time. Lecture contents were based on previously published guidelines and expert consensus for teaching POCUS in internal medicine.13, 24-26 Content from the Accreditation Council for Graduate Medical Education (ACGME) and the American College of Emergency Physicians (ACEP) was also incorporated because these organizations had published relevant guidelines for teaching POCUS.13,26 Further development of the lectures occurred through review of previously described POCUS-relevant curricula.27-32
Handheld Ultrasound Devices
This study used the Philips LumifyTM, a United States Food and Drug Administration–approved device. Interns randomized to HUDs received their own device at the start of the rotation. It was at their discretion to use the device outside of the course. All devices were approved for patient use and were encrypted in compliance with our information security office. For privacy reasons, any saved patient images were not reviewed by the researchers. Interns were encouraged to share their findings with supervising physicians during rounds, but actual oversight was not measured. Interns not randomized to HUDs could access a single community device that was shared among all residents and fellows in the hospital. Interns reported the average number of POCUS examinations performed each week via a survey sent during the last week of the rotation.
Assessment Design and Implementation
Assessments evaluating trainee knowledge were administered before, during, and after the study period (Figure). For the 2017 cohort, assessments were also administered at the start and end of the ward month to track knowledge acquisition. Assessment contents were selected from POCUS guidelines for internal medicine and adaptation of the ACGME and ACEP guidelines.13,24,26 Additional content was obtained from major society POCUS tutorials and deidentified images collected by the study authors.13,24,33 In keeping with previously described methodology, the images were shown for approximately 12 seconds, followed by five additional seconds to allow the learner to answer the question.32 Final assessment contents were determined by the authors using the Delphi method.34 A sample assessment can be found in the Appendix Material.
Surveys
Surveys were administered alongside the assessments to the 2016-2018 intern classes. These surveys assessed trainee attitudes toward POCUS and were based on previously validated assessments.27,28,30 Attitudes were measured using 5-point Likert scales.
Statistical Analysis
For the primary outcome, we performed generalized binomial mixed-effect regressions using the survey periods, randomization group, and the interaction of the two as independent variables after adjusting for attendance and controlling of intra-intern correlations. The bivariate unadjusted analysis was performed to display the distribution of overall correctness on the assessments. Wilcoxon signed rank test was used to determine score significance for dependent score variables (R-Statistical Programming Language, Vienna, Austria).
RESULTS
Baseline Characteristics
There were 149 interns who participated in this study (Figure). Assessment/survey completion rates were as follows: 2016 control: 68.0%; 2017 preintervention: 97.9%; 2017 postintervention: 89.4%; and 2018 control: 100%. The 2017 interns reported similar amounts of prior POCUS exposure in medical school (Table 1).
Primary Outcome: Assessment Scores (HUD vs no HUD)
There were no significant differences in assessment scores at the end of the study between interns randomized to personal HUD access vs those to no-HUD access (Table 1). HUD interns reported performing POCUS assessments on patients a mean 6.8 (standard deviation [SD] 2.2) times per week vs 6.4 (SD 2.9) times per week in the no-HUD arm (P = .66). The mean lecture attendance was 75.0% and did not significantly differ between the HUD arms (Table 1).
Secondary Outcomes
Impact of Repeating Lectures
The 2017 interns demonstrated significant increases in preblock vs postblock assessment scores after first-time exposure to the lectures (median preblock score 0.61 [interquartile range (IQR), 0.53-0.70] vs postblock score 0.81 [IQR, 0.72-0.86]; P < .001; Table 2). However, intern performance on the preblock vs postblock assessments after second-time exposure to the curriculum failed to improve (median second preblock score 0.78 [IQR, 0.69-0.83] vs postblock score 0.81 [IQR, 0.64-0.89]; P = .94). Intern performance on individual domains of knowledge for each block is listed in Appendix Table 1.
Intervention Performance vs Controls
The 2016 historical control had significantly higher scores compared with the 2017 preintervention group (P < .001; Appendix Table 2). The year-long lecture series resulted in significant increases in median scores for the 2017 group (median preintervention score 0.55 [0.41-0.61] vs median postintervention score 0.84 [0.71-0.90]; P = .006; Appendix Table 1). At the end of the study, the 2017 postintervention scores were significantly higher across multiple knowledge domains compared with the 2016 historical control (Appendix Table 2).
Survey Results
Notably, the 2017 intern class at the end of the intervention did not have significantly different assessment scores for several disease-specific domains, compared with the 2016 control (Appendix Table 2). Nonetheless, the 2017 intern class reported higher levels of confidence in these same domains despite similar scores (Supplementary Figure). The HUD group seldomly cited a lack of confidence in their abilities as a barrier to performing POCUS examinations (17.6%), compared with the no-HUD group (50.0%), despite nearly identical assessment scores between the two groups (Table 1).
DISCUSSION
Previous guidelines have recommended increased HUD access for learners,13,24,35,36 but there have been few investigations that have evaluated the impact of such access on learning POCUS. One previous investigation found that hospitalists who carried HUDs were more likely to identify heart failure on bedside examination.37 In contrast, our study found no improvement in interpretative ability when randomizing interns to carry HUDs for patient care. Notably, interns did not perform more POCUS examinations when given HUDs. We offer several explanations for this finding. First, time-motion studies have demonstrated that internal medicine interns spend less than 15% of their time toward direct patient care.38 It is possible that the demands of being an intern impeded their ability to perform more POCUS examinations on their patients, regardless of HUD access. Alternatively, the interns randomized to no personal access may have used the community device more frequently as a result of the lecture series. Given the cost of HUDs, further studies are needed to assess the degree to which HUD access will improve trainee interpretive ability, especially as more training programs consider the creation of ultrasound curricula.10,11,24,39,40
This study was unique because it followed interns over a year-long course that repeated the same material to assess rates of learning with repeated exposure. Learners improved their scores after the first, but not second, block. Furthermore, the median scores were nearly identical between the first postblock assessment and second preblock assessment (0.81 vs 0.78), suggesting that knowledge was retained between blocks. Together, these findings suggest there may be limitations of traditional lectures that use standardized patient models for practice. Supplementary pedagogies, such as in-the-moment feedback with actual patients, may be needed to promote mastery.14,35
Despite no formal curriculum, the 2016 intern class (historical control) had learned POCUS to some degree based on their higher assessment scores compared with the 2017 intern class during the preintervention period. Such learning may be informal, and yet, trainees may feel confident in making clinical decisions without formalized training, accreditation, or oversight. As suggested by this study, adding regular didactics or giving trainees HUDs may not immediately solve this issue. For assessment items in which the 2017 interns did not significantly differ from the controls, they nonetheless reported higher confidence in their abilities. Similarly, interns randomized to HUDs less frequently cited a lack of confidence in their abilities, despite similar scores to the no-HUD group. Such confidence may be incongruent with their actual knowledge or ability to safely use POCUS. This phenomenon of misplaced confidence is known as the Dunning–Kruger effect, and it may be common with ultrasound learning.41 While confidence can be part of a holistic definition of competency,14 these results raise the concern that trainees may have difficulty assessing their own competency level with POCUS.35
There are several limitations to this study. It was performed at a single institution with limited sample size. It examined only intern physicians because of funding constraints, which limits the generalizability of these findings among medical trainees. Technical ability assessments (including obtaining and interpreting images) were not included. We were unable to track the timing or location of the devices’ usage, and the interns’ self-reported usage rates may be subject to recall bias. To our knowledge, there were no significant lapses in device availability/functionality. Intern physicians in the HUD arm did not receive formal feedback on personally acquired patient images, which may have limited the intervention’s impact.
In conclusion, internal medicine interns who received personal HUDs were not better at recognizing normal/abnormal findings on image assessments, and they did not report performing more POCUS examinations. Since the minority of a trainee’s time is spent toward direct patient care, offering trainees HUDs without substantial guidance may not be enough to promote mastery. Notably, trainees who received HUDs felt more confident in their abilities, despite no objective increase in their actual skill. Finally, interns who received POCUS-related lectures experienced significant benefit upon first exposure to the material, while repeated exposures did not improve performance. Future investigations should stringently track trainee POCUS usage rates with HUDs and assess whether image acquisition ability improves as a result of personal access.
1. Moore CL, Copel JA. Point-of-care ultrasonography. N Engl J Med. 2011;364(8):749-757. https://doi.org/10.1056/NEJMra0909487.
2. Akkaya A, Yesilaras M, Aksay E, Sever M, Atilla OD. The interrater reliability of ultrasound imaging of the inferior vena cava performed by emergency residents. Am J Emerg Med. 2013;31(10):1509-1511. https://doi.org/10.1016/j.ajem.2013.07.006.
3. Razi R, Estrada JR, Doll J, Spencer KT. Bedside hand-carried ultrasound by internal medicine residents versus traditional clinical assessment for the identification of systolic dysfunction in patients admitted with decompensated heart failure. J Am Soc Echocardiogr. 2011;24(12):1319-1324. https://doi.org/10.1016/j.echo.2011.07.013.
4. Dodge KL, Lynch CA, Moore CL, Biroscak BJ, Evans LV. Use of ultrasound guidance improves central venous catheter insertion success rates among junior residents. J Ultrasound Med. 2012;31(10):1519-1526. https://doi.org/10.7863/jum.2012.31.10.1519.
5. Cavanna L, Mordenti P, Bertè R, et al. Ultrasound guidance reduces pneumothorax rate and improves safety of thoracentesis in malignant pleural effusion: Report on 445 consecutive patients with advanced cancer. World J Surg Oncol. 2014;12:139. https://doi.org/10.1186/1477-7819-12-139.
6. Testa A, Francesconi A, Giannuzzi R, Berardi S, Sbraccia P. Economic analysis of bedside ultrasonography (US) implementation in an Internal Medicine department. Intern Emerg Med. 2015;10(8):1015-1024. https://doi.org/10.1007/s11739-015-1320-7.
7. Howard ZD, Noble VE, Marill KA, et al. Bedside ultrasound maximizes patient satisfaction. J Emerg Med. 2014;46(1):46-53. https://doi.org/10.1016/j.jemermed.2013.05.044.
8. Park YH, Jung RB, Lee YG, et al. Does the use of bedside ultrasonography reduce emergency department length of stay for patients with renal colic? A pilot study. Clin Exp Emerg Med. 2016;3(4):197-203. https://doi.org/10.15441/ceem.15.109.
9. Glomb N, D’Amico B, Rus M, Chen C. Point-of-care ultrasound in resource-limited settings. Clin Pediatr Emerg Med. 2015;16(4):256-261. https://doi.org/10.1016/j.cpem.2015.10.001.
10. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. medical schools: results of a national survey. Acad Med. 2014;89(12):1681-1686. https://doi.org/10.1097/ACM.0000000000000414.
11. Hall JWW, Holman H, Bornemann P, et al. Point of care ultrasound in family medicine residency programs: A CERA study. Fam Med. 2015;47(9):706-711.
12. Schnobrich DJ, Gladding S, Olson APJ, Duran-Nelson A. Point-of-care ultrasound in internal medicine: A national survey of educational leadership. J Grad Med Educ. 2013;5(3):498-502. https://doi.org/10.4300/JGME-D-12-00215.1.
13. Stolz LA, Stolz U, Fields JM, et al. Emergency medicine resident assessment of the emergency ultrasound milestones and current training recommendations. Acad Emerg Med. 2017;24(3):353-361. https://doi.org/10.1111/acem.13113.
14. Kumar A, Jensen T, Kugler J. Evaluation of trainee competency with point-of-care ultrasonography (POCUS): A conceptual framework and review of existing assessments. J Gen Intern Med. 2019;34(6):1025-1031. https://doi.org/10.1007/s11606-019-04945-4.
15. Levitov A, Frankel HL, Blaivas M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients—part ii: Cardiac ultrasonography. Crit Care Med. 2016;44(6):1206-1227. https://doi.org/10.1097/CCM.0000000000001847.
16. Kobal SL, Trento L, Baharami S, et al. Comparison of effectiveness of hand-carried ultrasound to bedside cardiovascular physical examination. Am J Cardiol. 2005;96(7):1002-1006. https://doi.org/10.1016/j.amjcard.2005.05.060.
17. Ceriani E, Cogliati C. Update on bedside ultrasound diagnosis of pericardial effusion. Intern Emerg Med. 2016;11(3):477-480. https://doi.org/10.1007/s11739-015-1372-8.
18. Labovitz AJ, Noble VE, Bierig M, et al. Focused cardiac ultrasound in the emergent setting: A consensus statement of the American Society of Echocardiography and American College of Emergency Physicians. J Am Soc Echocardiogr. 2010;23(12):1225-1230. https://doi.org/10.1016/j.echo.2010.10.005.
19. Keil-Ríos D, Terrazas-Solís H, González-Garay A, Sánchez-Ávila JF, García-Juárez I. Pocket ultrasound device as a complement to physical examination for ascites evaluation and guided paracentesis. Intern Emerg Med. 2016;11(3):461-466. https://doi.org/10.1007/s11739-016-1406-x.
20. Riddell J, Case A, Wopat R, et al. Sensitivity of emergency bedside ultrasound to detect hydronephrosis in patients with computed tomography–proven stones. West J Emerg Med. 2014;15(1):96-100. https://doi.org/10.5811/westjem.2013.9.15874.
21. Dalziel PJ, Noble VE. Bedside ultrasound and the assessment of renal colic: A review. Emerg Med J. 2013;30(1):3-8. https://doi.org/10.1136/emermed-2012-201375.
22. Whitson MR, Mayo PH. Ultrasonography in the emergency department. Crit Care. 2016;20(1):227. https://doi.org/10.1186/s13054-016-1399-x.
23. Kumar A, Liu G, Chi J, Kugler J. The role of technology in the bedside encounter. Med Clin North Am. 2018;102(3):443-451. https://doi.org/10.1016/j.mcna.2017.12.006.
24. Ma IWY, Arishenkoff S, Wiseman J, et al. Internal medicine point-of-care ultrasound curriculum: Consensus recommendations from the Canadian Internal Medicine Ultrasound (CIMUS) Group. J Gen Intern Med. 2017;32(9):1052-1057. https://doi.org/10.1007/s11606-017-4071-5.
25. Sabath BF, Singh G. Point-of-care ultrasonography as a training milestone for internal medicine residents: The time is now. J Community Hosp Intern Med Perspect. 2016;6(5):33094. https://doi.org/10.3402/jchimp.v6.33094.
26. American College of Emergency Physicians. Ultrasound guidelines: emergency, point-of-care and clinical ultrasound guidelines in medicine. Ann Emerg Med. 2017;69(5):e27-e54. https://doi.org/10.1016/j.annemergmed.2016.08.457.
27. Ramsingh D, Rinehart J, Kain Z, et al. Impact assessment of perioperative point-of-care ultrasound training on anesthesiology residents. Anesthesiology. 2015;123(3):670-682. https://doi.org/10.1097/ALN.0000000000000776.
28. Keddis MT, Cullen MW, Reed DA, et al. Effectiveness of an ultrasound training module for internal medicine residents. BMC Med Educ. 2011;11:75. https://doi.org/10.1186/1472-6920-11-75.
29. Townsend NT, Kendall J, Barnett C, Robinson T. An effective curriculum for focused assessment diagnostic echocardiography: Establishing the learning curve in surgical residents. J Surg Educ. 2016;73(2):190-196. https://doi.org/10.1016/j.jsurg.2015.10.009.
30. Hoppmann RA, Rao VV, Bell F, et al. The evolution of an integrated ultrasound curriculum (iUSC) for medical students: 9-year experience. Crit Ultrasound J. 2015;7(1):18. https://doi.org/10.1186/s13089-015-0035-3.
31. Skalski JH, Elrashidi M, Reed DA, McDonald FS, Bhagra A. Using standardized patients to teach point-of-care ultrasound–guided physical examination skills to internal medicine residents. J Grad Med Educ. 2015;7(1):95-97. https://doi.org/10.4300/JGME-D-14-00178.1.
32. Chisholm CB, Dodge WR, Balise RR, Williams SR, Gharahbaghian L, Beraud A-S. Focused cardiac ultrasound training: How much is enough? J Emerg Med. 2013;44(4):818-822. https://doi.org/10.1016/j.jemermed.2012.07.092.
33. Schmidt GA, Schraufnagel D. Introduction to ATS seminars: Intensive care ultrasound. Ann Am Thorac Soc. 2013;10(5):538-539. https://doi.org/10.1513/AnnalsATS.201306-203ED.
34. Skaarup SH, Laursen CB, Bjerrum AS, Hilberg O. Objective and structured assessment of lung ultrasound competence. A multispecialty Delphi consensus and construct validity study. Ann Am Thorac Soc. 2017;14(4):555-560. https://doi.org/10.1513/AnnalsATS.201611-894OC.
35. Lucas BP, Tierney DM, Jensen TP, et al. Credentialing of hospitalists in ultrasound-guided bedside procedures: A position statement of the Society of Hospital Medicine. J Hosp Med. 2018;13(2):117-125. https://doi.org/10.12788/jhm.2917.
36. Frankel HL, Kirkpatrick AW, Elbarbary M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients-part i: General ultrasonography. Crit Care Med. 2015;43(11):2479-2502. https://doi.org/10.1097/CCM.0000000000001216.
37. Martin LD, Howell EE, Ziegelstein RC, et al. Hand-carried ultrasound performed by hospitalists: Does it improve the cardiac physical examination? Am J Med. 2009;122(1):35-41. https://doi.org/10.1016/j.amjmed.2008.07.022.
38. Desai SV, Asch DA, Bellini LM, et al. Education outcomes in a duty-hour flexibility trial in internal medicine. N Engl J Med. 2018;378(16):1494-1508. https://doi.org/10.1056/NEJMoa1800965.
39. Baltarowich OH, Di Salvo DN, Scoutt LM, et al. National ultrasound curriculum for medical students. Ultrasound Q. 2014;30(1):13-19. https://doi.org/10.1097/RUQ.0000000000000066.
40. Beal EW, Sigmond BR, Sage-Silski L, Lahey S, Nguyen V, Bahner DP. Point-of-care ultrasound in general surgery residency training: A proposal for milestones in graduate medical education ultrasound. J Ultrasound Med. 2017;36(12):2577-2584. https://doi.org/10.1002/jum.14298.
41. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol. 1999;77(6):1121-1134. https://doi.org/10.1037//0022-3514.77.6.1121.
© 2020 Society of Hospital Medicine
Surgical Comanagement by Hospitalists: Continued Improvement Over 5 Years
In surgical comanagement (SCM), surgeons and hospitalists share responsibility for the care of surgical patients. While SCM has been increasingly utilized, many of the reported models are modifications of the consultation model, in which a group of rotating hospitalists, internists, or geriatricians care for surgical patients, often only after medical complications have occurred.1-4
In August 2012, we implemented SCM in Orthopedic and Neurosurgery services at our institution.5 This model is unique because the same Internal Medicine hospitalists are dedicated year round to the same surgical service. SCM hospitalists see patients on their assigned surgical service only; they do not see patients on the Internal Medicine service. After the first year of implementing SCM, we conducted a propensity score–weighted study with 17,057 discharges in the pre-SCM group (January 2009 to July 2012) and 5,533 discharges in the post-SCM group (September 2012 to September 2013).5 In this study, SCM was associated with a decrease in medical complications, length of stay (LOS), medical consultations, 30-day readmissions, and cost.5
Since SCM requires ongoing investment by institutions, we now report a follow-up study to explore if there were continued improvements in patient outcomes with SCM. In this study, we evaluate if there was a decrease in medical complications, LOS, number of medical consultations, rapid response team calls, and code blues and an increase in patient satisfaction with SCM in Orthopedic and Neurosurgery services between 2012 and 2018.
METHODS
We included 26,380 discharges from Orthopedic and Neurosurgery services between September 1, 2012, and June 30, 2018, at our academic medical center. We excluded patients discharged in August 2012 as we transitioned to the SCM model. Our Institutional Review Board exempted this study from further review.
SCM Structure
SCM structure was detailed in a prior article.5 We have 3.0 clinical full-time equivalents on the Orthopedic surgery SCM service and 1.2 on the Neurosurgery SCM service. On weekdays, SCM hospitalists provide in-house coverage during the day (8 AM to 5 PM).
During the day, SCM hospitalists receive the first call for medical issues. After 5 PM, coverage follows the arrangement detailed in our prior article.5
SCM hospitalists screen the entire patient list on their assigned surgery service each day. After screening the list, SCM hospitalists formally see select patients with preventable or active medical conditions and write notes in the patients’ charts. There are no set criteria for which patients are seen by SCM, because surgery can decompensate stable medical conditions and new, unexpected medical complications may arise. Additionally, in our prior study, we reported that SCM reduced medical complications and LOS regardless of age or patient acuity.5
Outcomes
Our primary outcome was the proportion of patients with ≥1 medical complication (sepsis, pneumonia, urinary tract infection, delirium, acute kidney injury, atrial fibrillation, or ileus). Our secondary outcomes included mean LOS, the proportion of patients with ≥2 medical consultations, rapid response team calls, code blues, and top-box patient satisfaction score. Though cost is an important consideration in implementing SCM, limited financial data were available. However, because LOS is a key component in calculating direct costs,6 we estimated the cost savings per discharge using the mean direct cost per day and the difference in mean LOS between the pre- and post-SCM groups.5
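To make this estimate concrete, a minimal arithmetic sketch follows. The per-day direct cost shown is hypothetical, back-calculated so the arithmetic aligns with the LOS values and the savings figure reported in our results; it is not a figure taken from our financial data.

```python
# Back-of-the-envelope estimate: direct cost savings per discharge from the
# reduction in mean LOS. The per-day cost is hypothetical (back-calculated
# to be consistent with the reported figures).
mean_direct_cost_per_day = 4280.0  # hypothetical mean direct cost, USD/day
los_pre_scm = 5.4                  # mean LOS pre-SCM, days
los_post_scm = 4.6                 # mean LOS post-SCM, days

savings = mean_direct_cost_per_day * (los_pre_scm - los_post_scm)
print(f"Estimated direct cost savings per discharge: ${savings:,.0f}")
# -> Estimated direct cost savings per discharge: $3,424
```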
We defined medical complications using International Classification of Diseases (ICD), Ninth or Tenth Revision, codes flagged as “not present on admission” (Appendix 1). For patient satisfaction, we used three questions from the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey: Did doctors treat you with courtesy and respect, listen carefully to you, and explain things in a way you could understand?
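As an illustration of this case definition, the sketch below counts a discharge as having a complication only when a study-listed ICD code carries a “not present on admission” flag. The column names and the small diagnosis extract are hypothetical, not our actual data structure.

```python
import pandas as pd

# Hypothetical diagnosis-level extract: one row per coded ICD diagnosis.
dx = pd.DataFrame({
    "discharge_id": [1, 1, 2, 3],
    "icd_code": ["A41.9", "I10", "N17.9", "J18.9"],
    "present_on_admission": ["Y", "Y", "N", "N"],
})
# Study-listed complication codes (e.g., sepsis, acute kidney injury, pneumonia).
complication_codes = {"A41.9", "N17.9", "J18.9"}

# A complication counts only when the code is on the study list AND is
# flagged as NOT present on admission.
is_complication = (dx["icd_code"].isin(complication_codes)
                   & dx["present_on_admission"].eq("N"))
discharges_with_complication = dx.loc[is_complication, "discharge_id"].unique()
print(discharges_with_complication)  # -> [2 3]
```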
Statistical Analysis
We used regression analysis to assess trends in patient characteristics by year (Appendix 2). Logistic regression with a logit link was used to assess the yearly change in our binary outcomes (proportion of patients with ≥1 medical complication, patients with ≥2 medical consultations, rapid response team calls, code blues, and top-box patient satisfaction score), and odds ratios were reported. Gamma regression with an identity link was performed for our continuous outcome (LOS), with the beta coefficient estimating the yearly change in LOS on its original scale (days). Regression analyses for outcomes were adjusted for age, primary insurance, race, Charlson comorbidity score, general or regional anesthesia, surgical service, and duration of surgery. SAS 9.4 was used for analysis.
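For readers who wish to reproduce this modeling approach, a minimal Python sketch follows (the study itself used SAS 9.4, and a recent version of statsmodels is assumed). The data are synthetic and the covariate set is abbreviated; this is illustrative, not our analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic discharge-level data standing in for the study dataset.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "year": rng.integers(0, 7, n),             # years since 2012
    "age": rng.normal(62, 15, n),
    "charlson": rng.poisson(1.2, n).astype(float),
    "complication": rng.binomial(1, 0.08, n),  # >=1 medical complication
    "los": rng.gamma(2.0, 2.3, n) + 0.5,       # length of stay, days
})

# Logistic regression (logit link) for a binary outcome; exp(beta) gives the
# odds ratio per additional year.
logit_fit = smf.glm("complication ~ year + age + charlson", data=df,
                    family=sm.families.Binomial()).fit()
print("OR per year:", np.exp(logit_fit.params["year"]))

# Gamma regression with identity link for LOS; the beta coefficient is the
# change in LOS (days) per year on the original scale.
gamma_fit = smf.glm("los ~ year + age + charlson", data=df,
                    family=sm.families.Gamma(link=sm.families.links.Identity())).fit()
print("Change in LOS (days) per year:", gamma_fit.params["year"])
```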
RESULTS
Patient characteristics are shown in Table 1. Overall, 62.8% of patients were discharged from the Orthopedic surgery service, 72.5% underwent elective surgery, and 88.8% received general anesthesia. Between 2012 and 2018, median patient age increased significantly (from 60 to 63 years), mean Charlson comorbidity score increased from 1.07 to 1.46, and median case mix index, a measure of patient acuity, increased from 2.10 to 2.36 (Appendix 2).
Comparing pre-SCM unadjusted rates reported in our prior study (January 2009 to July 2012) to post-SCM (September 2012 to June 2018; Appendix 3), patients with ≥1 medical complication decreased from 10.1% to 6.1%, LOS (mean ± standard deviation) changed from 5.4 ± 2.2 days to 4.6 ± 5.8 days, patients with ≥2 medical consultations decreased from 19.4% to 9.2%, rapid response team calls changed from 1% to 0.9%, code blues changed from 0.3% to 0.2%, and patients with top-box patient satisfaction score increased from 86.4% to 94.2%.5
In the adjusted analysis from 2012 to 2018, the odds of patients with ≥1 medical complication decreased by 3.8% per year (P = .01), estimated LOS decreased by 0.3 days per year (P < .0001), and the odds of rapid response team calls decreased by 12.2% per year (P = .001; Table 2). Changes over time in the odds of patients with ≥2 medical consultations, code blues, or top-box patient satisfaction score were not statistically significant (Table 2). Based on the LOS reduction pre- to post-SCM, there were estimated average direct cost savings of $3,424 per discharge between 2012 and 2018.
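As a check on interpretation, a yearly odds ratio (OR) below 1 converts to a percent decrease via (1 − OR) × 100. The ORs in the sketch below are back-calculated from the percentages reported above, purely for illustration.

```python
# Converting yearly odds ratios (back-calculated from the reported percent
# decreases) into the "% decrease per year" figures quoted above.
for outcome, yearly_or in [(">=1 medical complication", 0.962),
                           ("rapid response team call", 0.878)]:
    print(f"{outcome}: {(1 - yearly_or) * 100:.1f}% decrease per year")
# -> >=1 medical complication: 3.8% decrease per year
# -> rapid response team call: 12.2% decrease per year
```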
DISCUSSION
Since the implementation of SCM on the Orthopedic and Neurosurgery services at our institution, medical complications, LOS, and rapid response team calls have decreased. To our knowledge, this is one of the largest studies evaluating the benefits of SCM, spanning 5.8 years. Consistent with our prior studies of this SCM model of care,5,7 other studies have reported decreases in medical complications,8-10 LOS,11-13 and cost of care14 with SCM.
Although the changes in unadjusted outcome rates over the years appeared small, even as our patient population became older and sicker, there were significant improvements in several outcomes in the adjusted analysis. We believe that SCM hospitalists have developed a skill set and an understanding of these surgical patients over time and can manage more medically complex patients without an increase in medical complications or LOS. We attribute this to our unique SCM model, in which the same hospitalists stay year round on the same surgical service. SCM hospitalists have built trusting relationships with the surgical teams, with greater involvement in decision making, care planning, and patient selection. With minimal turnover in the SCM group and ongoing learning, SCM hospitalists can anticipate fluid or pain medication requirements after specific surgeries as well as surgery-specific medical complications. SCM hospitalists are available on the patient units to provide timely intervention in case of medical deterioration; answer questions from patients, families, or nursing while the surgical teams are in the operating room; and coordinate with other medical consultants or outpatient providers as needed.
This study has several limitations. This is a single-center study at an academic institution, limited to two surgical services. We did not have a control group and multiple hospital-wide interventions may have affected these outcomes. This is an observational study in which unobserved variables may bias the results. We used ICD codes to identify medical complications, which relies on the quality of physician documentation. While our response rate of 21.1% for HCAHPS was comparable to the national average of 26.7%, it may not reliably represent our patient population.15 Lastly, we had limited financial data.
CONCLUSION
With the move toward value-based payment and increasing medical complexity of surgical patients, SCM by hospitalists may deliver high-quality care.
1. Auerbach AD, Wachter RM, Cheng HQ, et al. Comanagement of surgical patients between neurosurgeons and hospitalists. Arch Intern Med. 2010;170(22):2004-2010. https://doi.org/10.1001/archinternmed.2010.432.
2. Ruiz ME, Merino RÁ, Rodríguez R, Sánchez GM, Alonso A, Barbero M. Effect of comanagement with internal medicine on hospital stay of patients admitted to the service of otolaryngology. Acta Otorrinolaringol Esp. 2015;66(5):264-268. https://doi.org/10.1016/j.otorri.2014.09.010.
3. Tadros RO, Faries PL, Malik R, et al. The effect of a hospitalist comanagement service on vascular surgery inpatients. J Vasc Surg. 2015;61(6):1550-1555. https://doi.org/10.1016/j.jvs.2015.01.006.
4. Gregersen M, Mørch MM, Hougaard K, Damsgaard EM. Geriatric intervention in elderly patients with hip fracture in an orthopedic ward. J Inj Violence Res. 2012;4(2):45-51. https://doi.org/10.5249/jivr.v4i2.96.
5. Rohatgi N, Loftus P, Grujic O, Cullen M, Hopkins J, Ahuja N. Surgical comanagement by hospitalists improves patient outcomes: A propensity score analysis. Ann Surg. 2016;264(2):275-282. https://doi.org/10.1097/SLA.0000000000001629.
6. Polverejan E, Gardiner JC, Bradley CJ, Holmes-Rovner M, Rovner D. Estimating mean hospital cost as a function of length of stay and patient characteristics. Health Econ. 2003;12(11):935-947. https://doi.org/10.1002/hec.774.
7. Rohatgi N, Wei PH, Grujic O, Ahuja N. Surgical comanagement by hospitalists in colorectal surgery. J Am Coll Surg. 2018;227(4):404-410. https://doi.org/10.1016/j.jamcollsurg.2018.06.011.
8. Huddleston JM, Long KH, Naessens JM, et al. Medical and surgical comanagement after elective hip and knee arthroplasty: A randomized, controlled trial. Ann Intern Med. 2004;141(1):28-38. https://doi.org/10.7326/0003-4819-141-1-200407060-00012.
9. Swart E, Vasudeva E, Makhni EC, Macaulay W, Bozic KJ. Dedicated perioperative hip fracture comanagement programs are cost-effective in high-volume centers: An economic analysis. Clin Orthop Relat Res. 2016;474(1):222-233. https://doi.org/10.1007/s11999-015-4494-4.
10. Iberti CT, Briones A, Gabriel E, Dunn AS. Hospitalist-vascular surgery comanagement: Effects on complications and mortality. Hosp Pract. 2016;44(5):233-236. https://doi.org/10.1080/21548331.2016.1259543.
11. Kammerlander C, Roth T, Friedman SM, et al. Ortho-geriatric service--A literature review comparing different models. Osteoporos Int. 2010;21(Suppl 4):S637-S646. https://doi.org/10.1007/s00198-010-1396-x.
12. Bracey DN, Kiymaz TC, Holst DC, et al. An orthopedic-hospitalist comanaged hip fracture service reduces inpatient length of stay. Geriatr Orthop Surg Rehabil. 2016;7(4):171-177. https://doi.org/10.1177/2151458516661383.
13. Duplantier NL, Briski DC, Luce LT, Meyer MS, Ochsner JL, Chimento GF. The effects of a hospitalist comanagement model for joint arthroplasty patients in a teaching facility. J Arthroplasty. 2016;31(3):567-572. https://doi.org/10.1016/j.arth.2015.10.010.
14. Roy A, Heckman MG, Roy V. Associations between the hospitalist model of care and quality-of-care-related outcomes in patients undergoing hip fracture surgery. Mayo Clin Proc. 2006;81(1):28-31. https://doi.org/10.4065/81.1.28.
15. Godden E, Paseka A, Gnida J, Inguanzo J. The impact of response rate on Hospital Consumer Assessment of Healthcare Providers and System (HCAHPS) dimension scores. Patient Exp J. 2019;6(1):105-114. https://doi.org/10.35680/2372-0247.1357.
© 2020 Society of Hospital Medicine
Community Pediatric Hospitalist Workload: Results from a National Survey
As a newly recognized specialty, pediatric hospital medicine (PHM) continues to expand and diversify.1 Pediatric hospitalists care for children in hospitals ranging from small, rural community hospitals to large, free-standing quaternary children’s hospitals.2-4 In addition, more than 10% of graduating pediatric residents are seeking future careers within PHM.5
In 2018, Fromme et al. published a study describing clinical workload for pediatric hospitalists within university-based settings.6 They characterized the diversity of work models and programmatic sustainability but limited the study to university-based programs. With over half of children receiving care within community hospitals,7 workforce patterns for community-based pediatric hospitalists should be characterized to maximize sustainability and minimize attrition across the field.
In this study, we describe programmatic variability in clinical work expectations of 70 community-based PHM programs. We aimed to describe existing work models and expectations of community-based program leaders as they relate to their unique clinical setting.
METHODS
We conducted a cross-sectional survey of community-based PHM site directors through structured interviews. Community hospital programs were self-defined by the study participants, although they were typically general hospitals that admit pediatric patients and are neither free-standing children’s hospitals nor children’s hospitals within a general hospital. Survey respondents were asked to answer questions reflecting expectations only at their community hospital.
Survey Design and Content
Building from a tool used by Fromme et al,6 we created a 12-question structured interview questionnaire focused on three areas: (1) full-time equivalent (FTE) metrics, including definitions of a 1.0 FTE, “typical” shifts, and weekend responsibilities; (2) work volume, including census parameters, service-line coverage expectations, back-up systems, and overnight call responsibilities; and (3) programmatic model, including sense of sustainability (eg, minimizing burnout and attrition), support for activities such as administrative or research time, and employer model (Appendix).
We modified the survey through research team consensus. After pilot testing by research team members at their own sites, the survey was refined for item clarity, structural design, and length. Because we anticipated variability in work models, we administered the survey through phone interviews rather than a traditional self-administered distribution. The research team agreed in advance on how each question should be asked, and responses were clarified during interviews to maintain consistency.
Survey Administration
Given the absence of a national registry or database of community-based PHM programs, study participation was solicited through an invitation posted on the American Academy of Pediatrics Section on Hospital Medicine (AAP SOHM) Listserv and the AAP SOHM Community Hospitalist Listserv in May 2018. Invitations were posted twice, two weeks apart. Each research team member completed 6-19 interviews. Responses to survey questions were recorded in REDCap, a secure, web-based data capture instrument.8
Participation in the study implied consent. Participants did not receive a monetary incentive, although respondents were offered the deidentified survey data. The study was granted an exemption by the University of Chicago Institutional Review Board.
Data Analysis
Employers were dichotomized as community hospital employers (including primary community hospital employment or a private organization) or noncommunity hospital employers (including children’s hospital, university hospital, or school of medicine employment). Descriptive statistics were reported to compare the demographics of the two employer groups. P values were calculated using two-sample t-tests for continuous variables and chi-square or Fisher exact tests for categorical variables. The Mann–Whitney U test was used for continuous variables that were not normally distributed. Analyses were performed using R, version 3.4.3 (R Foundation for Statistical Computing, Vienna, Austria).
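For readers who wish to reproduce this family of comparisons, the sketch below illustrates the tests described above. It is a minimal illustration only: the study itself used R, this example substitutes Python with SciPy, and all data values, counts, and variable names are hypothetical.

```python
# Illustrative sketch of the statistical comparisons described above.
# The study used R 3.4.3; this Python/SciPy version uses entirely
# hypothetical data for demonstration.
import numpy as np
from scipy import stats

# Hypothetical expected 1.0-FTE hours/year for the two employer groups
community = np.array([2016, 1950, 2100, 1882, 2080, 1990])
noncommunity = np.array([1805, 1882, 1760, 1900, 1840, 1820])

# Two-sample t-test for a continuous, roughly normal variable
t_stat, p_t = stats.ttest_ind(community, noncommunity)

# Mann-Whitney U test when normality cannot be assumed
u_stat, p_u = stats.mannwhitneyu(community, noncommunity,
                                 alternative="two-sided")

# Chi-square test for a categorical variable (hypothetical 2x2 counts,
# eg, in-house overnight coverage by employer group); the Fisher exact
# test is preferred when expected cell counts are small
table = np.array([[20, 10], [29, 11]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)

print(f"t-test P={p_t:.2f}; Mann-Whitney P={p_u:.2f}; "
      f"chi-square P={p_chi2:.2f}; Fisher P={p_fisher:.2f}")
```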
RESULTS
Participation and Program Characteristics
We interviewed 70 community-based PHM site directors representing programs across 29 states (Table 1) and five geographic regions: Midwest (34.3%), Northeast (11.4%), Southeast (15.7%), Southwest (4.3%), and West (34.3%). Employer models varied across groups, with more noncommunity hospital employers (57%) than community hospital employers (43%). The top three services covered by pediatric hospitalists were pediatric inpatient or observation bed admissions (97%), emergency department consults (89%), and general newborn care (67%). PHM programs also provided coverage for other services, including newborn deliveries (43%), Special Care Nursery/Level II Neonatal Intensive Care Unit (41%), step-down unit (20%), and mental health units (13%). Fifty-nine percent of programs provided education for family medicine residents and 36% for pediatric residents; 70% worked with advanced practice providers. The majority of programs (70%) provided in-house coverage overnight.
Clinical Work Expectations and Employer Model
Clinical work expectations varied broadly across programs (Table 2). The median expected hours for a 1.0 FTE was 1,882 hours per year (interquartile range [IQR] 1,805-2,016), and the median expected weekend coverage per year (defined as covering two days or two nights of a weekend) was 21 (IQR 14-24). Most programs (73%) did not expand staff coverage based on seasonality, and less than 20% of programs operated with a census cap. Median support for nondirect patient care activities was 4% (IQR 0%-10%) of a program’s total FTE (ie, a 5.0 FTE program would have 0.20 FTE of support). Programs with community hospital employers had an 8% higher expectation for 1.0 FTE hours per year (P = .01) and viewed an appropriate pediatric morning census as 20% higher (P = .01; Table 2).
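As a worked example of the FTE-support arithmetic above (a minimal sketch: the 5.0 FTE program size comes from the text’s own parenthetical example, and the 4% figure is the reported median):

```python
# Worked example of the nondirect patient care support arithmetic above.
# Support is expressed as a fraction of a program's total FTE.
total_fte = 5.0           # program size from the text's own example
median_support = 0.04     # reported median support: 4% of total FTE

support_fte = total_fte * median_support
print(f"{support_fte:.2f} FTE of nondirect patient care support")
# -> 0.20 FTE, matching the example in the text
```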
Program Sustainability
Overall, 63% of site directors perceived their current program model as sustainable, and this perception did not differ significantly by employer model.
DISCUSSION
To our knowledge, this study is the first to describe clinical work models exclusively for pediatric community hospitalist programs. We found that expectations for clinical FTE hours, weekend coverage, appropriate morning census, support for nondirect patient care activities, and perceptions of sustainability varied broadly across programs. The only variable associated with some of these differences was employer model: programs employed by a community hospital had higher expectations for hours per year and for an appropriate morning pediatric census than programs with noncommunity hospital employers.
With a growing emphasis on physician burnout and career satisfaction,9-11 understanding the characteristics of community hospital work settings is critical for identifying and building sustainable employment models. Previous studies have identified that the balance of clinical and nonclinical responsibilities and the setting of community versus university-based programs are major contributors to burnout and career satisfaction.9,11 Interestingly, although community hospital-based programs have limited FTE for nondirect patient care activities, we found that a higher percentage of program site directors perceived their program models as sustainable when compared with university-based programs in prior research (63% versus 50%).6 Elucidating why community hospital PHM programs are perceived as more sustainable provides an opportunity for future research. Potential reasons may include fewer academic requirements for promotion or an increased connection to a local community.
We also found that the employer model had a statistically significant association with expected FTE hours per year but not with perception of sustainability. Programs employed by community hospitals worked 8% more hours per year than those with noncommunity hospital employers and accepted a higher morning pediatric census. This variation in hours and in what census level is considered appropriate is likely multifactorial; it may reflect higher nonclinical expectations for promotion (eg, academic or scholarly production) in programs employed by schools of medicine or children’s hospitals, together with limited reimbursement for administrative responsibilities within community hospital employment models.
There are several potential next steps for our findings. Because our data represent, to our knowledge, the first attempt to describe current practice and expectations exclusively within community hospital programs, this study can serve as a starting point for developing workload expectation standards. Increasing transparency across individual community programs nationally may promote discussions around burnout and attrition. Objective data for comparing program models may also help site directors advocate with local hospital leadership for restructuring that better aligns with national norms.
Our study has several limitations. First, our sampling frame was based on self-selection of program directors. This may have produced a biased representation of programs with higher workloads that were motivated to develop a standard against which to compare themselves, potentially leading to an overestimation of hours. Second, without a registry or database of community-based pediatric hospitalist programs, we do not know what percentage of community-based programs our sample represents. Although our results cannot speak for all community PHM programs, we attempted to mitigate nonresponse bias through the breadth of programs represented, which spanned 29 states, five geographic regions, and teaching and nonteaching programs. The interview-based method of data collection allowed the research team to clarify questions and responses across sites, improving the quality and consistency of the data for the represented study sample. Finally, factors that we did not address in this study, such as dependence on billable encounters for salary support, may also have contributed to sustainability.
CONCLUSION
Because PHM is a newly recognized subspecialty, a reference that allows community-based program leaders to determine and discuss individual models and expectations with hospital administrators may help address programmatic sustainability. It may also allow analysis of long-term career satisfaction and longevity within community PHM programs based on workload. Future studies should explore the root causes of workload discrepancies between community- and university-employed programs and work toward potential standards for PHM program development.
Acknowledgments
We would like to thank the Stanford School of Medicine Quantitative Sciences Unit staff for their assistance in statistical analysis.
Disclosure
The authors have nothing to disclose.
1. Wachter RM, Goldman L. Zero to 50,000—The 20th anniversary of the hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/NEJMp1607958.
2. Gosdin C, Simmons J, Yau C, Sucharew H, Carlson D, Paciorkowski N. Survey of academic pediatric hospitalist programs in the US: organizational, administrative, and financial factors. J Hosp Med. 2013;8(6):285-291. https://doi.org/10.1002/jhm.2020.
3. Hain PD, Daru J, Robbins E, et al. Proposed dashboard for pediatric hospital medicine groups. Hosp Pediatr. 2012;2(2):59-68. https://doi.org/10.1542/hpeds.2012-0004.
4. Freed GL, Brzoznowski K, Neighbors K, Lakhani I. Characteristics of the pediatric hospitalist workforce: its roles and work environment. Pediatrics. 2007;120(1):33-39. https://doi.org/10.1542/peds.2007-0304.
5. Leyenaar JK, Frintner MP. Graduating pediatric residents entering the hospital medicine workforce, 2006-2015. Acad Pediatr. 2018;18(2):200-207. https://doi.org/10.1016/j.acap.2017.05.001.
6. Fromme HB, Chen CO, Fine BR, Gosdin C, Shaughnessy EE. Pediatric hospitalist workload and sustainability in university-based programs: results from a national interview-based survey. J Hosp Med. 2018;13(10):702-705. https://doi.org/10.12788/jhm.2977.
7. Leyenaar JK, Ralston SL, Shieh MS, Pekow PS, Mangione-Smith R, Lindenauer PK. Epidemiology of pediatric hospitalizations at general hospitals and freestanding children’s hospitals in the United States. J Hosp Med. 2016;11(11):743-749. https://doi.org/10.1002/jhm.2624.
8. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381. https://doi.org/10.1016/j.jbi.2008.08.010.
9. Laurie AP, Aisha BD, Mary CO. Association between practice setting and pediatric hospitalist career satisfaction. Hosp Pediatr. 2013;3(3):285-291. https://doi.org/10.1542/hpeds.2012-0085.
10. Hinami K, Whelan CT, Wolosin RJ, Miller JA, Wetterneck TB. Worklife and satisfaction of hospitalists: toward flourishing careers. J Gen Intern Med. 2011;27(1):28-36. https://doi.org/10.1007/s11606-011-1780-z.
11. Hinami K, Whelan CT, Miller JA, Wolosin RJ, Wetterneck TB. Job characteristics, satisfaction, and burnout across hospitalist practice models. J Hosp Med. 2012;7(5):402-410. https://doi.org/10.1002/jhm.1907.
© 2019 Society of Hospital Medicine