Portable Ultrasound Device Usage and Learning Outcomes Among Internal Medicine Trainees: A Parallel-Group Randomized Trial
Point-of-care ultrasonography (POCUS) can transform healthcare delivery through its diagnostic and therapeutic expediency.1 POCUS has been shown to bolster diagnostic accuracy, reduce procedural complications, decrease inpatient length of stay, and improve patient satisfaction by encouraging the physician to be present at the bedside.2-8
POCUS has become widespread across a variety of clinical settings as more investigations have demonstrated its positive impact on patient care.1,9-12 This includes the use of POCUS by trainees, who are now utilizing this technology as part of their assessments of patients.13,14 However, trainees may be performing these examinations with minimal oversight, and outside of emergency medicine, there are few guidelines on how to effectively teach POCUS or measure competency.13,14 While POCUS is rapidly becoming a part of inpatient care, teaching physicians may have little experience in ultrasound or the expertise to adequately supervise trainees.14 There is a growing need to study what trainees can learn and how this knowledge is acquired.
Previous investigations have demonstrated that inexperienced users can be taught to use POCUS to identify a variety of pathological states.2,3,15-23 Most of these curricula used a single lecture series as their pedagogical vehicle, and they variably included junior medical trainees. More importantly, the investigations did not explore whether personal access to handheld ultrasound devices (HUDs) improved learning. In theory, improved access to POCUS devices increases opportunities for authentic and deliberate practice, which may be needed to improve trainee skill with POCUS beyond the classroom setting.14
This study aimed to address several ongoing gaps in knowledge related to learning POCUS. First, we hypothesized that personal HUD access would improve trainees’ POCUS-related knowledge and interpretive ability as a result of increased practice opportunities. Second, we hypothesized that trainees who receive personal access to HUDs would be more likely to perform POCUS examinations and feel more confident in their interpretations. Finally, we hypothesized that repeated exposure to POCUS-related lectures would result in greater improvements in knowledge as compared with a single lecture series.
METHODS
Participants and Setting
The 2017 intern class (n = 47) at an academic internal medicine residency program participated in the study. Control data were obtained from the 2016 intern class (historical control; n = 50) and the 2018 intern class (contemporaneous control; n = 52). The Stanford University Institutional Review Board approved this study.
Study Design
The 2017 intern class (n = 47) received POCUS didactics from June 2017 to June 2018. To evaluate if increased access to HUDs improved learning outcomes, the 2017 interns were randomized 1:1 to receive their own personal HUD that could be used for patient care and/or self-directed learning (n = 24) vs no-HUD (n = 23; Figure). Learning outcomes were assessed over the course of 1 year (see “Outcomes” below) and were compared with the 2016 and 2018 controls. The 2016 intern class had completed a year of training but had not received formalized POCUS didactics (historical control), whereas the 2018 intern class was assessed at the beginning of their year (contemporaneous control; Figure). In order to make comparisons based on intern experience, baseline data for the 2017 intern class were compared with the 2018 intern class, whereas end-of-study data for 2017 interns were compared with 2016 interns.
Outcomes
The primary outcome was the difference in assessment scores at the end of the study period between interns randomized to receive a HUD and those who were not. Secondary outcomes included differences in HUD usage rates, lecture attendance, and assessment scores. To assess whether repeated lecture exposure resulted in greater amounts of learning, this study evaluated assessment score improvements after each lecture block. Finally, trainee attitudes toward POCUS and their confidence in their interpretative ability were measured at the beginning and end of the study period.
Curriculum Implementation
The lectures were administered as once-weekly, 1-hour didactics to interns rotating on the inpatient wards rotation. This rotation is 4 weeks long, and each intern experiences the rotation two to four times per year. Each lecture contained two parts: (1) 20-30 minutes of didactics via Microsoft PowerPoint™ and (2) 30-40 minutes of supervised practice using HUDs on standardized patients. Four lectures were given each month: (1) introduction to POCUS and ultrasound physics, (2) thoracic/lung ultrasound, (3) echocardiography, and (4) abdominal POCUS. The lectures consisted of contrasting cases of normal/abnormal videos and clinical vignettes. These four lectures were repeated each month as new interns rotated on service. Some interns experienced the same content multiple times, which was intentional in order to assess their rates of learning over time. Lecture contents were based on previously published guidelines and expert consensus for teaching POCUS in internal medicine.13,24-26 Content from the Accreditation Council for Graduate Medical Education (ACGME) and the American College of Emergency Physicians (ACEP) was also incorporated because these organizations had published relevant guidelines for teaching POCUS.13,26 Further development of the lectures occurred through review of previously described POCUS-relevant curricula.27-32
Handheld Ultrasound Devices
This study used the Philips Lumify™, a United States Food and Drug Administration–approved device. Interns randomized to HUDs received their own device at the start of the rotation. It was at their discretion to use the device outside of the course. All devices were approved for patient use and were encrypted in compliance with our information security office. For privacy reasons, any saved patient images were not reviewed by the researchers. Interns were encouraged to share their findings with supervising physicians during rounds, but actual oversight was not measured. Interns not randomized to HUDs could access a single community device that was shared among all residents and fellows in the hospital. Interns reported the average number of POCUS examinations performed each week via a survey sent during the last week of the rotation.
Assessment Design and Implementation
Assessments evaluating trainee knowledge were administered before, during, and after the study period (Figure). For the 2017 cohort, assessments were also administered at the start and end of the ward month to track knowledge acquisition. Assessment contents were selected from POCUS guidelines for internal medicine and adaptation of the ACGME and ACEP guidelines.13,24,26 Additional content was obtained from major society POCUS tutorials and deidentified images collected by the study authors.13,24,33 In keeping with previously described methodology, the images were shown for approximately 12 seconds, followed by five additional seconds to allow the learner to answer the question.32 Final assessment contents were determined by the authors using the Delphi method.34 A sample assessment can be found in the Appendix Material.
Surveys
Surveys were administered alongside the assessments to the 2016-2018 intern classes. These surveys assessed trainee attitudes toward POCUS and were based on previously validated assessments.27,28,30 Attitudes were measured using 5-point Likert scales.
Statistical Analysis
For the primary outcome, we performed generalized binomial mixed-effects regressions using the survey periods, randomization group, and the interaction of the two as independent variables, after adjusting for attendance and controlling for intra-intern correlations. A bivariate unadjusted analysis was performed to display the distribution of overall correctness on the assessments. The Wilcoxon signed-rank test was used to assess the significance of paired score differences (R Foundation for Statistical Computing, Vienna, Austria).
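As an illustrative sketch only (not the authors' actual analysis code, which was written in R), the paired pre/post score comparison described above can be reproduced with SciPy's Wilcoxon signed-rank test. The score arrays below are hypothetical placeholder values, not study data:

```python
from scipy.stats import wilcoxon

# Hypothetical paired assessment scores (fraction correct) for the same
# interns before and after a lecture block -- placeholder values only.
pre_block = [0.55, 0.61, 0.48, 0.70, 0.53, 0.66, 0.59, 0.62]
post_block = [0.78, 0.81, 0.69, 0.86, 0.74, 0.83, 0.80, 0.79]

# Wilcoxon signed-rank test for paired (dependent) samples, matching the
# nonparametric approach described for pre/post score comparisons.
stat, p_value = wilcoxon(pre_block, post_block)
print(f"W = {stat:.1f}, p = {p_value:.4f}")
```

Because the test is nonparametric and paired, it makes no normality assumption about the score distributions and respects the within-intern correlation of repeated assessments.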
RESULTS
Baseline Characteristics
There were 149 interns who participated in this study (Figure). Assessment/survey completion rates were as follows: 2016 control: 68.0%; 2017 preintervention: 97.9%; 2017 postintervention: 89.4%; and 2018 control: 100%. The 2017 interns reported similar amounts of prior POCUS exposure in medical school (Table 1).
Primary Outcome: Assessment Scores (HUD vs no HUD)
There were no significant differences in assessment scores at the end of the study between interns randomized to personal HUD access and those randomized to no-HUD access (Table 1). HUD interns reported performing POCUS assessments on patients a mean 6.8 (standard deviation [SD] 2.2) times per week vs 6.4 (SD 2.9) times per week in the no-HUD arm (P = .66). The mean lecture attendance was 75.0% and did not significantly differ between the two arms (Table 1).
Secondary Outcomes
Impact of Repeating Lectures
The 2017 interns demonstrated significant increases in preblock vs postblock assessment scores after first-time exposure to the lectures (median preblock score 0.61 [interquartile range (IQR), 0.53-0.70] vs postblock score 0.81 [IQR, 0.72-0.86]; P < .001; Table 2). However, intern performance on the preblock vs postblock assessments after second-time exposure to the curriculum failed to improve (median second preblock score 0.78 [IQR, 0.69-0.83] vs postblock score 0.81 [IQR, 0.64-0.89]; P = .94). Intern performance on individual domains of knowledge for each block is listed in Appendix Table 1.
Intervention Performance vs Controls
The 2016 historical control had significantly higher scores compared with the 2017 preintervention group (P < .001; Appendix Table 2). The year-long lecture series resulted in significant increases in median scores for the 2017 group (median preintervention score 0.55 [0.41-0.61] vs median postintervention score 0.84 [0.71-0.90]; P = .006; Appendix Table 1). At the end of the study, the 2017 postintervention scores were significantly higher across multiple knowledge domains compared with the 2016 historical control (Appendix Table 2).
Survey Results
Notably, the 2017 intern class at the end of the intervention did not have significantly different assessment scores for several disease-specific domains, compared with the 2016 control (Appendix Table 2). Nonetheless, the 2017 intern class reported higher levels of confidence in these same domains despite similar scores (Supplementary Figure). The HUD group seldom cited a lack of confidence in their abilities as a barrier to performing POCUS examinations (17.6%), compared with the no-HUD group (50.0%), despite nearly identical assessment scores between the two groups (Table 1).
DISCUSSION
Previous guidelines have recommended increased HUD access for learners,13,24,35,36 but few investigations have evaluated the impact of such access on learning POCUS. One previous investigation found that hospitalists who carried HUDs were more likely to identify heart failure on bedside examination.37 In contrast, our study found no improvement in interpretative ability when randomizing interns to carry HUDs for patient care. Notably, interns did not perform more POCUS examinations when given HUDs. We offer several explanations for this finding. First, time-motion studies have demonstrated that internal medicine interns spend less than 15% of their time on direct patient care.38 It is possible that the demands of being an intern impeded their ability to perform more POCUS examinations on their patients, regardless of HUD access. Alternatively, the interns randomized to no personal access may have used the community device more frequently as a result of the lecture series. Given the cost of HUDs, further studies are needed to assess the degree to which HUD access will improve trainee interpretive ability, especially as more training programs consider the creation of ultrasound curricula.10,11,24,39,40
This study was unique because it followed interns over a year-long course that repeated the same material to assess rates of learning with repeated exposure. Learners improved their scores after the first, but not second, block. Furthermore, the median scores were nearly identical between the first postblock assessment and second preblock assessment (0.81 vs 0.78), suggesting that knowledge was retained between blocks. Together, these findings suggest there may be limitations of traditional lectures that use standardized patient models for practice. Supplementary pedagogies, such as in-the-moment feedback with actual patients, may be needed to promote mastery.14,35
Despite no formal curriculum, the 2016 intern class (historical control) had learned POCUS to some degree based on their higher assessment scores compared with the 2017 intern class during the preintervention period. Such learning may be informal, and yet, trainees may feel confident in making clinical decisions without formalized training, accreditation, or oversight. As suggested by this study, adding regular didactics or giving trainees HUDs may not immediately solve this issue. For assessment items in which the 2017 interns did not significantly differ from the controls, they nonetheless reported higher confidence in their abilities. Similarly, interns randomized to HUDs less frequently cited a lack of confidence in their abilities, despite similar scores to the no-HUD group. Such confidence may be incongruent with their actual knowledge or ability to safely use POCUS. This phenomenon of misplaced confidence is known as the Dunning–Kruger effect, and it may be common with ultrasound learning.41 While confidence can be part of a holistic definition of competency,14 these results raise the concern that trainees may have difficulty assessing their own competency level with POCUS.35
There are several limitations to this study. It was performed at a single institution with limited sample size. It examined only intern physicians because of funding constraints, which limits the generalizability of these findings among medical trainees. Technical ability assessments (including obtaining and interpreting images) were not included. We were unable to track the timing or location of the devices’ usage, and the interns’ self-reported usage rates may be subject to recall bias. To our knowledge, there were no significant lapses in device availability/functionality. Intern physicians in the HUD arm did not receive formal feedback on personally acquired patient images, which may have limited the intervention’s impact.
In conclusion, internal medicine interns who received personal HUDs were not better at recognizing normal/abnormal findings on image assessments, and they did not report performing more POCUS examinations. Since only a minority of a trainee's time is spent on direct patient care, offering trainees HUDs without substantial guidance may not be enough to promote mastery. Notably, trainees who received HUDs felt more confident in their abilities, despite no objective increase in their actual skill. Finally, interns who received POCUS-related lectures experienced significant benefit upon first exposure to the material, while repeated exposures did not improve performance. Future investigations should stringently track trainee POCUS usage rates with HUDs and assess whether image acquisition ability improves as a result of personal access.
1. Moore CL, Copel JA. Point-of-care ultrasonography. N Engl J Med. 2011;364(8):749-757. https://doi.org/10.1056/NEJMra0909487.
2. Akkaya A, Yesilaras M, Aksay E, Sever M, Atilla OD. The interrater reliability of ultrasound imaging of the inferior vena cava performed by emergency residents. Am J Emerg Med. 2013;31(10):1509-1511. https://doi.org/10.1016/j.ajem.2013.07.006.
3. Razi R, Estrada JR, Doll J, Spencer KT. Bedside hand-carried ultrasound by internal medicine residents versus traditional clinical assessment for the identification of systolic dysfunction in patients admitted with decompensated heart failure. J Am Soc Echocardiogr. 2011;24(12):1319-1324. https://doi.org/10.1016/j.echo.2011.07.013.
4. Dodge KL, Lynch CA, Moore CL, Biroscak BJ, Evans LV. Use of ultrasound guidance improves central venous catheter insertion success rates among junior residents. J Ultrasound Med. 2012;31(10):1519-1526. https://doi.org/10.7863/jum.2012.31.10.1519.
5. Cavanna L, Mordenti P, Bertè R, et al. Ultrasound guidance reduces pneumothorax rate and improves safety of thoracentesis in malignant pleural effusion: Report on 445 consecutive patients with advanced cancer. World J Surg Oncol. 2014;12:139. https://doi.org/10.1186/1477-7819-12-139.
6. Testa A, Francesconi A, Giannuzzi R, Berardi S, Sbraccia P. Economic analysis of bedside ultrasonography (US) implementation in an Internal Medicine department. Intern Emerg Med. 2015;10(8):1015-1024. https://doi.org/10.1007/s11739-015-1320-7.
7. Howard ZD, Noble VE, Marill KA, et al. Bedside ultrasound maximizes patient satisfaction. J Emerg Med. 2014;46(1):46-53. https://doi.org/10.1016/j.jemermed.2013.05.044.
8. Park YH, Jung RB, Lee YG, et al. Does the use of bedside ultrasonography reduce emergency department length of stay for patients with renal colic? A pilot study. Clin Exp Emerg Med. 2016;3(4):197-203. https://doi.org/10.15441/ceem.15.109.
9. Glomb N, D’Amico B, Rus M, Chen C. Point-of-care ultrasound in resource-limited settings. Clin Pediatr Emerg Med. 2015;16(4):256-261. https://doi.org/10.1016/j.cpem.2015.10.001.
10. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. medical schools: results of a national survey. Acad Med. 2014;89(12):1681-1686. https://doi.org/10.1097/ACM.0000000000000414.
11. Hall JWW, Holman H, Bornemann P, et al. Point of care ultrasound in family medicine residency programs: A CERA study. Fam Med. 2015;47(9):706-711.
12. Schnobrich DJ, Gladding S, Olson APJ, Duran-Nelson A. Point-of-care ultrasound in internal medicine: A national survey of educational leadership. J Grad Med Educ. 2013;5(3):498-502. https://doi.org/10.4300/JGME-D-12-00215.1.
13. Stolz LA, Stolz U, Fields JM, et al. Emergency medicine resident assessment of the emergency ultrasound milestones and current training recommendations. Acad Emerg Med. 2017;24(3):353-361. https://doi.org/10.1111/acem.13113.
14. Kumar A, Jensen T, Kugler J. Evaluation of trainee competency with point-of-care ultrasonography (POCUS): A conceptual framework and review of existing assessments. J Gen Intern Med. 2019;34(6):1025-1031. https://doi.org/10.1007/s11606-019-04945-4.
15. Levitov A, Frankel HL, Blaivas M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients—part ii: Cardiac ultrasonography. Crit Care Med. 2016;44(6):1206-1227. https://doi.org/10.1097/CCM.0000000000001847.
16. Kobal SL, Trento L, Baharami S, et al. Comparison of effectiveness of hand-carried ultrasound to bedside cardiovascular physical examination. Am J Cardiol. 2005;96(7):1002-1006. https://doi.org/10.1016/j.amjcard.2005.05.060.
17. Ceriani E, Cogliati C. Update on bedside ultrasound diagnosis of pericardial effusion. Intern Emerg Med. 2016;11(3):477-480. https://doi.org/10.1007/s11739-015-1372-8.
18. Labovitz AJ, Noble VE, Bierig M, et al. Focused cardiac ultrasound in the emergent setting: A consensus statement of the American Society of Echocardiography and American College of Emergency Physicians. J Am Soc Echocardiogr. 2010;23(12):1225-1230. https://doi.org/10.1016/j.echo.2010.10.005.
19. Keil-Ríos D, Terrazas-Solís H, González-Garay A, Sánchez-Ávila JF, García-Juárez I. Pocket ultrasound device as a complement to physical examination for ascites evaluation and guided paracentesis. Intern Emerg Med. 2016;11(3):461-466. https://doi.org/10.1007/s11739-016-1406-x.
20. Riddell J, Case A, Wopat R, et al. Sensitivity of emergency bedside ultrasound to detect hydronephrosis in patients with computed tomography–proven stones. West J Emerg Med. 2014;15(1):96-100. https://doi.org/10.5811/westjem.2013.9.15874.
21. Dalziel PJ, Noble VE. Bedside ultrasound and the assessment of renal colic: A review. Emerg Med J. 2013;30(1):3-8. https://doi.org/10.1136/emermed-2012-201375.
22. Whitson MR, Mayo PH. Ultrasonography in the emergency department. Crit Care. 2016;20(1):227. https://doi.org/10.1186/s13054-016-1399-x.
23. Kumar A, Liu G, Chi J, Kugler J. The role of technology in the bedside encounter. Med Clin North Am. 2018;102(3):443-451. https://doi.org/10.1016/j.mcna.2017.12.006.
24. Ma IWY, Arishenkoff S, Wiseman J, et al. Internal medicine point-of-care ultrasound curriculum: Consensus recommendations from the Canadian Internal Medicine Ultrasound (CIMUS) Group. J Gen Intern Med. 2017;32(9):1052-1057. https://doi.org/10.1007/s11606-017-4071-5.
25. Sabath BF, Singh G. Point-of-care ultrasonography as a training milestone for internal medicine residents: The time is now. J Community Hosp Intern Med Perspect. 2016;6(5):33094. https://doi.org/10.3402/jchimp.v6.33094.
26. American College of Emergency Physicians. Ultrasound guidelines: emergency, point-of-care and clinical ultrasound guidelines in medicine. Ann Emerg Med. 2017;69(5):e27-e54. https://doi.org/10.1016/j.annemergmed.2016.08.457.
27. Ramsingh D, Rinehart J, Kain Z, et al. Impact assessment of perioperative point-of-care ultrasound training on anesthesiology residents. Anesthesiology. 2015;123(3):670-682. https://doi.org/10.1097/ALN.0000000000000776.
28. Keddis MT, Cullen MW, Reed DA, et al. Effectiveness of an ultrasound training module for internal medicine residents. BMC Med Educ. 2011;11:75. https://doi.org/10.1186/1472-6920-11-75.
29. Townsend NT, Kendall J, Barnett C, Robinson T. An effective curriculum for focused assessment diagnostic echocardiography: Establishing the learning curve in surgical residents. J Surg Educ. 2016;73(2):190-196. https://doi.org/10.1016/j.jsurg.2015.10.009.
30. Hoppmann RA, Rao VV, Bell F, et al. The evolution of an integrated ultrasound curriculum (iUSC) for medical students: 9-year experience. Crit Ultrasound J. 2015;7(1):18. https://doi.org/10.1186/s13089-015-0035-3.
31. Skalski JH, Elrashidi M, Reed DA, McDonald FS, Bhagra A. Using standardized patients to teach point-of-care ultrasound–guided physical examination skills to internal medicine residents. J Grad Med Educ. 2015;7(1):95-97. https://doi.org/10.4300/JGME-D-14-00178.1.
32. Chisholm CB, Dodge WR, Balise RR, Williams SR, Gharahbaghian L, Beraud A-S. Focused cardiac ultrasound training: How much is enough? J Emerg Med. 2013;44(4):818-822. https://doi.org/10.1016/j.jemermed.2012.07.092.
33. Schmidt GA, Schraufnagel D. Introduction to ATS seminars: Intensive care ultrasound. Ann Am Thorac Soc. 2013;10(5):538-539. https://doi.org/10.1513/AnnalsATS.201306-203ED.
34. Skaarup SH, Laursen CB, Bjerrum AS, Hilberg O. Objective and structured assessment of lung ultrasound competence. A multispecialty Delphi consensus and construct validity study. Ann Am Thorac Soc. 2017;14(4):555-560. https://doi.org/10.1513/AnnalsATS.201611-894OC.
35. Lucas BP, Tierney DM, Jensen TP, et al. Credentialing of hospitalists in ultrasound-guided bedside procedures: A position statement of the Society of Hospital Medicine. J Hosp Med. 2018;13(2):117-125. https://doi.org/10.12788/jhm.2917.
36. Frankel HL, Kirkpatrick AW, Elbarbary M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients-part i: General ultrasonography. Crit Care Med. 2015;43(11):2479-2502. https://doi.org/10.1097/CCM.0000000000001216.
37. Martin LD, Howell EE, Ziegelstein RC, et al. Hand-carried ultrasound performed by hospitalists: Does it improve the cardiac physical examination? Am J Med. 2009;122(1):35-41. https://doi.org/10.1016/j.amjmed.2008.07.022.
38. Desai SV, Asch DA, Bellini LM, et al. Education outcomes in a duty-hour flexibility trial in internal medicine. N Engl J Med. 2018;378(16):1494-1508. https://doi.org/10.1056/NEJMoa1800965.
39. Baltarowich OH, Di Salvo DN, Scoutt LM, et al. National ultrasound curriculum for medical students. Ultrasound Q. 2014;30(1):13-19. https://doi.org/10.1097/RUQ.0000000000000066.
40. Beal EW, Sigmond BR, Sage-Silski L, Lahey S, Nguyen V, Bahner DP. Point-of-care ultrasound in general surgery residency training: A proposal for milestones in graduate medical education ultrasound. J Ultrasound Med. 2017;36(12):2577-2584. https://doi.org/10.1002/jum.14298.
41. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol. 1999;77(6):1121-1134. https://doi.org/10.1037//0022-3514.77.6.1121.
Intervention Performance vs Controls
The 2016 historical control had significantly higher scores compared with the 2017 preintervention group (P < .001; Appendix Table 2). The year-long lecture series resulted in significant increases in median scores for the 2017 group (median preintervention score 0.55 [0.41-0.61] vs median postintervention score 0.84 [0.71-0.90]; P = .006; Appendix Table 1). At the end of the study, the 2017 postintervention scores were significantly higher across multiple knowledge domains compared with the 2016 historical control (Appendix Table 2).
Survey Results
Notably, the 2017 intern class at the end of the intervention did not have significantly different assessment scores for several disease-specific domains, compared with the 2016 control (Appendix Table 2). Nonetheless, the 2017 intern class reported higher levels of confidence in these same domains despite similar scores (Supplementary Figure). The HUD group seldomly cited a lack of confidence in their abilities as a barrier to performing POCUS examinations (17.6%), compared with the no-HUD group (50.0%), despite nearly identical assessment scores between the two groups (Table 1).
DISCUSSION
Previous guidelines have recommended increased HUD access for learners,13,24,35,36 but there have been few investigations that have evaluated the impact of such access on learning POCUS. One previous investigation found that hospitalists who carried HUDs were more likely to identify heart failure on bedside examination.37 In contrast, our study found no improvement in interpretative ability when randomizing interns to carry HUDs for patient care. Notably, interns did not perform more POCUS examinations when given HUDs. We offer several explanations for this finding. First, time-motion studies have demonstrated that internal medicine interns spend less than 15% of their time toward direct patient care.38 It is possible that the demands of being an intern impeded their ability to perform more POCUS examinations on their patients, regardless of HUD access. Alternatively, the interns randomized to no personal access may have used the community device more frequently as a result of the lecture series. Given the cost of HUDs, further studies are needed to assess the degree to which HUD access will improve trainee interpretive ability, especially as more training programs consider the creation of ultrasound curricula.10,11,24,39,40
This study was unique because it followed interns over a year-long course that repeated the same material to assess rates of learning with repeated exposure. Learners improved their scores after the first, but not second, block. Furthermore, the median scores were nearly identical between the first postblock assessment and second preblock assessment (0.81 vs 0.78), suggesting that knowledge was retained between blocks. Together, these findings suggest there may be limitations of traditional lectures that use standardized patient models for practice. Supplementary pedagogies, such as in-the-moment feedback with actual patients, may be needed to promote mastery.14,35
Despite no formal curriculum, the 2016 intern class (historical control) had learned POCUS to some degree based on their higher assessment scores compared with the 2017 intern class during the preintervention period. Such learning may be informal, and yet, trainees may feel confident in making clinical decisions without formalized training, accreditation, or oversight. As suggested by this study, adding regular didactics or giving trainees HUDs may not immediately solve this issue. For assessment items in which the 2017 interns did not significantly differ from the controls, they nonetheless reported higher confidence in their abilities. Similarly, interns randomized to HUDs less frequently cited a lack of confidence in their abilities, despite similar scores to the no-HUD group. Such confidence may be incongruent with their actual knowledge or ability to safely use POCUS. This phenomenon of misplaced confidence is known as the Dunning–Kruger effect, and it may be common with ultrasound learning.41 While confidence can be part of a holistic definition of competency,14 these results raise the concern that trainees may have difficulty assessing their own competency level with POCUS.35
There are several limitations to this study. It was performed at a single institution with limited sample size. It examined only intern physicians because of funding constraints, which limits the generalizability of these findings among medical trainees. Technical ability assessments (including obtaining and interpreting images) were not included. We were unable to track the timing or location of the devices’ usage, and the interns’ self-reported usage rates may be subject to recall bias. To our knowledge, there were no significant lapses in device availability/functionality. Intern physicians in the HUD arm did not receive formal feedback on personally acquired patient images, which may have limited the intervention’s impact.
In conclusion, internal medicine interns who received personal HUDs were not better at recognizing normal/abnormal findings on image assessments, and they did not report performing more POCUS examinations. Since the minority of a trainee’s time is spent toward direct patient care, offering trainees HUDs without substantial guidance may not be enough to promote mastery. Notably, trainees who received HUDs felt more confident in their abilities, despite no objective increase in their actual skill. Finally, interns who received POCUS-related lectures experienced significant benefit upon first exposure to the material, while repeated exposures did not improve performance. Future investigations should stringently track trainee POCUS usage rates with HUDs and assess whether image acquisition ability improves as a result of personal access.
Point-of-care ultrasonography (POCUS) can transform healthcare delivery through its diagnostic and therapeutic expediency.1 POCUS has been shown to bolster diagnostic accuracy, reduce procedural complications, decrease inpatient length of stay, and improve patient satisfaction by encouraging the physician to be present at the bedside.2-8
POCUS has become widespread across a variety of clinical settings as more investigations have demonstrated its positive impact on patient care.1,9-12 This includes the use of POCUS by trainees, who are now utilizing this technology as part of their assessments of patients.13,14 However, trainees may be performing these examinations with minimal oversight, and outside of emergency medicine, there are few guidelines on how to effectively teach POCUS or measure competency.13,14 While POCUS is rapidly becoming a part of inpatient care, teaching physicians may have little experience in ultrasound or the expertise to adequately supervise trainees.14 There is a growing need to study what trainees can learn and how this knowledge is acquired.
Previous investigations have demonstrated that inexperienced users can be taught to use POCUS to identify a variety of pathological states.2,3,15-23 Most of these curricula used a single lecture series as their pedagogical vehicle, and they variably included junior medical trainees. More importantly, the investigations did not explore whether personal access to handheld ultrasound devices (HUDs) improved learning. In theory, improved access to POCUS devices increases opportunities for authentic and deliberate practice, which may be needed to improve trainee skill with POCUS beyond the classroom setting.14
This study aimed to address several ongoing gaps in knowledge related to learning POCUS. First, we hypothesized that personal HUD access would improve trainees’ POCUS-related knowledge and interpretive ability as a result of increased practice opportunities. Second, we hypothesized that trainees who receive personal access to HUDs would be more likely to perform POCUS examinations and feel more confident in their interpretations. Finally, we hypothesized that repeated exposure to POCUS-related lectures would result in greater improvements in knowledge as compared with a single lecture series.
METHODS
Participants and Setting
The 2017 intern class (n = 47) at an academic internal medicine residency program participated in the study. Control data were obtained from the 2016 intern class (historical control; n = 50) and the 2018 intern class (contemporaneous control; n = 52). The Stanford University Institutional Review Board approved this study.
Study Design
The 2017 intern class (n = 47) received POCUS didactics from June 2017 to June 2018. To evaluate whether increased access to HUDs improved learning outcomes, the 2017 interns were randomized 1:1 to receive a personal HUD that could be used for patient care and/or self-directed learning (n = 24) vs no personal HUD (n = 23; Figure). Learning outcomes were assessed over the course of 1 year (see “Outcomes” below) and were compared with the 2016 and 2018 controls. The 2016 intern class had completed a year of training but had not received formalized POCUS didactics (historical control), whereas the 2018 intern class was assessed at the beginning of their year (contemporaneous control; Figure). To make comparisons based on intern experience, baseline data for the 2017 intern class were compared with the 2018 intern class, whereas end-of-study data for the 2017 interns were compared with the 2016 interns.
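The 1:1 allocation described above can be sketched as a simple shuffle-based assignment. This is an illustrative reconstruction only; the trial's actual randomization procedure is not detailed in the text, and the function name and seed below are hypothetical.

```python
import random

def randomize_one_to_one(participant_ids, seed=None):
    """Hypothetical sketch of 1:1 allocation to 'HUD' vs 'no-HUD' arms.

    Shuffles the cohort and splits it in half; an odd-sized cohort
    (e.g., n = 47) yields a 24/23 split, as in the study.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = (len(ids) + 1) // 2  # larger half goes to the HUD arm
    return {"HUD": ids[:half], "no-HUD": ids[half:]}

arms = randomize_one_to_one(range(47), seed=2017)
print(len(arms["HUD"]), len(arms["no-HUD"]))  # 24 23
```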
Outcomes
The primary outcome was the difference in assessment scores at the end of the study period between interns randomized to receive a HUD and those who were not. Secondary outcomes included differences in HUD usage rates, lecture attendance, and assessment scores. To assess whether repeated lecture exposure resulted in greater amounts of learning, this study evaluated for assessment score improvements after each lecture block. Finally, trainee attitudes toward POCUS and their confidence in their interpretative ability were measured at the beginning and end of the study period.
Curriculum Implementation
The lectures were administered as once-weekly, 1-hour didactics to interns rotating on the inpatient wards rotation. This rotation is 4 weeks long, and each intern completes the rotation two to four times per year. Each lecture contained two parts: (1) 20-30 minutes of didactics via Microsoft PowerPoint™ and (2) 30-40 minutes of supervised practice using HUDs on standardized patients. Four lectures were given each month: (1) introduction to POCUS and ultrasound physics, (2) thoracic/lung ultrasound, (3) echocardiography, and (4) abdominal POCUS. The lectures consisted of contrasting cases of normal/abnormal videos and clinical vignettes. These four lectures were repeated each month as new interns rotated on service. Some interns therefore experienced the same content multiple times; this repetition was intentional, allowing assessment of their rates of learning over time. Lecture contents were based on previously published guidelines and expert consensus for teaching POCUS in internal medicine.13,24-26 Content from the Accreditation Council for Graduate Medical Education (ACGME) and the American College of Emergency Physicians (ACEP) was also incorporated because these organizations had published relevant guidelines for teaching POCUS.13,26 Further development of the lectures occurred through review of previously described POCUS-relevant curricula.27-32
Handheld Ultrasound Devices
This study used the Philips Lumify™, a United States Food and Drug Administration–approved device. Interns randomized to HUDs received their own device at the start of the rotation; use of the device outside of the course was at their discretion. All devices were approved for patient use and were encrypted in compliance with our information security office. For privacy reasons, any saved patient images were not reviewed by the researchers. Interns were encouraged to share their findings with supervising physicians during rounds, but actual oversight was not measured. Interns not randomized to HUDs could access a single community device that was shared among all residents and fellows in the hospital. Interns reported the average number of POCUS examinations performed each week via a survey sent during the last week of the rotation.
Assessment Design and Implementation
Assessments evaluating trainee knowledge were administered before, during, and after the study period (Figure). For the 2017 cohort, assessments were also administered at the start and end of the ward month to track knowledge acquisition. Assessment contents were selected from POCUS guidelines for internal medicine and adaptation of the ACGME and ACEP guidelines.13,24,26 Additional content was obtained from major society POCUS tutorials and deidentified images collected by the study authors.13,24,33 In keeping with previously described methodology, the images were shown for approximately 12 seconds, followed by five additional seconds to allow the learner to answer the question.32 Final assessment contents were determined by the authors using the Delphi method.34 A sample assessment can be found in the Appendix Material.
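As a rough illustration of the pacing described above (roughly 12 seconds of image display plus 5 seconds to answer), the total time budget for a timed assessment can be estimated as follows. The function and the 10-item count are hypothetical and are not taken from the study.

```python
def assessment_duration_seconds(n_items, view_s=12, answer_s=5):
    """Estimated total time for a timed image-based assessment.

    Assumes each item shows an image for view_s seconds, then allows
    answer_s additional seconds to respond (per the cited methodology).
    """
    return n_items * (view_s + answer_s)

# A hypothetical 10-item assessment would take about 170 seconds:
print(assessment_duration_seconds(10))  # 170
```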
Surveys
Surveys were administered alongside the assessments to the 2016-2018 intern classes. These surveys assessed trainee attitudes toward POCUS and were based on previously validated assessments.27,28,30 Attitudes were measured using 5-point Likert scales.
Statistical Analysis
For the primary outcome, we performed generalized binomial mixed-effects regressions with survey period, randomization group, and their interaction as independent variables, adjusting for lecture attendance and controlling for intra-intern correlations. A bivariate unadjusted analysis was performed to display the distribution of overall correctness on the assessments. The Wilcoxon signed-rank test was used to assess score differences for dependent (paired) score variables. Analyses were performed in R (R Foundation for Statistical Computing, Vienna, Austria).
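As an illustration of the paired analysis (not the authors' actual code, which was written in R), the following pure-Python sketch runs an exact two-sided Wilcoxon signed-rank test on made-up preblock/postblock scores. It assumes no zero differences and no tied absolute differences, which the toy data satisfy.

```python
from itertools import product

def wilcoxon_signed_rank(pre, post):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.

    Illustrative only; assumes no zero differences and no ties in
    |difference|, so ranks are unambiguous.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    # Rank the differences by absolute magnitude (1 = smallest).
    ranks = {d: r for r, d in enumerate(sorted(diffs, key=abs), start=1)}
    n = len(diffs)
    w_plus = sum(ranks[d] for d in diffs if d > 0)
    w_minus = n * (n + 1) // 2 - w_plus
    w = min(w_plus, w_minus)
    # Enumerate all 2^n sign assignments for the exact null distribution.
    rank_values = sorted(ranks.values())
    dist = [sum(r for r, keep in zip(rank_values, signs) if keep)
            for signs in product([0, 1], repeat=n)]
    p = min(1.0, 2 * sum(1 for v in dist if v <= w) / len(dist))
    return w, p

# Toy data (not study data): every postblock score exceeds its preblock score.
pre  = [0.55, 0.61, 0.53, 0.70, 0.58, 0.62, 0.49, 0.66]
post = [0.81, 0.72, 0.86, 0.80, 0.77, 0.83, 0.71, 0.84]
stat, p = wilcoxon_signed_rank(pre, post)
print(stat, p)  # 0 0.0078125
```

With all eight differences positive, the test statistic is 0 and the exact two-sided P value is 2/2^8 = .0078, mirroring the kind of significant preblock vs postblock improvement reported in the Results.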
RESULTS
Baseline Characteristics
There were 149 interns who participated in this study (Figure). Assessment/survey completion rates were as follows: 2016 control: 68.0%; 2017 preintervention: 97.9%; 2017 postintervention: 89.4%; and 2018 control: 100%. The 2017 interns in the two randomization arms reported similar amounts of prior POCUS exposure in medical school (Table 1).
Primary Outcome: Assessment Scores (HUD vs no HUD)
There were no significant differences in assessment scores at the end of the study between interns randomized to personal HUD access and those with no personal access (Table 1). HUD interns reported performing POCUS assessments on patients a mean of 6.8 (standard deviation [SD], 2.2) times per week vs 6.4 (SD, 2.9) times per week in the no-HUD arm (P = .66). Mean lecture attendance was 75.0% and did not significantly differ between the two arms (Table 1).
Secondary Outcomes
Impact of Repeating Lectures
The 2017 interns demonstrated significant increases in preblock vs postblock assessment scores after first-time exposure to the lectures (median preblock score 0.61 [interquartile range (IQR), 0.53-0.70] vs postblock score 0.81 [IQR, 0.72-0.86]; P < .001; Table 2). However, intern performance on the preblock vs postblock assessments after second-time exposure to the curriculum failed to improve (median second preblock score 0.78 [IQR, 0.69-0.83] vs postblock score 0.81 [IQR, 0.64-0.89]; P = .94). Intern performance on individual domains of knowledge for each block is listed in Appendix Table 1.
Intervention Performance vs Controls
The 2016 historical control had significantly higher scores compared with the 2017 preintervention group (P < .001; Appendix Table 2). The year-long lecture series resulted in significant increases in median scores for the 2017 group (median preintervention score 0.55 [IQR, 0.41-0.61] vs median postintervention score 0.84 [IQR, 0.71-0.90]; P = .006; Appendix Table 1). At the end of the study, the 2017 postintervention scores were significantly higher across multiple knowledge domains compared with the 2016 historical control (Appendix Table 2).
Survey Results
Notably, at the end of the intervention, the 2017 intern class did not have significantly different assessment scores for several disease-specific domains compared with the 2016 control (Appendix Table 2). Nonetheless, the 2017 intern class reported higher levels of confidence in these same domains despite similar scores (Supplementary Figure). The HUD group less often cited a lack of confidence in their abilities as a barrier to performing POCUS examinations (17.6% vs 50.0% in the no-HUD group), despite nearly identical assessment scores between the two groups (Table 1).
DISCUSSION
Previous guidelines have recommended increased HUD access for learners,13,24,35,36 but few investigations have evaluated the impact of such access on learning POCUS. One previous investigation found that hospitalists who carried HUDs were more likely to identify heart failure on bedside examination.37 In contrast, our study found no improvement in interpretive ability when randomizing interns to carry HUDs for patient care. Notably, interns did not perform more POCUS examinations when given HUDs. We offer several explanations for this finding. First, time-motion studies have demonstrated that internal medicine interns spend less than 15% of their time on direct patient care.38 It is possible that the demands of being an intern impeded their ability to perform more POCUS examinations on their patients, regardless of HUD access. Alternatively, the interns randomized to no personal access may have used the community device more frequently as a result of the lecture series. Given the cost of HUDs, further studies are needed to assess the degree to which HUD access will improve trainee interpretive ability, especially as more training programs consider the creation of ultrasound curricula.10,11,24,39,40
This study was unique in that it followed interns over a year-long course in which the same material was repeated, allowing assessment of rates of learning with repeated exposure. Learners improved their scores after the first, but not second, block. Furthermore, the median scores were nearly identical between the first postblock assessment and second preblock assessment (0.81 vs 0.78), suggesting that knowledge was retained between blocks. Together, these findings suggest there may be limitations of traditional lectures that use standardized patient models for practice. Supplementary pedagogies, such as in-the-moment feedback with actual patients, may be needed to promote mastery.14,35
Despite receiving no formal curriculum, the 2016 intern class (historical control) had learned POCUS to some degree, as evidenced by their higher assessment scores compared with the 2017 intern class during the preintervention period. Such learning may be informal, and yet trainees may feel confident in making clinical decisions without formalized training, accreditation, or oversight. As suggested by this study, adding regular didactics or giving trainees HUDs may not immediately solve this issue. For assessment items in which the 2017 interns did not significantly differ from the controls, they nonetheless reported higher confidence in their abilities. Similarly, interns randomized to HUDs less frequently cited a lack of confidence in their abilities, despite similar scores to the no-HUD group. Such confidence may be incongruent with their actual knowledge or ability to safely use POCUS. This phenomenon of misplaced confidence is known as the Dunning–Kruger effect, and it may be common with ultrasound learning.41 While confidence can be part of a holistic definition of competency,14 these results raise the concern that trainees may have difficulty assessing their own competency level with POCUS.35
There are several limitations to this study. It was performed at a single institution with a limited sample size. It examined only intern physicians because of funding constraints, which limits the generalizability of these findings to other medical trainees. Technical ability assessments (including obtaining and interpreting images) were not included. We were unable to track the timing or location of the devices’ usage, and the interns’ self-reported usage rates may be subject to recall bias. To our knowledge, there were no significant lapses in device availability/functionality. Intern physicians in the HUD arm did not receive formal feedback on personally acquired patient images, which may have limited the intervention’s impact.
In conclusion, internal medicine interns who received personal HUDs were not better at recognizing normal/abnormal findings on image assessments, and they did not report performing more POCUS examinations. Because only a minority of a trainee’s time is spent on direct patient care, offering trainees HUDs without substantial guidance may not be enough to promote mastery. Notably, trainees who received HUDs felt more confident in their abilities, despite no objective increase in their actual skill. Finally, interns who received POCUS-related lectures experienced significant benefit upon first exposure to the material, while repeated exposures did not improve performance. Future investigations should stringently track trainee POCUS usage rates with HUDs and assess whether image acquisition ability improves as a result of personal access.
1. Moore CL, Copel JA. Point-of-care ultrasonography. N Engl J Med. 2011;364(8):749-757. https://doi.org/10.1056/NEJMra0909487.
2. Akkaya A, Yesilaras M, Aksay E, Sever M, Atilla OD. The interrater reliability of ultrasound imaging of the inferior vena cava performed by emergency residents. Am J Emerg Med. 2013;31(10):1509-1511. https://doi.org/10.1016/j.ajem.2013.07.006.
3. Razi R, Estrada JR, Doll J, Spencer KT. Bedside hand-carried ultrasound by internal medicine residents versus traditional clinical assessment for the identification of systolic dysfunction in patients admitted with decompensated heart failure. J Am Soc Echocardiogr. 2011;24(12):1319-1324. https://doi.org/10.1016/j.echo.2011.07.013.
4. Dodge KL, Lynch CA, Moore CL, Biroscak BJ, Evans LV. Use of ultrasound guidance improves central venous catheter insertion success rates among junior residents. J Ultrasound Med. 2012;31(10):1519-1526. https://doi.org/10.7863/jum.2012.31.10.1519.
5. Cavanna L, Mordenti P, Bertè R, et al. Ultrasound guidance reduces pneumothorax rate and improves safety of thoracentesis in malignant pleural effusion: Report on 445 consecutive patients with advanced cancer. World J Surg Oncol. 2014;12:139. https://doi.org/10.1186/1477-7819-12-139.
6. Testa A, Francesconi A, Giannuzzi R, Berardi S, Sbraccia P. Economic analysis of bedside ultrasonography (US) implementation in an Internal Medicine department. Intern Emerg Med. 2015;10(8):1015-1024. https://doi.org/10.1007/s11739-015-1320-7.
7. Howard ZD, Noble VE, Marill KA, et al. Bedside ultrasound maximizes patient satisfaction. J Emerg Med. 2014;46(1):46-53. https://doi.org/10.1016/j.jemermed.2013.05.044.
8. Park YH, Jung RB, Lee YG, et al. Does the use of bedside ultrasonography reduce emergency department length of stay for patients with renal colic? A pilot study. Clin Exp Emerg Med. 2016;3(4):197-203. https://doi.org/10.15441/ceem.15.109.
9. Glomb N, D’Amico B, Rus M, Chen C. Point-of-care ultrasound in resource-limited settings. Clin Pediatr Emerg Med. 2015;16(4):256-261. https://doi.org/10.1016/j.cpem.2015.10.001.
10. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. medical schools: results of a national survey. Acad Med. 2014;89(12):1681-1686. https://doi.org/10.1097/ACM.0000000000000414.
11. Hall JWW, Holman H, Bornemann P, et al. Point of care ultrasound in family medicine residency programs: A CERA study. Fam Med. 2015;47(9):706-711.
12. Schnobrich DJ, Gladding S, Olson APJ, Duran-Nelson A. Point-of-care ultrasound in internal medicine: A national survey of educational leadership. J Grad Med Educ. 2013;5(3):498-502. https://doi.org/10.4300/JGME-D-12-00215.1.
13. Stolz LA, Stolz U, Fields JM, et al. Emergency medicine resident assessment of the emergency ultrasound milestones and current training recommendations. Acad Emerg Med. 2017;24(3):353-361. https://doi.org/10.1111/acem.13113.
14. Kumar A, Jensen T, Kugler J. Evaluation of trainee competency with point-of-care ultrasonography (POCUS): A conceptual framework and review of existing assessments. J Gen Intern Med. 2019;34(6):1025-1031. https://doi.org/10.1007/s11606-019-04945-4.
15. Levitov A, Frankel HL, Blaivas M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients—part ii: Cardiac ultrasonography. Crit Care Med. 2016;44(6):1206-1227. https://doi.org/10.1097/CCM.0000000000001847.
16. Kobal SL, Trento L, Baharami S, et al. Comparison of effectiveness of hand-carried ultrasound to bedside cardiovascular physical examination. Am J Cardiol. 2005;96(7):1002-1006. https://doi.org/10.1016/j.amjcard.2005.05.060.
17. Ceriani E, Cogliati C. Update on bedside ultrasound diagnosis of pericardial effusion. Intern Emerg Med. 2016;11(3):477-480. https://doi.org/10.1007/s11739-015-1372-8.
18. Labovitz AJ, Noble VE, Bierig M, et al. Focused cardiac ultrasound in the emergent setting: A consensus statement of the American Society of Echocardiography and American College of Emergency Physicians. J Am Soc Echocardiogr. 2010;23(12):1225-1230. https://doi.org/10.1016/j.echo.2010.10.005.
19. Keil-Ríos D, Terrazas-Solís H, González-Garay A, Sánchez-Ávila JF, García-Juárez I. Pocket ultrasound device as a complement to physical examination for ascites evaluation and guided paracentesis. Intern Emerg Med. 2016;11(3):461-466. https://doi.org/10.1007/s11739-016-1406-x.
20. Riddell J, Case A, Wopat R, et al. Sensitivity of emergency bedside ultrasound to detect hydronephrosis in patients with computed tomography–proven stones. West J Emerg Med. 2014;15(1):96-100. https://doi.org/10.5811/westjem.2013.9.15874.
21. Dalziel PJ, Noble VE. Bedside ultrasound and the assessment of renal colic: A review. Emerg Med J. 2013;30(1):3-8. https://doi.org/10.1136/emermed-2012-201375.
22. Whitson MR, Mayo PH. Ultrasonography in the emergency department. Crit Care. 2016;20(1):227. https://doi.org/10.1186/s13054-016-1399-x.
23. Kumar A, Liu G, Chi J, Kugler J. The role of technology in the bedside encounter. Med Clin North Am. 2018;102(3):443-451. https://doi.org/10.1016/j.mcna.2017.12.006.
24. Ma IWY, Arishenkoff S, Wiseman J, et al. Internal medicine point-of-care ultrasound curriculum: Consensus recommendations from the Canadian Internal Medicine Ultrasound (CIMUS) Group. J Gen Intern Med. 2017;32(9):1052-1057. https://doi.org/10.1007/s11606-017-4071-5.
25. Sabath BF, Singh G. Point-of-care ultrasonography as a training milestone for internal medicine residents: The time is now. J Community Hosp Intern Med Perspect. 2016;6(5):33094. https://doi.org/10.3402/jchimp.v6.33094.
26. American College of Emergency Physicians. Ultrasound guidelines: emergency, point-of-care and clinical ultrasound guidelines in medicine. Ann Emerg Med. 2017;69(5):e27-e54. https://doi.org/10.1016/j.annemergmed.2016.08.457.
27. Ramsingh D, Rinehart J, Kain Z, et al. Impact assessment of perioperative point-of-care ultrasound training on anesthesiology residents. Anesthesiology. 2015;123(3):670-682. https://doi.org/10.1097/ALN.0000000000000776.
28. Keddis MT, Cullen MW, Reed DA, et al. Effectiveness of an ultrasound training module for internal medicine residents. BMC Med Educ. 2011;11:75. https://doi.org/10.1186/1472-6920-11-75.
1. Moore CL, Copel JA. Point-of-care ultrasonography. N Engl J Med. 2011;364(8):749-757. https://doi.org/10.1056/NEJMra0909487.
2. Akkaya A, Yesilaras M, Aksay E, Sever M, Atilla OD. The interrater reliability of ultrasound imaging of the inferior vena cava performed by emergency residents. Am J Emerg Med. 2013;31(10):1509-1511. https://doi.org/10.1016/j.ajem.2013.07.006.
3. Razi R, Estrada JR, Doll J, Spencer KT. Bedside hand-carried ultrasound by internal medicine residents versus traditional clinical assessment for the identification of systolic dysfunction in patients admitted with decompensated heart failure. J Am Soc Echocardiogr. 2011;24(12):1319-1324. https://doi.org/10.1016/j.echo.2011.07.013.
4. Dodge KL, Lynch CA, Moore CL, Biroscak BJ, Evans LV. Use of ultrasound guidance improves central venous catheter insertion success rates among junior residents. J Ultrasound Med. 2012;31(10):1519-1526. https://doi.org/10.7863/jum.2012.31.10.1519.
5. Cavanna L, Mordenti P, Bertè R, et al. Ultrasound guidance reduces pneumothorax rate and improves safety of thoracentesis in malignant pleural effusion: Report on 445 consecutive patients with advanced cancer. World J Surg Oncol. 2014;12:139. https://doi.org/10.1186/1477-7819-12-139.
6. Testa A, Francesconi A, Giannuzzi R, Berardi S, Sbraccia P. Economic analysis of bedside ultrasonography (US) implementation in an Internal Medicine department. Intern Emerg Med. 2015;10(8):1015-1024. https://doi.org/10.1007/s11739-015-1320-7.
7. Howard ZD, Noble VE, Marill KA, et al. Bedside ultrasound maximizes patient satisfaction. J Emerg Med. 2014;46(1):46-53. https://doi.org/10.1016/j.jemermed.2013.05.044.
8. Park YH, Jung RB, Lee YG, et al. Does the use of bedside ultrasonography reduce emergency department length of stay for patients with renal colic? A pilot study. Clin Exp Emerg Med. 2016;3(4):197-203. https://doi.org/10.15441/ceem.15.109.
9. Glomb N, D’Amico B, Rus M, Chen C. Point-of-care ultrasound in resource-limited settings. Clin Pediatr Emerg Med. 2015;16(4):256-261. https://doi.org/10.1016/j.cpem.2015.10.001.
10. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. medical schools: results of a national survey. Acad Med. 2014;89(12):1681-1686. https://doi.org/10.1097/ACM.0000000000000414.
11. Hall JWW, Holman H, Bornemann P, et al. Point of care ultrasound in family medicine residency programs: A CERA study. Fam Med. 2015;47(9):706-711.
12. Schnobrich DJ, Gladding S, Olson APJ, Duran-Nelson A. Point-of-care ultrasound in internal medicine: A national survey of educational leadership. J Grad Med Educ. 2013;5(3):498-502. https://doi.org/10.4300/JGME-D-12-00215.1.
13. Stolz LA, Stolz U, Fields JM, et al. Emergency medicine resident assessment of the emergency ultrasound milestones and current training recommendations. Acad Emerg Med. 2017;24(3):353-361. https://doi.org/10.1111/acem.13113.
14. Kumar A, Jensen T, Kugler J. Evaluation of trainee competency with point-of-care ultrasonography (POCUS): A conceptual framework and review of existing assessments. J Gen Intern Med. 2019;34(6):1025-1031. https://doi.org/10.1007/s11606-019-04945-4.
15. Levitov A, Frankel HL, Blaivas M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients—part ii: Cardiac ultrasonography. Crit Care Med. 2016;44(6):1206-1227. https://doi.org/10.1097/CCM.0000000000001847.
16. Kobal SL, Trento L, Baharami S, et al. Comparison of effectiveness of hand-carried ultrasound to bedside cardiovascular physical examination. Am J Cardiol. 2005;96(7):1002-1006. https://doi.org/10.1016/j.amjcard.2005.05.060.
17. Ceriani E, Cogliati C. Update on bedside ultrasound diagnosis of pericardial effusion. Intern Emerg Med. 2016;11(3):477-480. https://doi.org/10.1007/s11739-015-1372-8.
18. Labovitz AJ, Noble VE, Bierig M, et al. Focused cardiac ultrasound in the emergent setting: A consensus statement of the American Society of Echocardiography and American College of Emergency Physicians. J Am Soc Echocardiogr. 2010;23(12):1225-1230. https://doi.org/10.1016/j.echo.2010.10.005.
19. Keil-Ríos D, Terrazas-Solís H, González-Garay A, Sánchez-Ávila JF, García-Juárez I. Pocket ultrasound device as a complement to physical examination for ascites evaluation and guided paracentesis. Intern Emerg Med. 2016;11(3):461-466. https://doi.org/10.1007/s11739-016-1406-x.
20. Riddell J, Case A, Wopat R, et al. Sensitivity of emergency bedside ultrasound to detect hydronephrosis in patients with computed tomography–proven stones. West J Emerg Med. 2014;15(1):96-100. https://doi.org/10.5811/westjem.2013.9.15874.
21. Dalziel PJ, Noble VE. Bedside ultrasound and the assessment of renal colic: A review. Emerg Med J. 2013;30(1):3-8. https://doi.org/10.1136/emermed-2012-201375.
22. Whitson MR, Mayo PH. Ultrasonography in the emergency department. Crit Care. 2016;20(1):227. https://doi.org/10.1186/s13054-016-1399-x.
23. Kumar A, Liu G, Chi J, Kugler J. The role of technology in the bedside encounter. Med Clin North Am. 2018;102(3):443-451. https://doi.org/10.1016/j.mcna.2017.12.006.
24. Ma IWY, Arishenkoff S, Wiseman J, et al. Internal medicine point-of-care ultrasound curriculum: Consensus recommendations from the Canadian Internal Medicine Ultrasound (CIMUS) Group. J Gen Intern Med. 2017;32(9):1052-1057. https://doi.org/10.1007/s11606-017-4071-5.
25. Sabath BF, Singh G. Point-of-care ultrasonography as a training milestone for internal medicine residents: The time is now. J Community Hosp Intern Med Perspect. 2016;6(5):33094. https://doi.org/10.3402/jchimp.v6.33094.
26. American College of Emergency Physicians. Ultrasound guidelines: emergency, point-of-care and clinical ultrasound guidelines in medicine. Ann Emerg Med. 2017;69(5):e27-e54. https://doi.org/10.1016/j.annemergmed.2016.08.457.
27. Ramsingh D, Rinehart J, Kain Z, et al. Impact assessment of perioperative point-of-care ultrasound training on anesthesiology residents. Anesthesiology. 2015;123(3):670-682. https://doi.org/10.1097/ALN.0000000000000776.
28. Keddis MT, Cullen MW, Reed DA, et al. Effectiveness of an ultrasound training module for internal medicine residents. BMC Med Educ. 2011;11:75. https://doi.org/10.1186/1472-6920-11-75.
29. Townsend NT, Kendall J, Barnett C, Robinson T. An effective curriculum for focused assessment diagnostic echocardiography: Establishing the learning curve in surgical residents. J Surg Educ. 2016;73(2):190-196. https://doi.org/10.1016/j.jsurg.2015.10.009.
30. Hoppmann RA, Rao VV, Bell F, et al. The evolution of an integrated ultrasound curriculum (iUSC) for medical students: 9-year experience. Crit Ultrasound J. 2015;7(1):18. https://doi.org/10.1186/s13089-015-0035-3.
31. Skalski JH, Elrashidi M, Reed DA, McDonald FS, Bhagra A. Using standardized patients to teach point-of-care ultrasound–guided physical examination skills to internal medicine residents. J Grad Med Educ. 2015;7(1):95-97. https://doi.org/10.4300/JGME-D-14-00178.1.
32. Chisholm CB, Dodge WR, Balise RR, Williams SR, Gharahbaghian L, Beraud A-S. Focused cardiac ultrasound training: How much is enough? J Emerg Med. 2013;44(4):818-822. https://doi.org/10.1016/j.jemermed.2012.07.092.
33. Schmidt GA, Schraufnagel D. Introduction to ATS seminars: Intensive care ultrasound. Ann Am Thorac Soc. 2013;10(5):538-539. https://doi.org/10.1513/AnnalsATS.201306-203ED.
34. Skaarup SH, Laursen CB, Bjerrum AS, Hilberg O. Objective and structured assessment of lung ultrasound competence. A multispecialty Delphi consensus and construct validity study. Ann Am Thorac Soc. 2017;14(4):555-560. https://doi.org/10.1513/AnnalsATS.201611-894OC.
35. Lucas BP, Tierney DM, Jensen TP, et al. Credentialing of hospitalists in ultrasound-guided bedside procedures: A position statement of the Society of Hospital Medicine. J Hosp Med. 2018;13(2):117-125. https://doi.org/10.12788/jhm.2917.
36. Frankel HL, Kirkpatrick AW, Elbarbary M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients-part i: General ultrasonography. Crit Care Med. 2015;43(11):2479-2502. https://doi.org/10.1097/CCM.0000000000001216.
37. Martin LD, Howell EE, Ziegelstein RC, et al. Hand-carried ultrasound performed by hospitalists: Does it improve the cardiac physical examination? Am J Med. 2009;122(1):35-41. https://doi.org/10.1016/j.amjmed.2008.07.022.
38. Desai SV, Asch DA, Bellini LM, et al. Education outcomes in a duty-hour flexibility trial in internal medicine. N Engl J Med. 2018;378(16):1494-1508. https://doi.org/10.1056/NEJMoa1800965.
39. Baltarowich OH, Di Salvo DN, Scoutt LM, et al. National ultrasound curriculum for medical students. Ultrasound Q. 2014;30(1):13-19. https://doi.org/10.1097/RUQ.0000000000000066.
40. Beal EW, Sigmond BR, Sage-Silski L, Lahey S, Nguyen V, Bahner DP. Point-of-care ultrasound in general surgery residency training: A proposal for milestones in graduate medical education ultrasound. J Ultrasound Med. 2017;36(12):2577-2584. https://doi.org/10.1002/jum.14298.
41. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol. 1999;77(6):1121-1134. https://doi.org/10.1037//0022-3514.77.6.1121.
© 2020 Society of Hospital Medicine
Lean-Based Redesign of Multidisciplinary Rounds on General Medicine Service
Given that multiple disciplines are often involved in caring for patients admitted to the hospital, timely communication, collaboration, and coordination amongst various disciplines is necessary for safe and effective patient care.1 With the focus on improving patient satisfaction and throughput in hospitals, it is also important to make more accurate predictions of the discharge date and allow time for patients and their families to prepare for discharge.2-4
Multidisciplinary rounds (MDR) are defined as structured daily communication amongst key members of the patient’s care team (eg, nurses, physicians, case managers, social workers, pharmacists, and rehabilitation services). MDR have been shown to be a useful strategy for ensuring that all members of the care team are updated on the plan of care for the patient.5 During MDR, a brief “check-in” discussing the patient’s plan of care, pending needs, and barriers to discharge allows all team members, patients, and families to effectively coordinate care and plan and prepare for discharge.
Multiple studies have reported increased collaboration and improved communication between disciplines with the use of such multidisciplinary rounding.2,5-7 Additionally, MDR have been shown to improve patient outcomes8 and reduce adverse events,9 length of stay (LOS),6,8 cost of care,8 and readmissions.1
We redesigned MDR on the general medicine wards at our institution in October 2014 by using Lean management techniques. Lean is defined as a set of philosophies and methods that aim to create transformation in thinking, behavior, and culture in each process, with the goal of maximizing the value for the patients and providers, adding efficiency, and reducing waste and waits.10
In this study, we evaluate whether this new model of MDR was associated with a decrease in the LOS. We also evaluate whether this new model of MDR was associated with an increase in discharges before noon, documentation of estimated discharge date (EDD) in our electronic health record (EHR), and patient satisfaction.
METHODS
Setting, Design, and Patients
The study was conducted on the teaching general medicine service at our institution, an urban, 484-bed academic hospital. The general medicine service has patients on 4 inpatient units (total of 95 beds) and is managed by 5 teaching service teams.
We performed a pre-post study. The preperiod (in which the old model of MDR was followed) included 4000 patients discharged between September 1, 2013, and October 22, 2014. The postperiod (in which the new model of MDR was followed) included 2085 patients discharged between October 23, 2014, and April 30, 2015. We excluded 139 patients who died in the hospital prior to discharge, as well as patients on the nonteaching and/or private practice services.
All data were provided by our institution’s Digital Solutions Department. Our institutional review board issued a letter of determination exempting this study from further review because it was deemed to be a quality improvement initiative.
Use of Lean Management to Redesign our MDR
Our institution has incorporated the Lean management system to continually add value to services through the elimination of waste, thus simultaneously optimizing the quality of patient care, cost, and patient satisfaction.11 Lean, derived from the Toyota Production System, has long been used in manufacturing and in recent decades has spread to healthcare.12 We leveraged the following 3 key Lean techniques to redesign our MDR: (1) value stream management (VSM), (2) rapid process improvement workshops (RPIW), and (3) active daily management (ADM), as detailed in supplementary Appendix 1.
Interventions
Outcomes
The primary outcome was mean LOS. The secondary outcomes were (1) discharges before noon, (2) recording of the EDD in our EHR within 24 hours of admission (as time stamped on our EHR), and (3) patient satisfaction.
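The two timestamp-based secondary outcomes reduce to simple comparisons against the admission and discharge times recorded in the EHR. A minimal sketch of how such flags can be derived is below; the function names and example timestamps are hypothetical illustrations, not the study's actual extraction logic.

```python
# Deriving the timestamp-based secondary-outcome flags described above.
# Function names and example timestamps are hypothetical illustrations.
from datetime import datetime, timedelta


def discharged_before_noon(discharge_ts: datetime) -> bool:
    """True if the discharge timestamp falls before 12:00 local time."""
    return discharge_ts.hour < 12


def edd_recorded_within_24h(admit_ts: datetime, edd_entry_ts: datetime) -> bool:
    """True if the estimated discharge date was time-stamped within 24 hours of admission."""
    return edd_entry_ts - admit_ts <= timedelta(hours=24)


print(discharged_before_noon(datetime(2014, 11, 3, 10, 40)))  # → True
print(edd_recorded_within_24h(datetime(2014, 11, 1, 18, 0),
                              datetime(2014, 11, 2, 9, 30)))  # → True
```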
Data for patient satisfaction were obtained using the Press Ganey survey. We used data on patient satisfaction scores for the following 2 relevant questions on this survey: (1) extent to which the patient felt ready to be discharged and (2) how well staff worked together to care for the patient. Proportions of the “top-box” (“very good”) were used for the analysis. These survey data were available on 467 patients (11.7%) in the preperiod and 188 patients (9.0%) in the postperiod.
Data Analysis
A sensitivity analysis was conducted on a second cohort that included a subset of patients from the preperiod between November 1, 2013, and April 30, 2014, and a subset of patients from the postperiod between November 1, 2014, and April 1, 2015, to control for the calendar period (supplementary Appendix 2).
All analyses were conducted in R version 3.3.0, with the linear mixed-effects model lme4 statistical package.13,14
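The study fit its models in R with lme4; as an illustrative analogue only, a comparable linear mixed-effects model (fixed effect for study period, random intercept for teaching team) can be sketched in Python with statsmodels. The data below are simulated and the variable names are hypothetical; this does not reproduce the study's actual model specification.

```python
# Illustrative analogue of an lme4-style analysis: LOS regressed on study
# period with a random intercept per teaching team. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "post": rng.integers(0, 2, n),   # 0 = preperiod, 1 = postperiod
    "team": rng.integers(0, 5, n),   # 5 teaching service teams
})
team_effect = rng.normal(0, 0.3, 5)
# Simulated length of stay (days) with no true period effect
df["los"] = 4.5 + team_effect[df["team"]] + rng.exponential(1.5, n)

# Fixed effect for period, random intercept for team
result = smf.mixedlm("los ~ post", df, groups=df["team"]).fit()
coef = result.params["post"]
print(result.summary())
```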
RESULTS
Table 3 shows the differences in the outcomes between the pre- and postperiods. There was no change in the LOS or in the LOS adjusted for case mix index (CMI). There was a 3.9% increase in discharges before noon in the postperiod compared with the preperiod (95% CI, 2.4% to 5.3%; P < .001). There was a 9.9% increase in the percentage of patients for whom the EDD was recorded in our EHR within 24 hours of admission (95% CI, 7.4% to 12.4%; P < .001). There was no change in the “top-box” patient satisfaction scores.
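The before-noon and EDD comparisons above are differences between two proportions with 95% confidence intervals. A minimal sketch of such a calculation (using a Wald interval) is below; the counts are hypothetical and chosen only to illustrate the computation, not to reproduce the study's exact figures.

```python
# Difference between two proportions with a Wald 95% CI, as used for
# outcomes like discharges before noon. Counts are hypothetical.
from math import sqrt


def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Return (difference, lower, upper) for p2 - p1 with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se


# Hypothetical counts: 320/4000 before-noon discharges pre, 250/2085 post
diff, lo, hi = prop_diff_ci(320, 4000, 250, 2085)
print(f"difference = {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```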
There were only marginal differences in the results between the entire cohort and a second subset cohort used for sensitivity analysis (supplementary Appendix 2).
DISCUSSION
In our study, there was no change in the mean LOS with the new model of MDR. There was an increase in discharges before noon and in recording of the EDD in our EHR within 24 hours of admission in the postperiod when the Lean-based new model of MDR was utilized. There was no change in patient satisfaction. With no change in staffing, we were able to accommodate the increase in the discharge volume in the postperiod.
We believe our results are attributable to several factors: clearly defined roles and responsibilities for all participants of MDR; the inclusion of a more experienced general medicine attending physician (compared with housestaff); Lean management techniques to identify gaps in the patient’s journey from emergency department to discharge using VSM; the development of appropriate workflows and standard work on how the multidisciplinary teams would work together at RPIWs; and ADM to ensure sustainability and engagement among frontline members and institutional leaders. To sustain these gains, we planned to continue monitoring data in daily, weekly, and monthly forums with senior physician and administrative leaders. Planning for additional interventions is underway, including moving MDR to the bedside, instituting an afternoon “check-in” that would enable more detailed action planning, and addressing barriers in a timely manner for patients ready to discharge the following day.
Our study has a few limitations. First, this is an observational study that cannot determine causation. Second, this is a single-center study conducted on patients only on the general medicine teaching service. Third, there were several concurrent interventions implemented at our institution to improve LOS, throughput, and patient satisfaction in addition to MDR, thus making it difficult to isolate the impact of our intervention. Fourth, in the new model of MDR, rounds took place only 5 days per week, thereby possibly limiting the potential impact on our outcomes. Fifth, while we showed improvements in the discharges before noon and recording of EDD in the post period, we were not able to achieve our target of 25% discharges before noon or 100% recording of EDD in this time period. We believe the limited amount of time between the pre- and postperiods to allow for adoption and learning of the processes might have contributed to the underestimation of the impact of the new model of MDR, thereby limiting our ability to achieve our targets. Sixth, the response rate on the Press Ganey survey was low, and we did not directly survey patients or families for their satisfaction with MDR.
Our study has several strengths. To our knowledge, this is the first study to embed Lean management techniques in the design of MDR in the inpatient setting. While several studies have demonstrated improvements in discharges before noon through the implementation of MDR, they have not incorporated Lean management techniques, which we believe are critical to ensure the sustainability of results.1,3,5,6,8,15 Second, while it was not measured, there was a high level of provider engagement in the process in the new model of MDR. Third, because the MDR were conducted at the nurse’s station on each inpatient unit in the new model instead of in a conference room, it was well attended by all members of the multidisciplinary team. Fourth, the presence of a visibility board allowed for all team members to have easy access to visual feedback throughout the day, even if they were not present at the MDR. Fifth, we believe that there was also more accurate estimation of the date and time of discharge in the new model of MDR because the discussion was facilitated by the case manager, who is experienced in identifying barriers to discharge (compared with the housestaff in the old model of MDR), and included the more experienced attending physician. Finally, the consistent presence of a multidisciplinary team at MDR allowed for the incorporation of everyone’s concerns at one time, thereby limiting the need for paging multiple disciplines throughout the day, which led to quicker resolution of issues and assignment of pending tasks.
In conclusion, our study shows no change in the mean LOS when the Lean-based model of MDR was utilized. Our study demonstrates an increase in discharges before noon and in recording of the EDD in our EHR within 24 hours of admission in the postperiod when the Lean-based model of MDR was utilized. There was no change in patient satisfaction. While this study was conducted at an academic medical center on the general medicine wards, we believe our new model of MDR, which leveraged Lean management techniques, may successfully impact patient flow in other inpatient clinical services and in nonteaching hospitals.
Disclosure
The authors report no financial conflicts of interest and have nothing to disclose.
1. Townsend-Gervis M, Cornell P, Vardaman JM. Interdisciplinary rounds and structured communication reduce re-admissions and improve some patient outcomes. West J Nurs Res. 2014;36(7):917-928.
2. Vazirani S, Hays RD, Shapiro MF, Cowan M. Effect of a multidisciplinary intervention on communication and collaboration among physicians and nurses. Am J Crit Care. 2005;14(1):71-77.
3. Wertheimer B, Jacobs RE, Bailey M, et al. Discharge before noon: an achievable hospital goal. J Hosp Med. 2014;9(4):210-214.
4. Wertheimer B, Jacobs RE, Iturrate E, Bailey M, Hochman K. Discharge before noon: effect on throughput and sustainability. J Hosp Med. 2015;10(10):664-669.
5. Halm MA, Gagner S, Goering M, Sabo J, Smith M, Zaccagnini M. Interdisciplinary rounds: impact on patients, families, and staff. Clin Nurse Spec. 2003;17(3):133-142.
6. O’Mahony S, Mazur E, Charney P, Wang Y, Fine J. Use of multidisciplinary rounds to simultaneously improve quality outcomes, enhance resident education, and shorten length of stay. J Gen Intern Med. 2007;22(8):1073-1079.
7. Reimer N, Herbener L. Round and round we go: rounding strategies to impact exemplary professional practice. Clin J Oncol Nurs. 2014;18(6):654-660.
8. Curley C, McEachern JE, Speroff T. A firm trial of interdisciplinary rounds on the inpatient medical wards: an intervention designed using continuous quality improvement. Med Care. 1998;36(8 Suppl):AS4-AS12.
9. Baggs JG, Ryan SA, Phelps CE, Richeson JF, Johnson JE. The association between interdisciplinary collaboration and patient outcomes in a medical intensive care unit. Heart Lung. 1992;21(1):18-24.
10. Lawal AK, Rotter T, Kinsman L, et al. Lean management in health care: definition, concepts, methodology and effects reported (systematic review protocol). Syst Rev. 2014;3:103.
11. Liker JK. The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer. New York: McGraw-Hill Education; 2004.
12. Kane M, Chui K, Rimicci J, et al. Lean manufacturing improves emergency department throughput and patient satisfaction. J Nurs Adm. 2015;45(9):429-434.
13. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2016. http://www.R-project.org/. Accessed November 7, 2017.
14. Bates D, Mächler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015;67(1):1-48.
15. O’Leary KJ, Buck R, Fligiel HM, et al. Structured interdisciplinary rounds in a medical teaching unit: improving patient safety. Arch Intern Med. 2011;171(7):678-684.
Given that multiple disciplines are often involved in caring for patients admitted to the hospital, timely communication, collaboration, and coordination amongst various disciplines is necessary for safe and effective patient care.1 With the focus on improving patient satisfaction and throughput in hospitals, it is also important to make more accurate predictions of the discharge date and allow time for patients and their families to prepare for discharge.2-4
Multidisciplinary rounds (MDR) are defined as structured daily communication amongst key members of the patient’s care team (eg, nurses, physicians, case managers, social workers, pharmacists, and rehabilitation services). MDR have shown to be a useful strategy for ensuring that all members of the care team are updated on the plan of care for the patient.5 During MDR, a brief “check-in” discussing the patient’s plan of care, pending needs, and barriers to discharge allows all team members, patients, and families to effectively coordinate care and plan and prepare for discharge.
Multiple studies have reported increased collaboration and improved communication between disciplines with the use of such multidisciplinary rounding.2,5-7 Additionally, MDR have been shown to improve patient outcomes8 and reduce adverse events,9 length of stay (LOS),6,8 cost of care,8 and readmissions.1
We redesigned MDR on the general medicine wards at our institution in October 2014 by using Lean management techniques. Lean is defined as a set of philosophies and methods that aim to create transformation in thinking, behavior, and culture in each process, with the goal of maximizing the value for the patients and providers, adding efficiency, and reducing waste and waits.10
In this study, we evaluate whether this new model of MDR was associated with a decrease in the LOS. We also evaluate whether this new model of MDR was associated with an increase in discharges before noon, documentation of estimated discharge date (EDD) in our electronic health record (EHR), and patient satisfaction.
METHODS
Setting, Design, and Patients
The study was conducted on the teaching general medicine service at our institution, an urban, 484-bed academic hospital. The general medicine service has patients on 4 inpatient units (total of 95 beds) and is managed by 5 teaching service teams.
We performed a pre-post study. The preperiod (in which the old model of MDR was followed) included 4000 patients discharged between September 1, 2013, and October 22, 2014. The postperiod (in which the new model of MDR was followed) included 2085 patients discharged between October 23, 2014, and April 30, 2015. We excluded 139 patients that died in the hospital prior to discharge and patients on the nonteaching and/or private practice service.
All data were provided by our institution’s Digital Solutions Department. Our institutional review board issued a letter of determination exempting this study from further review because it was deemed to be a quality improvement initiative.
Use of Lean Management to Redesign our MDR
Our institution has incorporated the Lean management system to continually add value to services through the elimination of waste, thus simultaneously optimizing the quality of patient care, cost, and patient satisfaction.11 Lean, derived from the Toyota Production System, has long been used in manufacturing and in recent decades has spread to healthcare.12 We leveraged the following 3 key Lean techniques to redesign our MDR: (1) value stream management (VSM), (2) rapid process improvement workshops (RPIW), and (3) active daily management (ADM), as detailed in supplementary Appendix 1.
Interventions
Outcomes
The primary outcome was mean LOS. The secondary outcomes were (1) discharges before noon, (2) recording of the EDD in our EHR within 24 hours of admission (as time stamped on our EHR), and (3) patient satisfaction.
Data for patient satisfaction were obtained using the Press Ganey survey. We used data on patient satisfaction scores for the following 2 relevant questions on this survey: (1) extent to which the patient felt ready to be discharged and (2) how well staff worked together to care for the patient. Proportions of the “top-box” (“very good”) were used for the analysis. These survey data were available on 467 patients (11.7%) in the preperiod and 188 patients (9.0%) in the postperiod.
Data Analysis
A sensitivity analysis was conducted on a second cohort that included a subset of patients from the preperiod between November 1, 2013, and April 30, 2014, and a subset of patients from the postperiod between November 1, 2014, and April 1, 2015, to control for the calendar period (supplementary Appendix 2).
All analyses were conducted in R version 3.3.0, with the linear mixed-effects model lme4 statistical package.13,14
RESULTS
Table 3 shows the differences in the outcomes between the pre- and postperiods. There was no change in the LOS or LOS adjusted for CMI. There was a 3.9% increase in discharges before noon in the postperiod compared with the preperiod (95% CI, 2.4% to 5.3%; P < .001). There was a 9.9% increase in the percentage of patients for whom the EDD was recorded in our EHR within 24 hours of admission (95% CI, 7.4% to 12.4%; P < .001). There was no change in the “top-box” patient satisfaction scores.
There were only marginal differences in the results between the entire cohort and a second subset cohort used for sensitivity analysis (supplementary Appendix 2).
DISCUSSION
In our study, there was no change in the mean LOS with the new model of MDR. There was an increase in discharges before noon and in recording of the EDD in our EHR within 24 hours of admission in the postperiod when the Lean-based new model of MDR was utilized. There was no change in patient satisfaction. With no change in staffing, we were able to accommodate the increase in the discharge volume in the postperiod.
We believe our results are attributable to several factors, including clearly defined roles and responsibilities for all participants of MDR, the inclusion of more experienced general medicine attending physician (compared with housestaff), Lean management techniques to identify gaps in the patient’s journey from emergency department to discharge using VSM, the development of appropriate workflows and standard work on how the multidisciplinary teams would work together at RPIWs, and ADM to ensure sustainability and engagement among frontline members and institutional leaders. In order to sustain this, we planned to continue monitoring data in daily, weekly, and monthly forums with senior physician and administrative leaders. Planning for additional interventions is underway, including moving MDR to the bedside, instituting an afternoon “check-in” that would enable more detailed action planning, and addressing barriers in a timely manner for patients ready to discharge the following day.
Given that multiple disciplines are often involved in caring for patients admitted to the hospital, timely communication, collaboration, and coordination amongst various disciplines is necessary for safe and effective patient care.1 With the focus on improving patient satisfaction and throughput in hospitals, it is also important to make more accurate predictions of the discharge date and allow time for patients and their families to prepare for discharge.2-4
Multidisciplinary rounds (MDR) are defined as structured daily communication amongst key members of the patient’s care team (eg, nurses, physicians, case managers, social workers, pharmacists, and rehabilitation services). MDR have been shown to be a useful strategy for ensuring that all members of the care team are updated on the plan of care for the patient.5 During MDR, a brief “check-in” discussing the patient’s plan of care, pending needs, and barriers to discharge allows all team members, patients, and families to effectively coordinate care and plan and prepare for discharge.
Multiple studies have reported increased collaboration and improved communication between disciplines with the use of such multidisciplinary rounding.2,5-7 Additionally, MDR have been shown to improve patient outcomes8 and reduce adverse events,9 length of stay (LOS),6,8 cost of care,8 and readmissions.1
We redesigned MDR on the general medicine wards at our institution in October 2014 by using Lean management techniques. Lean is defined as a set of philosophies and methods that aim to create transformation in thinking, behavior, and culture in each process, with the goal of maximizing the value for the patients and providers, adding efficiency, and reducing waste and waits.10
In this study, we evaluate whether this new model of MDR was associated with a decrease in the LOS. We also evaluate whether this new model of MDR was associated with an increase in discharges before noon, documentation of estimated discharge date (EDD) in our electronic health record (EHR), and patient satisfaction.
METHODS
Setting, Design, and Patients
The study was conducted on the teaching general medicine service at our institution, an urban, 484-bed academic hospital. The general medicine service has patients on 4 inpatient units (total of 95 beds) and is managed by 5 teaching service teams.
We performed a pre-post study. The preperiod (in which the old model of MDR was followed) included 4000 patients discharged between September 1, 2013, and October 22, 2014. The postperiod (in which the new model of MDR was followed) included 2085 patients discharged between October 23, 2014, and April 30, 2015. We excluded 139 patients who died in the hospital prior to discharge, as well as patients on the nonteaching and/or private practice service.
All data were provided by our institution’s Digital Solutions Department. Our institutional review board issued a letter of determination exempting this study from further review because it was deemed to be a quality improvement initiative.
Use of Lean Management to Redesign our MDR
Our institution has incorporated the Lean management system to continually add value to services through the elimination of waste, thus simultaneously optimizing the quality of patient care, cost, and patient satisfaction.11 Lean, derived from the Toyota Production System, has long been used in manufacturing and in recent decades has spread to healthcare.12 We leveraged the following 3 key Lean techniques to redesign our MDR: (1) value stream management (VSM), (2) rapid process improvement workshops (RPIW), and (3) active daily management (ADM), as detailed in supplementary Appendix 1.
Interventions
Outcomes
The primary outcome was mean LOS. The secondary outcomes were (1) discharges before noon, (2) recording of the EDD in our EHR within 24 hours of admission (as time stamped on our EHR), and (3) patient satisfaction.
Data for patient satisfaction were obtained using the Press Ganey survey. We used data on patient satisfaction scores for the following 2 relevant questions on this survey: (1) extent to which the patient felt ready to be discharged and (2) how well staff worked together to care for the patient. Proportions of the “top-box” (“very good”) were used for the analysis. These survey data were available on 467 patients (11.7%) in the preperiod and 188 patients (9.0%) in the postperiod.
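The EDD-recording outcome above reduces to a timestamp comparison. A minimal sketch of that check (the function and variable names are hypothetical, not the actual EHR schema):

```python
from datetime import datetime, timedelta

def edd_recorded_within_24h(admission_ts, edd_entry_ts):
    """True if the estimated discharge date was time-stamped in the EHR
    within 24 hours of admission (the study's secondary outcome definition)."""
    return timedelta(0) <= (edd_entry_ts - admission_ts) <= timedelta(hours=24)

# Hypothetical admission: an EDD entered 20 hours later counts; 30 hours does not
admit = datetime(2014, 11, 3, 14, 0)
print(edd_recorded_within_24h(admit, admit + timedelta(hours=20)))  # True
print(edd_recorded_within_24h(admit, admit + timedelta(hours=30)))  # False
```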
Data Analysis
A sensitivity analysis was conducted on a second cohort that included a subset of patients from the preperiod between November 1, 2013, and April 30, 2014, and a subset of patients from the postperiod between November 1, 2014, and April 1, 2015, to control for the calendar period (supplementary Appendix 2).
All analyses were conducted in R version 3.3.0, with the linear mixed-effects model lme4 statistical package.13,14
RESULTS
Table 3 shows the differences in the outcomes between the pre- and postperiods. There was no change in the LOS or LOS adjusted for CMI. There was a 3.9% increase in discharges before noon in the postperiod compared with the preperiod (95% CI, 2.4% to 5.3%; P < .001). There was a 9.9% increase in the percentage of patients for whom the EDD was recorded in our EHR within 24 hours of admission (95% CI, 7.4% to 12.4%; P < .001). There was no change in the “top-box” patient satisfaction scores.
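The between-period differences above are differences in proportions with 95% CIs. For reference, a Wald interval for a difference of two proportions can be sketched in a few lines; the study's reported intervals come from regression models (lme4), so this simple interval is only an approximation, and the counts below are illustrative, not the study's actual numerators:

```python
import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference in proportions (p2 - p1) with a Wald 95% confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Illustrative counts only (pre: 480/4000 discharges before noon; post: 331/2085)
diff, lo, hi = prop_diff_ci(480, 4000, 331, 2085)
print(f"{diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # → 3.9% (95% CI 2.0% to 5.7%)
```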
There were only marginal differences in the results between the entire cohort and a second subset cohort used for sensitivity analysis (supplementary Appendix 2).
DISCUSSION
In our study, there was no change in the mean LOS with the new model of MDR. There was an increase in discharges before noon and in recording of the EDD in our EHR within 24 hours of admission in the postperiod when the Lean-based new model of MDR was utilized. There was no change in patient satisfaction. With no change in staffing, we were able to accommodate the increase in the discharge volume in the postperiod.
We believe our results are attributable to several factors: clearly defined roles and responsibilities for all participants of MDR; the inclusion of a more experienced general medicine attending physician (compared with housestaff); the use of VSM to identify gaps in the patient’s journey from emergency department to discharge; the development of appropriate workflows and standard work for how the multidisciplinary teams would work together at RPIWs; and ADM to ensure sustainability and engagement among frontline members and institutional leaders. To sustain these results, we planned to continue monitoring data in daily, weekly, and monthly forums with senior physician and administrative leaders. Planning for additional interventions is underway, including moving MDR to the bedside, instituting an afternoon “check-in” to enable more detailed action planning, and addressing barriers in a timely manner for patients ready to discharge the following day.
Our study has a few limitations. First, this is an observational study that cannot determine causation. Second, this is a single-center study conducted only on patients on the general medicine teaching service. Third, several concurrent interventions to improve LOS, throughput, and patient satisfaction were implemented at our institution in addition to MDR, making it difficult to isolate the impact of our intervention. Fourth, in the new model of MDR, rounds took place only 5 days per week, possibly limiting the potential impact on our outcomes. Fifth, while we showed improvements in discharges before noon and recording of the EDD in the postperiod, we did not achieve our targets of 25% discharges before noon or 100% recording of the EDD in this time period. We believe the limited time between the pre- and postperiods for adoption and learning of the new processes may have led us to underestimate the impact of the new model of MDR, limiting our ability to achieve these targets. Sixth, the response rate on the Press Ganey survey was low, and we did not directly survey patients or families for their satisfaction with MDR.
Our study has several strengths. To our knowledge, this is the first study to embed Lean management techniques in the design of MDR in the inpatient setting. While several studies have demonstrated improvements in discharges before noon through the implementation of MDR, they have not incorporated Lean management techniques, which we believe are critical to ensure the sustainability of results.1,3,5,6,8,15 Second, while it was not measured, there was a high level of provider engagement in the process in the new model of MDR. Third, because the MDR were conducted at the nurse’s station on each inpatient unit in the new model instead of in a conference room, it was well attended by all members of the multidisciplinary team. Fourth, the presence of a visibility board allowed for all team members to have easy access to visual feedback throughout the day, even if they were not present at the MDR. Fifth, we believe that there was also more accurate estimation of the date and time of discharge in the new model of MDR because the discussion was facilitated by the case manager, who is experienced in identifying barriers to discharge (compared with the housestaff in the old model of MDR), and included the more experienced attending physician. Finally, the consistent presence of a multidisciplinary team at MDR allowed for the incorporation of everyone’s concerns at one time, thereby limiting the need for paging multiple disciplines throughout the day, which led to quicker resolution of issues and assignment of pending tasks.
In conclusion, our study showed no change in the mean LOS when the Lean-based model of MDR was utilized, but it demonstrated increases in discharges before noon and in recording of the EDD in our EHR within 24 hours of admission in the postperiod. There was no change in patient satisfaction. While this study was conducted on the general medicine wards of an academic medical center, we believe our new model of MDR, which leveraged Lean management techniques, may successfully impact patient flow in other inpatient clinical services and in nonteaching hospitals.
Disclosure
The authors report no financial conflicts of interest and have nothing to disclose.
1. Townsend-Gervis M, Cornell P, Vardaman JM. Interdisciplinary Rounds and Structured Communication Reduce Re-Admissions and Improve Some Patient Outcomes. West J Nurs Res. 2014;36(7):917-928. PubMed
2. Vazirani S, Hays RD, Shapiro MF, Cowan M. Effect of a multidisciplinary intervention on communication and collaboration among physicians and nurses. Am J Crit Care. 2005;14(1):71-77. PubMed
3. Wertheimer B, Jacobs RE, Bailey M, et al. Discharge before noon: an achievable hospital goal. J Hosp Med. 2014;9(4):210-214. PubMed
4. Wertheimer B, Jacobs RE, Iturrate E, Bailey M, Hochman K. Discharge before noon: Effect on throughput and sustainability. J Hosp Med. 2015;10(10):664-669. PubMed
5. Halm MA, Gagner S, Goering M, Sabo J, Smith M, Zaccagnini M. Interdisciplinary rounds: impact on patients, families, and staff. Clin Nurse Spec. 2003;17(3):133-142. PubMed
6. O’Mahony S, Mazur E, Charney P, Wang Y, Fine J. Use of multidisciplinary rounds to simultaneously improve quality outcomes, enhance resident education, and shorten length of stay. J Gen Intern Med. 2007;22(8):1073-1079. PubMed
7. Reimer N, Herbener L. Round and round we go: rounding strategies to impact exemplary professional practice. Clin J Oncol Nurs. 2014;18(6):654-660. PubMed
8. Curley C, McEachern JE, Speroff T. A firm trial of interdisciplinary rounds on the inpatient medical wards: an intervention designed using continuous quality improvement. Med Care. 1998;36(8 Suppl):AS4-AS12. PubMed
9. Baggs JG, Ryan SA, Phelps CE, Richeson JF, Johnson JE. The association between interdisciplinary collaboration and patient outcomes in a medical intensive care unit. Heart Lung. 1992;21(1):18-24. PubMed
10. Lawal AK, Rotter T, Kinsman L, et al. Lean management in health care: definition, concepts, methodology and effects reported (systematic review protocol). Syst Rev. 2014;3:103. PubMed
11. Liker JK. The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer. New York, NY: McGraw-Hill Education; 2004.
12. Kane M, Chui K, Rimicci J, et al. Lean Manufacturing Improves Emergency Department Throughput and Patient Satisfaction. J Nurs Adm. 2015;45(9):429-434. PubMed
13. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. 2016. http://www.R-project.org/. Accessed November 7, 2017.
14. Bates D, Mächler M, Bolker B, Walker S. Fitting Linear Mixed-Effects Models Using lme4. J Stat Softw. 2015;67(1):1-48.
15. O’Leary KJ, Buck R, Fligiel HM, et al. Structured interdisciplinary rounds in a medical teaching unit: improving patient safety. Arch Intern Med. 2011;171(7):678-684. PubMed
© 2018 Society of Hospital Medicine
Magnitude of Potentially Inappropriate Thrombophilia Testing in the Inpatient Hospital Setting
Venous thromboembolism (VTE) affects more than 1 million patients and costs the US healthcare system more than $1.5 billion annually.1 Inherited and acquired thrombophilias have been perceived as important risk factors in assessing the risk of VTE recurrence and guiding the duration of anticoagulation.
Thrombophilias increase the risk of a first thrombotic event, but existing data have failed to demonstrate the usefulness of routine thrombophilia screening on subsequent management.2,3 Moreover, thrombophilia testing ordered in the context of an inpatient hospitalization is limited by confounding factors, especially during an acute thrombotic event or in the setting of concurrent anticoagulation.4
Recognizing the costliness of routine thrombophilia testing, the American Society of Hematology introduced its Choosing Wisely campaign in 2013 in an effort to reduce test ordering in the setting of provoked VTEs with a major transient risk factor.5 To define current practice behavior at our institution, we conducted a retrospective study to determine the magnitude and financial impact of potentially inappropriate thrombophilia testing in the inpatient setting.
METHODS
We performed a retrospective analysis of thrombophilia testing across all inpatient services at a large, quaternary-care academic institution over a 2-year period. Electronic medical record data containing all thrombophilia tests ordered on inpatients from June 2013 to June 2015 were obtained. This study was exempt from institutional review board approval.
We included any inpatient for whom thrombophilia testing occurred. Patients were excluded if testing was ordered in the absence of VTE or arterial thrombosis or if it was ordered as part of a work-up for another medical condition (see Supplementary Material).
Thrombophilia testing was defined as any of the following: inherited thrombophilias (Factor V Leiden or prothrombin 20210 gene mutations, antithrombin, or protein C or S activity levels) or acquired thrombophilias (lupus anticoagulant [Testing refers to the activated partial thromboplastin time lupus assay.], beta-2 glycoprotein 1 immunoglobulins M and G, anticardiolipin immunoglobulins M and G, dilute Russell’s viper venom time, or JAK2 V617F mutations).
Extracted data included patient age, sex, type of thrombophilia test ordered, ordering primary service, admission diagnosis, and objective confirmation of thrombotic events. The indication for test ordering was determined via medical record review of the patient’s corresponding hospitalization. Each test was evaluated in the context of the patient’s presenting history, hospital course, active medications, accompanying laboratory and radiographic studies, and consultant recommendations to arrive at a conclusion regarding both the test’s reason for ordering and whether its indication was “inappropriate,” “appropriate,” or “equivocal.” Cost data were obtained through the Centers for Medicare & Medicaid Services (CMS) Clinical Laboratory Fee Schedule for 2016 (see Supplementary Material).6
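Once each reviewed order carries a verdict, the cost attributable to inappropriate testing is a lookup-and-sum against the fee schedule. A sketch of that tally; the fee values and test keys below are invented for illustration, not the actual 2016 CMS amounts:

```python
# Hypothetical CMS-style fee schedule (illustrative dollar amounts only)
fee_schedule = {"factor_v_leiden": 70.0, "prothrombin_20210": 60.0, "protein_c": 40.0}

# Each reviewed order: (test name, verdict assigned on chart review)
orders = [
    ("factor_v_leiden", "inappropriate"),
    ("protein_c", "appropriate"),
    ("prothrombin_20210", "inappropriate"),
]

# Sum fees only for orders judged inappropriate
inappropriate_cost = sum(fee_schedule[test] for test, verdict in orders
                         if verdict == "inappropriate")
print(f"${inappropriate_cost:.2f}")  # cost attributable to inappropriate orders
```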
The criteria for defining test appropriateness were formulated by utilizing a combination of major society guidelines and literature review.5,7-10 The criteria placed emphasis upon the ordered tests’ clinical relevance and reliability and were subsequently reviewed by a senior hematologist with specific expertise in thrombosis (see Supplementary Material).
Two internal medicine resident physician data reviewers independently evaluated the ordered tests. To ensure consistency between reviewers, a sample of identical test orders was compared for concordance, and a Cohen’s kappa coefficient was calculated. For purposes of analysis, equivocal orders were included under the appropriate category, as this study focused on the quantification of potentially inappropriate ordering practices. Pearson chi-square testing, performed in Stata, was used to compare ordering practices between services.11
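Cohen’s kappa compares observed agreement between two raters with the agreement expected by chance. A minimal sketch with toy verdict labels (not the study’s data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels to the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement from each rater's marginal label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy example: two reviewers' verdicts on six orders
a = ["inapp", "approp", "inapp", "inapp", "approp", "inapp"]
b = ["inapp", "approp", "approp", "inapp", "approp", "inapp"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```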
RESULTS
In total, we reviewed 2179 individual tests, of which 362 (16.6%) were excluded. The remaining 1817 tests involved 299 patients across 26 primary specialties. Fifty-two (2.9% of orders) were ultimately deemed equivocal. The Table illustrates the overall proportion and cost of inappropriate test ordering as well as testing characteristics of the most commonly encountered thrombotic diagnoses. The Figure illustrates the proportion of potentially inappropriate test ordering with its associated cost by test type.
Orders for Factor V Leiden, prothrombin 20210, and protein C and S activity levels were most commonly deemed inappropriate due to the test results’ failure to alter clinical management (97.3%, 99.2%, 99.4% of their inappropriate orders, respectively). Antithrombin testing (59.4%) was deemed inappropriate most commonly in the setting of acute thrombosis. The lupus anticoagulant (82.8%) was inappropriately ordered most frequently in the setting of concurrent anticoagulation.
Ordering practices were then compared between nonteaching and teaching inpatient general medicine services. We observed a higher proportion of inappropriate tests ordered by the nonteaching services as compared to the teaching services (120 of 173 orders [69.4%] versus 125 of 320 [39.1%], respectively; P < 0.001).
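The comparison above is a Pearson chi-square test on a 2×2 table. A stdlib sketch using the counts reported here (120 of 173 versus 125 of 320); the statistic far exceeds 10.83, the df = 1 critical value at P = .001, consistent with the reported P < .001:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    chi2 = 0.0
    for i, obs in enumerate((a, b, c, d)):
        exp = rows[i // 2] * cols[i % 2] / n  # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# Nonteaching: 120 inappropriate of 173 orders; teaching: 125 of 320
stat = chi_square_2x2(120, 173 - 120, 125, 320 - 125)
print(round(stat, 1))  # prints 41.2, well above the 10.83 critical value
```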
The interreviewer kappa coefficient was 0.82 (P < 0.0001).
DISCUSSION
This retrospective analysis represents one of the largest examinations of inpatient thrombophilia testing practices to date. Our results illustrate the high prevalence and significant financial impact of potentially inappropriate thrombophilia testing conducted in the inpatient setting. The data confirm that, per our defined criteria, more than 90% of inherited thrombophilia testing was potentially inappropriate, while the majority of acquired thrombophilia testing was appropriate, with the exception of the lupus anticoagulant.
Even when appropriately ordered, studies suggest that positive thrombophilia screening results fail to impact outcomes in most patients with VTE. In an effort to evaluate positive results’ potential to provide a basis from which to extend the duration of anticoagulation, and therefore reduce the risk of a recurrent VTE, a case-control analysis was performed on a series of patients with a first-VTE event (Multiple Environmental and Genetic Assessment of risk factors for venous thrombosis [MEGA] study).3 In examining the odds ratio (OR) for recurrence between patients who did or did not undergo testing for Factor V Leiden, antithrombin, or protein C or S activity, the data failed to show an impact of testing on the risk of VTE recurrence (OR 1.2; confidence interval, 0.8-1.8). In fact, decision making has increasingly relied on patients’ clinical characteristics rather than thrombophilia test results to guide anticoagulation duration after incident VTEs. A 2017 study illustrated that when using a clinical decision rule (Clinical Decision Rule Validation Study to Predict Low Recurrent Risk in Patients With Unprovoked Venous Thromboembolism [REVERSE criteria]) in patients with a first, unprovoked VTE, routine thrombophilia screening added little to determining the need for prolonged anticoagulation.12 These findings support the limited clinical utility of current test ordering practices for the prediction and management of recurrent venous thrombosis.
Regarding the acquired thrombophilias, antiphospholipid antibody testing was predominantly ordered in a justified manner, which is consistent with the notion that test results could affect clinical management, such as anticoagulation duration or choice of anticoagulant.13 However, the validity of lupus anticoagulant testing was limited by the frequency of patients on concurrent anticoagulation.
Financially, the cumulative cost associated with inappropriate ordering was substantial, regardless of the thrombotic event in question. Moreover, our calculated costs are derived from CMS reimbursement rates and likely underestimate the true financial impact of errant testing given that commercial laboratories frequently charge at rates several-fold higher. On a national scale, prior analyses have suggested that the annual cost of thrombophilia testing, based on typical commercial rates, ranges from $300 million to $672 million.14
Prior studies have similarly examined the frequency of inappropriate thrombophilia testing and methods to reduce it. A 2014 study demonstrated initially high rates of inappropriate inherited thrombophilia testing and then showed marked reductions in testing and cost savings across multiple specialties following the introduction of a flowchart on a preprinted order form.15 Our findings provide motivation to perform similar endeavors.
The proportional difference of inappropriate ordering observed between nonteaching- and teaching-medicine services indicates a potential role for educational interventions. We recently completed a series of lectures on high-value thrombophilia ordering for residents and are actively analyzing its impact on subsequent ordering practices. We are also piloting an electronic best practice advisory for thrombophilia test ordering. Though the advisory may be overridden, providers are asked to provide justification for doing so on a voluntary basis. We plan to evaluate its effect on our findings reported in this study.
We acknowledge that our exclusion criteria resulted in the omission of testing across a spectrum of nonthrombotic clinical conditions, raising the question of selection bias. Because there are no established guidelines to determine the appropriateness of testing in these scenarios, we chose to limit the analysis of errant ordering to the context of thrombotic events. Other limitations of this study include the analysis of equivocal orders as appropriate. However, because equivocal ordering represented less than 3% of all analyzed orders, including these as inappropriate would not have significantly altered our findings.
CONCLUSIONS
A review of thrombophilia testing practices at our institution demonstrated that inappropriate testing in the inpatient setting is a frequent phenomenon associated with a significant financial impact. This effect was more pronounced in inherited versus acquired thrombophilia testing. Testing was frequently confounded and often failed to impact patients’ short- or long-term clinical management, regardless of the result.
These findings serve as a strong impetus to reduce the burden of routine thrombophilia testing during hospital admissions. Our data demonstrate a need for institution-wide changes such as implementing best practice advisories, introducing ordering restrictions, and conducting educational interventions in order to reduce unnecessary expenditures and improve patient care.
Disclosure
The authors have nothing to disclose.
1. Dobesh PP. Economic burden of venous thromboembolism in hospitalized patients. Pharmacotherapy. 2009;29(8):943-953. PubMed
2. Cohn DM, Vansenne F, de Borgie CA, Middeldorp S. Thrombophilia testing for prevention of recurrent venous thromboembolism. Cochrane Database Syst Rev. 2012;12:Cd007069. PubMed
3. Coppens M, Reijnders JH, Middeldorp S, Doggen CJ, Rosendaal FR. Testing for inherited thrombophilia does not reduce the recurrence of venous thrombosis. J Thromb Haemost. 2008;6(9):1474-1477. PubMed
4. Somma J, Sussman, II, Rand JH. An evaluation of thrombophilia screening in an urban tertiary care medical center: A “real world” experience. Am J Clin Pathol. 2006;126(1):120-127. PubMed
5. Hicks LK, Bering H, Carson KR, et al. The ASH Choosing Wisely® campaign: five hematologic tests and treatments to question. Blood. 2013;122(24):3879-3883. PubMed
6. Centers for Medicare & Medicaid Services: Clinical Laboratory Fee Schedule Files. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/ClinicalLabFeeSched/Clinical-Laboratory-Fee-Schedule-Files.html. Accessed October 2016.
7. Stevens SM, Woller SC, Bauer KA, et al. Guidance for the evaluation and treatment of hereditary and acquired thrombophilia. J Thromb Thrombolysis. 2016;41(1):154-164. PubMed
8. Moll S. Thrombophilia: clinical-practical aspects. J Thromb Thrombolysis. 2015;39(3):367-378. PubMed
9. Kearon C, Akl EA, Ornelas J, et al. Antithrombotic therapy for vte disease: Chest guideline and expert panel report. Chest. 2016;149(2):315-352. PubMed
10. Baglin T, Gray E, Greaves M, et al. Clinical guidelines for testing for heritable thrombophilia. Br J Haematol. 2010;149(2):209-220. PubMed
11. Stata Statistical Software [computer program]. Version Release 14. College Station, TX: StataCorp LP; 2015.
12. Garcia-Horton A, Kovacs MJ, Abdulrehman J, Taylor JE, Sharma S, Lazo-Langner A. Impact of thrombophilia screening on venous thromboembolism management practices. Thromb Res. 2017;149:76-80. PubMed
13. Schulman S, Svenungsson E, Granqvist S. Anticardiolipin antibodies predict early recurrence of thromboembolism and death among patients with venous thromboembolism following anticoagulant therapy. Duration of Anticoagulation Study Group. Am J Med. 1998;104(4):332-338. PubMed
14. Petrilli CM, Heidemann L, Mack M, Durance P, Chopra V. Inpatient inherited thrombophilia testing. J Hosp Med. 2016;11(11):801-804. PubMed
15. Smith TW, Pi D, Hudoba M, Lee AY. Reducing inpatient heritable thrombophilia testing using a clinical decision-making tool. J Clin Pathol. 2014;67(4):345-349. PubMed
Venous thromboembolism (VTE) affects more than 1 million patients and costs the US healthcare system more than $1.5 billion annually.1 Inherited and acquired thrombophilias have been perceived as important risk factors in assessing the risk of VTE recurrence and guiding the duration of anticoagulation.
Thrombophilias increase the risk of a first thrombotic event, but existing data have failed to demonstrate the usefulness of routine thrombophilia screening on subsequent management.2,3 Moreover, thrombophilia testing ordered in the context of an inpatient hospitalization is limited by confounding factors, especially during an acute thrombotic event or in the setting of concurrent anticoagulation.4
Recognizing the costliness of routine thrombophilia testing, The American Society of Hematology introduced its Choosing Wisely campaign in 2013 in an effort to reduce test ordering in the setting of provoked VTEs with a major transient risk factor.5 In order to define current practice behavior at our institution, we conducted a retrospective study to determine the magnitude and financial impact of potentially inappropriate thrombophilia testing in the inpatient setting.
METHODS
We performed a retrospective analysis of thrombophilia testing across all inpatient services at a large, quaternary-care academic institution over a 2-year period. Electronic medical record data containing all thrombophilia tests ordered on inpatients from June 2013 to June 2015 were obtained. This study was exempt from institutional review board approval.
Inclusion criteria included any inpatient for which thrombophilia testing occurred. Patients were excluded if testing was ordered in the absence of VTE or arterial thrombosis or if it was ordered as part of a work-up for another medical condition (see Supplementary Material).
Thrombophilia testing was defined as any of the following: inherited thrombophilias (Factor V Leiden or prothrombin 20210 gene mutations, antithrombin, or protein C or S activity levels) or acquired thrombophilias (lupus anticoagulant [Testing refers to the activated partial thromboplastin time lupus assay.], beta-2 glycoprotein 1 immunoglobulins M and G, anticardiolipin immunoglobulins M and G, dilute Russell’s viper venom time, or JAK2 V617F mutations).
Extracted data included patient age, sex, type of thrombophilia test ordered, ordering primary service, admission diagnosis, and objective confirmation of thrombotic events. The indication for test ordering was determined via medical record review of the patient’s corresponding hospitalization. Each test was evaluated in the context of the patient’s presenting history, hospital course, active medications, accompanying laboratory and radiographic studies, and consultant recommendations to arrive at a conclusion regarding both the test’s reason for ordering and whether its indication was “inappropriate,” “appropriate,” or “equivocal.” Cost data were obtained through the Centers for Medicare & Medicaid Services (CMS) Clinical Laboratory Fee Schedule for 2016 (see Supplementary Material).6
The criteria for defining test appropriateness were formulated by utilizing a combination of major society guidelines and literature review.5,7-10 The criteria placed emphasis upon the ordered tests’ clinical relevance and reliability and were subsequently reviewed by a senior hematologist with specific expertise in thrombosis (see Supplementary Material).
Two internal medicine resident physician data reviewers independently evaluated the ordered tests. To ensure consistency between reviewers, a sample of identical test orders was compared for concordance, and a Cohen’s kappa coefficient was calculated. For purposes of analysis, equivocal orders were included under the appropriate category, as this study focused on the quantification of potentially inappropriate ordering practices. Pearson chi-square testing, performed in Stata,11 was used to compare ordering practices between services.
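To make the interreviewer agreement statistic concrete, the following minimal Python sketch computes Cohen’s kappa for two reviewers; the labels and sample below are illustrative only, not study data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement: sum over categories of the product of
    # each rater's marginal probability for that category.
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum((pa[c] / n) * (pb[c] / n) for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Toy sample of 10 labeled orders (illustrative, not the study's data)
a = ["appropriate"] * 6 + ["inappropriate"] * 4
b = ["appropriate"] * 5 + ["inappropriate"] * 5
print(round(cohens_kappa(a, b), 2))
```

A kappa above 0.8 (the study reports 0.82) is conventionally interpreted as strong agreement beyond chance.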
RESULTS
In total, we reviewed 2179 individual tests, of which 362 (16.6%) were excluded. The remaining 1817 tests involved 299 patients across 26 primary specialties. Fifty-two (2.9% of orders) were ultimately deemed equivocal. The Table illustrates the overall proportion and cost of inappropriate test ordering as well as testing characteristics of the most commonly encountered thrombotic diagnoses. The Figure illustrates the proportion of potentially inappropriate test ordering with its associated cost by test type.
Orders for Factor V Leiden, prothrombin 20210, and protein C and S activity levels were most commonly deemed inappropriate because the test results failed to alter clinical management (97.3%, 99.2%, and 99.4% of their inappropriate orders, respectively). Antithrombin testing (59.4%) was most commonly deemed inappropriate in the setting of acute thrombosis. Lupus anticoagulant testing (82.8%) was most frequently ordered inappropriately in the setting of concurrent anticoagulation.
Ordering practices were then compared between nonteaching and teaching inpatient general medicine services. We observed a higher proportion of inappropriate tests ordered by the nonteaching services as compared to the teaching services (120 of 173 orders [69.4%] versus 125 of 320 [39.1%], respectively; P < 0.001).
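The reported comparison can be checked directly from the published counts: a Pearson chi-square on the 2×2 table of inappropriate versus appropriate orders by service type. This sketch recomputes the statistic in plain Python, using no inputs beyond the counts quoted above.

```python
# 2x2 table from the reported counts: inappropriate vs. appropriate
# orders on nonteaching (120 of 173) and teaching (125 of 320) services.
table = [[120, 173 - 120],
         [125, 320 - 125]]

row_totals = [sum(r) for r in table]
col_totals = [sum(c) for c in zip(*table)]
n = sum(row_totals)

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected,
# where expected = (row total x column total) / grand total.
chi2 = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2)
    for j in range(2)
)

# With 1 degree of freedom, the critical value at alpha = 0.001 is 10.83;
# a statistic of roughly 41 far exceeds it, consistent with the reported
# P < 0.001.
print(round(chi2, 1))
```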
The interreviewer kappa coefficient was 0.82 (P < 0.0001).
DISCUSSION
This retrospective analysis represents one of the largest examinations of inpatient thrombophilia testing practices to date. Our results illustrate the high prevalence and significant financial impact of potentially inappropriate thrombophilia testing conducted in the inpatient setting. The data confirm that, per our defined criteria, more than 90% of inherited thrombophilia testing was potentially inappropriate while the majority of acquired thrombophilia testing was appropriate, with the exception of the lupus anticoagulant.
Studies suggest that, even when testing is appropriately ordered, positive thrombophilia screening results fail to affect outcomes in most patients with VTE. To evaluate whether positive results could provide a basis for extending the duration of anticoagulation, and thereby reduce the risk of recurrent VTE, a case-control analysis was performed on a series of patients with a first VTE event (Multiple Environmental and Genetic Assessment of risk factors for venous thrombosis [MEGA] study).3 In examining the odds ratio (OR) for recurrence between patients who did or did not undergo testing for Factor V Leiden, antithrombin, or protein C or S activity, the data failed to show an impact of testing on the risk of VTE recurrence (OR 1.2; confidence interval, 0.8-1.8). In fact, decision making has increasingly relied on patients’ clinical characteristics rather than thrombophilia test results to guide anticoagulation duration after incident VTEs. A 2017 study illustrated that when a clinical decision rule (Clinical Decision Rule Validation Study to Predict Low Recurrent Risk in Patients With Unprovoked Venous Thromboembolism [REVERSE criteria]) was used in patients with a first, unprovoked VTE, routine thrombophilia screening added little to determining the need for prolonged anticoagulation.12 These findings support the limited clinical utility of current test ordering practices for the prediction and management of recurrent venous thrombosis.
Regarding the acquired thrombophilias, antiphospholipid antibody testing was predominantly ordered in a justified manner, which is consistent with the notion that test results could affect clinical management, such as anticoagulation duration or choice of anticoagulant.13 However, the validity of lupus anticoagulant testing was limited by the frequency of patients on concurrent anticoagulation.
Financially, the cumulative cost associated with inappropriate ordering was substantial, regardless of the thrombotic event in question. Moreover, our calculated costs are derived from CMS reimbursement rates and likely underestimate the true financial impact of errant testing given that commercial laboratories frequently charge at rates several-fold higher. On a national scale, prior analyses have suggested that the annual cost of thrombophilia testing, based on typical commercial rates, ranges from $300 million to $672 million.14
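The cost roll-up described above is, in essence, a sum of order counts multiplied by fee-schedule reimbursements. The sketch below illustrates that arithmetic with placeholder values; the test names, counts, fees, and commercial multiplier are all hypothetical, not figures from the study or the actual 2016 CMS Clinical Laboratory Fee Schedule.

```python
# Hypothetical inputs for illustration only (the study used the actual
# 2016 CMS Clinical Laboratory Fee Schedule and its own order counts).
inappropriate_orders = {"factor_v_leiden": 110, "prothrombin_20210": 120}
cms_fee_usd = {"factor_v_leiden": 65.00, "prothrombin_20210": 58.00}

# Cumulative cost = sum over test types of (count x unit reimbursement).
cms_cost = sum(n * cms_fee_usd[t] for t, n in inappropriate_orders.items())

# Commercial laboratories often charge several-fold more than CMS rates,
# so the CMS-based figure is a lower bound on the true financial impact.
commercial_multiplier = 3  # assumed, for illustration
commercial_cost = cms_cost * commercial_multiplier

print(f"CMS-based: ${cms_cost:,.2f}; commercial estimate: ${commercial_cost:,.2f}")
```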
Prior studies have similarly examined the frequency of inappropriate thrombophilia testing and methods to reduce it. A 2014 study demonstrated initially high rates of inappropriate inherited thrombophilia testing and then showed marked reductions in testing, with cost savings across multiple specialties, following the introduction of a flowchart on a preprinted order form.15 Our findings provide motivation for similar efforts.
The proportional difference in inappropriate ordering observed between nonteaching and teaching medicine services indicates a potential role for educational interventions. We recently completed a series of lectures on high-value thrombophilia ordering for residents and are actively analyzing their impact on subsequent ordering practices. We are also piloting an electronic best practice advisory for thrombophilia test ordering. Though the advisory may be overridden, providers are asked to voluntarily provide justification for doing so. We plan to evaluate its effect on the ordering practices reported in this study.
We acknowledge that our exclusion criteria resulted in the omission of testing across a spectrum of nonthrombotic clinical conditions, raising the question of selection bias. Because there are no established guidelines to determine the appropriateness of testing in these scenarios, we chose to limit the analysis of errant ordering to the context of thrombotic events. Other limitations of this study include the analysis of equivocal orders as appropriate. However, because equivocal ordering represented less than 3% of all analyzed orders, including these as inappropriate would not have significantly altered our findings.
CONCLUSIONS
A review of thrombophilia testing practices at our institution demonstrated that inappropriate testing in the inpatient setting is a frequent phenomenon associated with a significant financial impact. This effect was more pronounced in inherited versus acquired thrombophilia testing. Testing was frequently confounded and often failed to impact patients’ short- or long-term clinical management, regardless of the result.
These findings serve as a strong impetus to reduce the burden of routine thrombophilia testing during hospital admissions. Our data demonstrate a need for institution-wide changes such as implementing best practice advisories, introducing ordering restrictions, and conducting educational interventions in order to reduce unnecessary expenditures and improve patient care.
Disclosure
The authors have nothing to disclose.
1. Dobesh PP. Economic burden of venous thromboembolism in hospitalized patients. Pharmacotherapy. 2009;29(8):943-953. PubMed
2. Cohn DM, Vansenne F, de Borgie CA, Middeldorp S. Thrombophilia testing for prevention of recurrent venous thromboembolism. Cochrane Database Syst Rev. 2012;12:Cd007069. PubMed
3. Coppens M, Reijnders JH, Middeldorp S, Doggen CJ, Rosendaal FR. Testing for inherited thrombophilia does not reduce the recurrence of venous thrombosis. J Thromb Haemost. 2008;6(9):1474-1477. PubMed
4. Somma J, Sussman II, Rand JH. An evaluation of thrombophilia screening in an urban tertiary care medical center: A “real world” experience. Am J Clin Pathol. 2006;126(1):120-127. PubMed
5. Hicks LK, Bering H, Carson KR, et al. The ASH Choosing Wisely® campaign: five hematologic tests and treatments to question. Blood. 2013;122(24):3879-3883. PubMed
6. Centers for Medicare & Medicaid Services: Clinical Laboratory Fee Schedule Files. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/ClinicalLabFeeSched/Clinical-Laboratory-Fee-Schedule-Files.html. Accessed October 2016
7. Stevens SM, Woller SC, Bauer KA, et al. Guidance for the evaluation and treatment of hereditary and acquired thrombophilia. J Thromb Thrombolysis. 2016;41(1):154-164. PubMed
8. Moll S. Thrombophilia: clinical-practical aspects. J Thromb Thrombolysis. 2015;39(3):367-378. PubMed
9. Kearon C, Akl EA, Ornelas J, et al. Antithrombotic therapy for VTE disease: CHEST guideline and expert panel report. Chest. 2016;149(2):315-352. PubMed
10. Baglin T, Gray E, Greaves M, et al. Clinical guidelines for testing for heritable thrombophilia. Br J Haematol. 2010;149(2):209-220. PubMed
11. Stata Statistical Software [computer program]. Version Release 14. College Station, TX: StataCorp LP; 2015.
12. Garcia-Horton A, Kovacs MJ, Abdulrehman J, Taylor JE, Sharma S, Lazo-Langner A. Impact of thrombophilia screening on venous thromboembolism management practices. Thromb Res. 2017;149:76-80. PubMed
13. Schulman S, Svenungsson E, Granqvist S. Anticardiolipin antibodies predict early recurrence of thromboembolism and death among patients with venous thromboembolism following anticoagulant therapy. Duration of Anticoagulation Study Group. Am J Med. 1998;104(4):332-338. PubMed
14. Petrilli CM, Heidemann L, Mack M, Durance P, Chopra V. Inpatient inherited thrombophilia testing. J Hosp Med. 2016;11(11):801-804. PubMed
15. Smith TW, Pi D, Hudoba M, Lee AY. Reducing inpatient heritable thrombophilia testing using a clinical decision-making tool. J Clin Pathol. 2014;67(4):345-349. PubMed
© 2017 Society of Hospital Medicine
Telemetry Use for LOS and Cost Reduction
Inpatient hospital services are a major component of total US civilian noninstitutionalized healthcare expenses, accounting for 29.3% of spending in 2009,[1] when the average cost per stay was $9700.[2] Telemetry monitoring, a widely used resource for the identification of life‐threatening arrhythmias, contributes to these costs. In 1998, Sivaram et al. estimated the cost per patient at $683; in 2010, Ivonye et al. reported that the cost difference between a telemetry bed and a nonmonitored bed in their inner‐city public teaching facility had reached $800.[3, 4]
In 1991, the American College of Cardiology published guidelines for telemetry use, which were later revised by the American Heart Association in 2004.[5, 6] Notably, the guidelines are based on expert opinion and on research data in electrocardiography.[7] The guidelines divide patients into 3 classes based on clinical condition: recommending telemetry monitoring for almost all class I patients, stating possible benefit in class II patients, and discouraging cardiac monitoring for low‐risk class III patients.[5, 6] The Choosing Wisely campaign, an initiative of the American Board of Internal Medicine and the Society of Hospital Medicine, highlights telemetry monitoring as 1 of the top 5 interventions that physicians and patients should question when determining tests and procedures.[8] Choosing Wisely suggests using a protocol to govern continuation of telemetry outside of the intensive care unit (ICU), as inappropriate monitoring increases care costs and may result in patient harm.[8] The Joint Commission's 2014 National Patient Safety Goals note that numerous alarm signals, and the resulting noise and displayed information, tend to desensitize staff and cause them to miss or ignore alarm signals or even disable them.[9]
Few studies have examined implementation methods for improved telemetry bed utilization. One study evaluated the impact of a multispecialty telemetry policy enforced by an outside cardiologist and nurse team, noting improved cardiac monitoring bed utilization and decreased academic hospital closure, which had previously resulted in an inability to accept new patients or in procedure cancellations.[10] Another study provided an orientation handout discussed by the chief resident, along with telemetry indication reviews during multidisciplinary rounds 3 times a week.[11]
Our study is one of the first to demonstrate a model for a hospitalist‐led approach to guide appropriate telemetry use. We investigated the impact of a multipronged approach to guide telemetry usage: (1) a hospitalist‐led, daily review of bed utilization during attending rounds, (2) a hospitalist attending‐driven, trainee‐focused education module on telemetry utilization, (3) quarterly feedback on telemetry bed utilization rates, and (4) financial incentives. We analyzed pre‐ and post‐evaluation results from the education module to measure impact on knowledge, skills, and attitudes. Additionally, we evaluated the effect of the intervention on length of stay (LOS) and bed utilization costs, while monitoring case mix index (CMI) and overall mortality.
METHODS
Setting
This study took place at Stanford Hospital and Clinics, an academic teaching center in Stanford, California. Stanford Hospital is a 444‐bed, urban medical center with 114 telemetry/intermediate ICU beds and 66 ICU beds. The 264 medical‐surgical beds lack telemetry monitoring, which is available only in the intermediate and full ICUs. All patients on telemetry units receive both cardiac monitoring and increased nursing ratios. Transfer orders are placed in the electronic medical record to shift patients between care levels. Bed control attempts to transfer patients as soon as an open bed at the appropriate care level exists.
The study included all 5 housestaff inpatient general internal medicine wards teams (which excludes cardiology, pulmonary hypertension, hematology, oncology, and post‐transplant patients). Hospitalists and nonhospitalists attend on the wards for 1‐ to 2‐week blocks. Teaching teams are staffed by 1 to 2 medical students, 2 interns, 1 resident, and 1 attending. The university institutional review board notice of determination waived review for this study because it was classified as quality improvement.
Participants
Ten full‐ and part‐time hospitalist physicians participated in the standardized telemetry teaching. Fifty‐six of the approximately 80 medical students and housestaff on hospitalists' teams completed the educational evaluation. Both hospitalist and nonhospitalist teams participated in daily multidisciplinary rounds, focusing on barriers to discharge including telemetry use. Twelve nonhospitalists served on the wards during the intervention period. Hospitalists covered 72% of the internal medicine wards during the intervention period.
Study Design
We investigated the impact of a multipronged approach to guide telemetry usage from January 2013 to August 2013 (intervention period).
Hospitalist‐Led Daily Review of Bed Utilization
Hospitalists were encouraged to discuss the need for telemetry on daily attending rounds and to review indications for telemetry while on service. Prior to starting a ward block, attendings were emailed the teaching module with a reminder to discuss the need for telemetry on attending rounds. Reminders to discuss telemetry utilization were also provided during every‐other‐week hospitalist meetings. Compliance with daily discussion was not tracked.
Hospitalist‐Driven, Trainee‐Focused, Education Module on Telemetry Utilization
The educational module was taught during teaching sessions only by the hospitalists. Trainees on nonhospitalist teams did not receive dedicated teaching about telemetry usage. The module was given to learners only once. The module was a 10‐slide, Microsoft PowerPoint (Microsoft Corp., Redmond, WA) presentation that reviewed the history of telemetry, the American College of Cardiology and the American Heart Association guidelines, the cost difference between telemetry and nonmonitored beds, and the perceived barriers to discontinuation. The presentation was accompanied by a pre‐ and post‐evaluation to elicit knowledge, skills, and attitudes of telemetry use (see Supporting Information, Appendix A, in the online version of this article). The pre‐ and post‐evaluations were created through consensus with a multidisciplinary, expert panel after reviewing the evidence‐based literature.
Quarterly Feedback on Telemetry Bed Utilization Rates
Hospital bed‐use and CMI data were obtained from the Stanford finance department for the intervention period and for the baseline period, the year prior to the study (January 1, 2012 to December 31, 2012). Hospital bed‐use data included the number of days patients were on telemetry units versus medical‐surgical (nontelemetry) units, differentiated by hospitalists and nonhospitalists. Cost savings were calculated by the Stanford finance department, which used Stanford‐specific internal cost accounting data to determine the impact of the intervention. These data were reviewed at hospitalist meetings on a quarterly basis. We also obtained the University HealthSystem Consortium mortality index (observed to expected) for the general internal medicine service during the baseline and intervention periods.
To measure sustainment of telemetry reduction in the postintervention period, we measured telemetry LOS from September 2014 to March 2015 (extension period).
Financial Incentives
Hospitalists were provided a $2000 bonus at the end of fiscal year 2013 if the group showed a decrease in telemetry bed use in comparison to the baseline period.
Statistical Analysis of Clinical Outcome Measures
Continuous outcomes were tested using 2‐tailed t tests. Comparison of continuous outcome included differences in telemetry and nontelemetry LOS and CMI. Pairwise comparisons were made for various time periods. A P value of <0.05 was considered statistically significant. Statistical analyses were performed using Stata 12.0 software (StataCorp, College Station, TX).
RESULTS
Clinical and Value Outcomes
Baseline (January 2012December 2012) Versus Intervention Period (January 2013August 2013)
LOS for telemetry beds was significantly reduced over the intervention period (2.75 days vs 2.13 days, P=0.005) for hospitalists. Notably, there was no significant difference in mean LOS between baseline and intervention periods for nontelemetry beds (2.84 days vs 2.72 days, P=0.32) for hospitalists. In comparison, for nonhospitalists, there was no difference in mean LOS for telemetry beds between baseline and intervention periods (2.75 days vs 2.46 days, P=0.33) and nontelemetry beds (2.64 days vs 2.89 days, P=0.26) (Table 1).
Baseline Period | Intervention Period | P Value | Extension Period | P Value | |
---|---|---|---|---|---|
| |||||
Length of stay | |||||
Hospitalists | |||||
Telemetry beds | 2.75 | 2.13 | 0.005 | 1.93 | 0.09 |
Nontelemetry beds | 2.84 | 2.72 | 0.324 | 2.44 | 0.21 |
Nonhospitalists | |||||
Telemetry beds | 2.75 | 2.46 | 0.331 | 2.22 | 0.43 |
Nontelemetry beds | 2.64 | 2.89 | 0.261 | 2.26 | 0.05 |
Case mix index | |||||
Hospitalists | 1.44 | 1.45 | 0.68 | 1.40 | 0.21 |
Nonhospitalists | 1.46 | 1.40 | 0.53 | 1.53 | 0.18 |
Costs of hospital stay were also reduced in the multipronged, hospitalist‐driven intervention group. Expenditures for telemetry beds were reduced by 22.5% over the intervention period for hospitalists (Table 2).
Baseline to Intervention Period | Intervention to Extension Period | |
---|---|---|
| ||
Hospitalists | ||
Telemetry beds | 22.55% | 9.55% |
Nontelemetry beds | 4.23% | 10.14% |
Nonhospitalists | ||
Telemetry beds | 10.55% | 9.89% |
Nontelemetry beds | 9.47% | 21.84% |
The mean CMI of the patient cohort managed by the hospitalists in the baseline and intervention periods was not significantly different (1.44 vs 1.45, P=0.68). The mean CMI of the patients managed by the nonhospitalists in the baseline and intervention periods was also not significantly different (1.46 vs 1.40, P=0.53) (Table 1). Mortality index during the baseline and intervention periods was not significantly different (0.770.22 vs 0.660.23, P=0.54), as during the intervention and extension periods (0.660.23 vs 0.650.15, P=0.95).
Intervention Period (January 2013August 2013) Versus Extension Period (September 2014‐March 2015)
The decreased telemetry LOS for hospitalists was sustained from the intervention period to the extension period, from 2.13 to 1.93 (P=0.09). There was no significant change in the nontelemetry LOS in the intervention period compared to the extension period (2.72 vs 2.44, P=0.21). There was no change in the telemetry LOS for nonhospitalists from the intervention period to the extension period (2.46 vs 2.22, P=0.43).
The mean CMI in the hospitalist group was not significantly different in the intervention period compared to the extension period (1.45 to 1.40, P=0.21). The mean CMI in the nonhospitalist group did not change from the intervention period to the extension period (1.40 vs 1.53, P=0.18) (Table 1).
Education Outcomes
Out of the 56 participants completing the education module and survey, 28.6% were medical students, 53.6% were interns, 12.5% were second‐year residents, and 5.4% were third‐year residents. Several findings were seen at baseline via pretest. In evaluating patterns of current telemetry use, 32.2% of participants reported evaluating the necessity of telemetry for patients on admission only, 26.3% during transitions of care, 5.1% after discharge plans were cemented, 33.1% on a daily basis, and 3.4% rarely. When asked which member of the care team was most likely to encourage use of appropriate telemetry, 20.8% identified another resident, 13.9% nursing, 37.5% attending physician, 20.8% self, 4.2% the team as a whole, and 2.8% as not any.
Figure 1 shows premodule results regarding the trainees perceived percentage of patient encounters during which a participant's team discussed their patient's need for telemetry.
In assessing perception of current telemetry utilization, 1.8% of participants thought 0% to 10% of patients were currently on telemetry, 19.6% thought 11% to 20%, 42.9% thought 21% to 31%, 30.4% thought 31% to 40%, and 3.6% thought 41% to 50%.
Two areas were assessed at both baseline and after the intervention: knowledge of indications of telemetry use and cost related to telemetry use. We saw increased awareness of cost‐saving actions. To assess current knowledge of the indications of proper telemetry use according to American Heart Association guidelines, participants were presented with a list of 5 patients with different clinical indications for telemetry use and asked which patient required telemetry the most. Of the participants, 54.5% identified the correct answer in the pretest and 61.8% identified the correct answer in the post‐test. To assess knowledge of the costs of telemetry relative to other patient care, participants were presented with a patient case and asked to identify the most and least cost‐saving actions to safely care for the patient. When asked to identify the most cost‐saving action, 20.3% identified the correct answer in the pretest and 61.0% identified the correct answer in the post‐test. Of those who answered incorrectly in the pretest, 51.1% answered correctly in the post‐test (P=0.002). When asked to identify the least cost‐saving action, 23.7% identified the correct answer in the pretest and 50.9% identified the correct answer in the posttest. Of those who answered incorrectly in the pretest, 60.0% answered correctly in the post‐test (P=0.003).
In the post‐test, when asked about the importance of appropriate telemetry usage in providing cost‐conscious care and assuring appropriate hospital resource management, 76.8% of participants found the need very important, 21.4% somewhat important, and 1.8% as not applicable. The most commonly perceived barriers impeding discontinuation of telemetry, as reported by participants via post‐test, were nursing desires and time. Figure 2 shows all perceived barriers.
DISCUSSION
Our study is one of the first to our knowledge to demonstrate reductions in telemetry LOS by a hospitalist intervention for telemetry utilization. Others[10, 11] have studied the impact of an orientation handout by chief residents or a multispecialty telemetry policy with enforcement by an outside cardiologist and nurse team. Dressler et al. later sustained a 70% reduction in telemetry use without adversely affecting patient safety, as assessed through numbers of rapid response activations, codes, and deaths, through integrating the AHA guidelines into their electronic ordering system.[12] However, our study has the advantage of the primary team, who knows the patient and clinical scenario best, driving the change during attending rounds. In an era where cost consciousness intersects the practice of medicine, any intervention in patient care that demonstrates cost savings without an adverse impact on patient care and resource utilization must be emphasized. This is particularly important in academic institutions, where residents and medical students are learning to integrate the principles of patient safety and quality improvement into their clinical practice.[13] We actually showed sustained telemetry LOS reductions into the extension period after our intervention. We believe this may be due to telemetry triage being integrated into our attending and resident rounding practices. Future work should include integration of telemetry triage into clinical decision support in the electronic medical record and multidisciplinary rounds to disseminate telemetry triage hospital‐wide in both the academic and community settings.
Our study also revealed that nearly half of participants were not aware of the criteria for appropriate utilization of telemetry before our intervention; in the preintervention period, there were many anecdotal and objective findings of inappropriate utilization of telemetry as well as prolonged continuation beyond the clinical needs in both the hospitalist and nonhospitalist group. For the hospitalist group (ie, the group receiving guideline‐based education on appropriate indications for telemetry utilization), there was an assessment of both appropriate usage and timely discontinuation of telemetry in the postintervention period, which we attribute in large part to adherence to the education provided to this group.
We were able to show increased knowledge of cost‐saving actions among trainees with our educational module. We believe it is imperative to educate our providers (physicians, nurses, case managers, and students within these disciplines) on the appropriate indications for telemetry use, not only to help with cost savings and resource availability (ie, allowing telemetry beds to be available for the patients who need them most), but also to instill consistent expectations among our patients.
Additionally, we feel it is important to consider the impact of inappropriate telemetry use from the patient's perspective: it is physically restrictive and inconvenient, its alarms are disruptive, it can be a barrier to other treatments such as physical therapy, it may delay imaging studies, a nurse may be required to accompany the patient off the unit, and it adds costs to the medical bill.
We believe our success is due to several strategies. First, at the start of the fiscal year when quality improvement metrics are established, this particular metric (improving the appropriate utilization and timely discontinuation of telemetry) was deemed important by all hospitalists, engendering group buy‐in prior to the intervention. Our hospitalists received a detailed and interactive tutorial session in person at the beginning of the study. This tutorial provided the hospitalists with a comprehensive understanding of the appropriate (and inappropriate) indications for telemetry monitoring, hence facilitating guideline‐directed utilization. Email reminders and the tutorial tool were provided each time a hospitalist attended on the wards, and hospitalists received a small financial incentive to comply with appropriate telemetry utilization.
Our study has several strengths. First, the time frame of our study was long enough (8 months) to allow consistent trends to emerge and to optimize exposure of housestaff and medical students to this quality‐improvement initiative. Second, our cost savings came from 2 factors: direct reduction of inappropriate telemetry use and reduction in length of stay, highlighting the dual impact of appropriate telemetry utilization on cost. The overall reductions in telemetry utilization for the intervention group resulted both from reductions in initial placement on telemetry for patients who did not meet criteria for such monitoring and from timely discontinuation of telemetry during the patient's hospitalization. Third, our study demonstrates that physicians can be effective in driving appropriate telemetry usage by participating in the clinical decision making regarding necessity and by educating providers, trainees/students, and patients on appropriate indications. Finally, we show sustainment of our intervention in the extension period, suggesting integration of telemetry triage into rounding practice.
Our study has limitations as well. First, our sample size is relatively small and drawn from a single academic center. Second, due to complexities in our faculty scheduling, we were unable to completely randomize patients to a hospitalist versus nonhospitalist team. However, we believe that despite the inability to randomize, our study does show the benefit of a hospitalist attending in reducing telemetry LOS, given that there was no change in nonhospitalist telemetry LOS despite all of the other hospital‐wide interventions (multidisciplinary rounds, similar housestaff). Third, our study was limited in that the CMI was used as a proxy for patient complexity and the mortality index as the overall marker of safety. Further studies should monitor the frequency and outcomes of arrhythmic events in patients transferred from telemetry monitoring to medical–surgical beds. Finally, as the intervention was multipronged, we are unable to determine which component led to the reductions in telemetry utilization. Each component, however, remains easily transferable to outside institutions. We demonstrated both a reduction in initiation of telemetry and timely discontinuation; however, due to the complexity of capturing this accurately, we were unable to quantify these individual outcomes numerically.
Additionally, there were approximately 10 nonhospitalist attendings who also staffed the wards during the intervention time period of our study; these attendings did not undergo the telemetry tutorial/orientation. This difference, along with the Hawthorne effect for the hospitalist attendings, also likely contributed to the difference in outcomes between the 2 attending cohorts in the intervention period.
CONCLUSIONS
Our results demonstrate that a multipronged hospitalist‐driven intervention to improve appropriate use of telemetry reduces telemetry LOS and cost. Hence, we believe that targeted, education‐driven interventions with monitoring of progress can have demonstrable impacts on changing practice. Physicians will need to make trade‐offs in clinical practice to balance efficient resource utilization with the patient's evolving condition in the inpatient setting, the complexities of clinical workflow, and the patient's expectations.[14] Appropriate telemetry utilization is a prime example of what needs to be done well in the future for high‐value care.
Acknowledgements
The authors acknowledge the hospitalists who participated in the intervention: Jeffrey Chi, William Daines, Sumbul Desai, Poonam Hosamani, John Kugler, Charles Liao, Errol Ozdalga, and Sang Hoon Woo. The authors also acknowledge Joan Hendershott in the Finance Department and Joseph Hopkins in the Quality Department.
Disclosures: All coauthors have seen and agree with the contents of the article; submission (aside from abstracts) was not under review by any other publication. The authors report no disclosures of financial support from, or equity positions in, manufacturers of drugs or products mentioned in the article.
- National health care expenses in the U.S. civilian noninstitutionalized population, 2009. Statistical brief 355. Rockville, MD: Agency for Healthcare Research and Quality; 2012.
- Costs for hospital stays in the United States, 2010. Statistical brief 146. Rockville, MD: Agency for Healthcare Research and Quality; 2013.
- Telemetry outside critical care units: patterns of utilization and influence on management decisions. Clin Cardiol. 1998;21(7):503–505.
- Evaluation of telemetry utilization, policy, and outcomes in an inner‐city academic medical center. J Natl Med Assoc. 2010;102(7):598–604.
- Recommended guidelines for in‐hospital cardiac monitoring of adults for detection of arrhythmia. Emergency Cardiac Care Committee members. J Am Coll Cardiol. 1991;18(6):1431–1433.
- Practice standards for electrocardiographic monitoring in hospital settings: an American Heart Association scientific statement from the Councils on Cardiovascular Nursing, Clinical Cardiology, and Cardiovascular Disease in the Young: endorsed by the International Society of Computerized Electrocardiology and the American Association of Critical‐Care Nurses. Circulation. 2004;110(17):2721–2746.
- Is telemetry overused? Is it as helpful as thought? Cleve Clin J Med. 2009;76(6):368–372.
- Society of Hospital Medicine. Adult hospital medicine: five things physicians and patients should question. Available at: http://www.choosingwisely.org/societies/society‐of‐hospital‐medicine‐adult. Published February 21, 2013. Accessed October 5, 2014.
- Joint Commission on Accreditation of Healthcare Organizations. The Joint Commission announces 2014 national patient safety goal. Jt Comm Perspect. 2013;33(7):1–4.
- Optimizing telemetry utilization in an academic medical center. J Clin Outcomes Manage. 2008;15(9):435–440.
- Improving utilization of telemetry in a university hospital. J Clin Outcomes Manage. 2005;12(10):519–522.
- Altering overuse of cardiac telemetry in non‐intensive care unit settings by hardwiring the use of American Heart Association guidelines. JAMA Intern Med. 2014;174:1852–1854.
- "Innovation" institutes in academic health centers: enhancing value through leadership, education, engagement, and scholarship. Acad Med. 2014;89(9):1204–1206.
- Controlling health costs: physician responses to patient expectations for medical care. J Gen Intern Med. 2014;29(9):1234–1241.
Inpatient hospital services are a major component of total US civilian noninstitutionalized healthcare expenses, accounting for 29.3% of spending in 2009,[1] when the average cost per stay was $9700.[2] Telemetry monitoring, a widely used resource for the identification of life‐threatening arrhythmias, contributes to these costs. In 1998, Sivaram et al. estimated the cost per patient at $683; in 2010, Ivonye et al. reported that the cost difference between a telemetry bed and a nonmonitored bed in their inner‐city public teaching facility had reached $800.[3, 4]
In 1991, the American College of Cardiology published guidelines for telemetry use, which were later revised by the American Heart Association in 2004.[5, 6] Notably, the guidelines are based on expert opinion and on research data in electrocardiography.[7] The guidelines divide patients into 3 classes based on clinical condition: recommending telemetry monitoring for almost all class I patients, stating possible benefit in class II patients, and discouraging cardiac monitoring for low‐risk class III patients.[5, 6] The Choosing Wisely campaign, an initiative of the American Board of Internal Medicine and the Society of Hospital Medicine, highlights telemetry monitoring as 1 of the top 5 interventions that physicians and patients should question when determining tests and procedures.[8] Choosing Wisely suggests using a protocol to govern continuation of telemetry outside of the intensive care unit (ICU), as inappropriate monitoring increases care costs and may result in patient harm.[8] The Joint Commission 2014 National Patient Safety Goals note that numerous alarm signals, with their resulting noise and displayed information, tend to desensitize staff and cause them to miss, ignore, or even disable alarms.[9]
Few studies have examined implementation methods for improved telemetry bed utilization. One study evaluated the impact of a multispecialty telemetry policy enforced by an outside cardiologist and nurse team, noting improved cardiac monitoring bed utilization and fewer periods of hospital closure, which had previously resulted in an inability to accept new patients or in procedure cancellations.[10] Another study provided an orientation handout discussed by the chief resident, plus telemetry indication reviews during multidisciplinary rounds 3 times a week.[11]
Our study is one of the first to demonstrate a model for a hospitalist‐led approach to guide appropriate telemetry use. We investigated the impact of a multipronged approach to guide telemetry usage: (1) a hospitalist‐led, daily review of bed utilization during attending rounds, (2) a hospitalist attending‐driven, trainee‐focused education module on telemetry utilization, (3) quarterly feedback on telemetry bed utilization rates, and (4) financial incentives. We analyzed pre‐ and post‐evaluation results from the education module to measure impact on knowledge, skills, and attitudes. Additionally, we evaluated the effect of the intervention on length of stay (LOS) and bed utilization costs, while monitoring case mix index (CMI) and overall mortality.
METHODS
Setting
This study took place at Stanford Hospital and Clinics, an academic teaching center in Stanford, California. Stanford Hospital is a 444‐bed, urban medical center with 114 telemetry (intermediate ICU) beds and 66 ICU beds. The 264 medical–surgical beds lack telemetry monitoring, which is available only in the intermediate and full ICUs. All patients on telemetry units receive both cardiac monitoring and increased nursing ratios. Transfer orders are placed in the electronic medical record to shift patients between care levels. Bed control attempts to transfer patients as soon as an open bed at the appropriate care level exists.
The study included all 5 housestaff inpatient general internal medicine ward teams (which exclude cardiology, pulmonary hypertension, hematology, oncology, and post‐transplant patients). Hospitalists and nonhospitalists attend on the wards for 1‐ to 2‐week blocks. Teaching teams are staffed by 1 to 2 medical students, 2 interns, 1 resident, and 1 attending. The university institutional review board waived review for this study because it was classified as quality improvement.
Participants
Ten full‐ and part‐time hospitalist physicians participated in the standardized telemetry teaching. Fifty‐six of the approximately 80 medical students and housestaff on hospitalists' teams completed the educational evaluation. Both hospitalist and nonhospitalist teams participated in daily multidisciplinary rounds, focusing on barriers to discharge including telemetry use. Twelve nonhospitalists served on the wards during the intervention period. Hospitalists covered 72% of the internal medicine wards during the intervention period.
Study Design
We investigated the impact of a multipronged approach to guide telemetry usage from January 2013 to August 2013 (intervention period).
Hospitalist‐Led Daily Review of Bed Utilization
Hospitalists were encouraged to discuss the need for telemetry on daily attending rounds and to review indications for telemetry while on service. Prior to starting a ward block, attendings were emailed the teaching module with a reminder to discuss the need for telemetry on attending rounds. Reminders to discuss telemetry utilization were also provided during every‐other‐week hospitalist meetings. Compliance with the daily discussion was not tracked.
Hospitalist‐Driven, Trainee‐Focused, Education Module on Telemetry Utilization
The educational module was taught only by hospitalists during teaching sessions. Trainees on nonhospitalist teams did not receive dedicated teaching about telemetry usage. The module was given to learners only once. The module was a 10‐slide Microsoft PowerPoint (Microsoft Corp., Redmond, WA) presentation that reviewed the history of telemetry, the American College of Cardiology and American Heart Association guidelines, the cost difference between telemetry and nonmonitored beds, and the perceived barriers to discontinuation. The presentation was accompanied by a pre‐ and post‐evaluation to assess knowledge, skills, and attitudes regarding telemetry use (see Supporting Information, Appendix A, in the online version of this article). The pre‐ and post‐evaluations were created by consensus of a multidisciplinary expert panel after review of the evidence‐based literature.
Quarterly Feedback on Telemetry Bed Utilization Rates
Hospital bed‐use and CMI data were obtained from the Stanford finance department for the intervention period and for the baseline period, the year prior to the study (January 1, 2012 to December 31, 2012). Hospital bed‐use data included the number of days patients were on telemetry units versus medical–surgical (nontelemetry) units, differentiated by hospitalists and nonhospitalists. Cost savings were calculated by the Stanford finance department, which used Stanford‐specific internal cost accounting data to determine the impact of the intervention. These data were reviewed at hospitalist meetings on a quarterly basis. We also obtained the University HealthSystem Consortium mortality index (observed to expected) for the general internal medicine service during the baseline and intervention periods.
To measure sustainment of telemetry reduction in the postintervention period, we measured telemetry LOS from September 2014 to March 2015 (extension period).
Financial Incentives
Hospitalists were provided a $2000 bonus at the end of fiscal year 2013 if the group showed a decrease in telemetry bed use in comparison to the baseline period.
Statistical Analysis of Clinical Outcome Measures
Continuous outcomes were tested using 2‐tailed t tests. Comparisons of continuous outcomes included differences in telemetry and nontelemetry LOS and in CMI. Pairwise comparisons were made between time periods. A P value of <0.05 was considered statistically significant. Statistical analyses were performed using Stata 12.0 software (StataCorp, College Station, TX).
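The pairwise comparisons described above can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code: the actual analysis was run in Stata on Stanford bed‐use data, and the per‐patient LOS values below are hypothetical.

```python
import math
from statistics import mean, variance

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance, as used in a 2-tailed t test."""
    na, nb = len(a), len(b)
    # Pooled sample variance across the two groups.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical per-patient telemetry LOS (days), baseline vs intervention.
baseline = [2.9, 3.4, 2.1, 2.8, 3.1, 2.6, 2.4, 2.7]
intervention = [2.2, 1.9, 2.5, 2.0, 2.3, 1.8, 2.1, 2.2]
t = pooled_t(baseline, intervention)
print(f"t = {t:.2f}")
# The 2-tailed P value is then read from the t distribution with
# na + nb - 2 degrees of freedom and compared against alpha = 0.05.
```

A P value below 0.05 from this comparison would be read as significant, matching the threshold stated in the Methods.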
RESULTS
Clinical and Value Outcomes
Baseline (January 2012–December 2012) Versus Intervention Period (January 2013–August 2013)
LOS for telemetry beds was significantly reduced over the intervention period (2.75 days vs 2.13 days, P=0.005) for hospitalists. Notably, there was no significant difference in mean LOS between baseline and intervention periods for nontelemetry beds (2.84 days vs 2.72 days, P=0.32) for hospitalists. In comparison, for nonhospitalists, there was no difference in mean LOS for telemetry beds between baseline and intervention periods (2.75 days vs 2.46 days, P=0.33) and nontelemetry beds (2.64 days vs 2.89 days, P=0.26) (Table 1).
| | Baseline Period | Intervention Period | P Value | Extension Period | P Value |
|---|---|---|---|---|---|
| Length of stay, d | | | | | |
| Hospitalists: telemetry beds | 2.75 | 2.13 | 0.005 | 1.93 | 0.09 |
| Hospitalists: nontelemetry beds | 2.84 | 2.72 | 0.324 | 2.44 | 0.21 |
| Nonhospitalists: telemetry beds | 2.75 | 2.46 | 0.331 | 2.22 | 0.43 |
| Nonhospitalists: nontelemetry beds | 2.64 | 2.89 | 0.261 | 2.26 | 0.05 |
| Case mix index | | | | | |
| Hospitalists | 1.44 | 1.45 | 0.68 | 1.40 | 0.21 |
| Nonhospitalists | 1.46 | 1.40 | 0.53 | 1.53 | 0.18 |

Each P value compares the period in the column to its immediate left: baseline versus intervention, and intervention versus extension.
Costs of hospital stay were also reduced in the multipronged, hospitalist‐driven intervention group. Expenditures for telemetry beds were reduced by 22.5% over the intervention period for hospitalists (Table 2).
| | Baseline to Intervention Period | Intervention to Extension Period |
|---|---|---|
| Hospitalists: telemetry beds | 22.55% | 9.55% |
| Hospitalists: nontelemetry beds | 4.23% | 10.14% |
| Nonhospitalists: telemetry beds | 10.55% | 9.89% |
| Nonhospitalists: nontelemetry beds | 9.47% | 21.84% |
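The figures in Table 2 are period‐over‐period percent differences in expenditures. As an illustration with made‐up dollar amounts (the article reports only the percentages):

```python
def pct_change(old, new):
    """Percent change in expenditure from one period to the next."""
    return (new - old) / old * 100

# Hypothetical expenditures chosen to mirror the 22.55% reduction
# reported for hospitalist telemetry beds in Table 2.
baseline_cost = 100_000.00
intervention_cost = 77_450.00
print(f"{pct_change(baseline_cost, intervention_cost):.2f}%")  # -22.55%
```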
The mean CMI of the patient cohort managed by the hospitalists in the baseline and intervention periods was not significantly different (1.44 vs 1.45, P=0.68). The mean CMI of the patients managed by the nonhospitalists in the baseline and intervention periods was also not significantly different (1.46 vs 1.40, P=0.53) (Table 1). The mortality index was not significantly different between the baseline and intervention periods (0.77 ± 0.22 vs 0.66 ± 0.23, P=0.54), nor between the intervention and extension periods (0.66 ± 0.23 vs 0.65 ± 0.15, P=0.95).
Intervention Period (January 2013–August 2013) Versus Extension Period (September 2014–March 2015)
The decreased telemetry LOS for hospitalists was sustained from the intervention period to the extension period, from 2.13 to 1.93 (P=0.09). There was no significant change in the nontelemetry LOS in the intervention period compared to the extension period (2.72 vs 2.44, P=0.21). There was no change in the telemetry LOS for nonhospitalists from the intervention period to the extension period (2.46 vs 2.22, P=0.43).
The mean CMI in the hospitalist group was not significantly different in the intervention period compared to the extension period (1.45 to 1.40, P=0.21). The mean CMI in the nonhospitalist group did not change from the intervention period to the extension period (1.40 vs 1.53, P=0.18) (Table 1).
Education Outcomes
Of the 56 participants completing the education module and survey, 28.6% were medical students, 53.6% were interns, 12.5% were second‐year residents, and 5.4% were third‐year residents. Several baseline findings emerged from the pretest. In evaluating patterns of current telemetry use, 32.2% of participants reported evaluating the necessity of telemetry on admission only, 26.3% during transitions of care, 5.1% after discharge plans were cemented, 33.1% on a daily basis, and 3.4% rarely. When asked which member of the care team was most likely to encourage appropriate telemetry use, 20.8% identified another resident, 13.9% nursing, 37.5% the attending physician, 20.8% themselves, 4.2% the team as a whole, and 2.8% no one.
Figure 1 shows premodule results regarding trainees' perceived percentage of patient encounters during which a participant's team discussed the patient's need for telemetry.
In assessing perception of current telemetry utilization, 1.8% of participants thought 0% to 10% of patients were currently on telemetry, 19.6% thought 11% to 20%, 42.9% thought 21% to 30%, 30.4% thought 31% to 40%, and 3.6% thought 41% to 50%.
Two areas were assessed both at baseline and after the intervention: knowledge of the indications for telemetry use and of the costs related to telemetry use. We saw increased awareness of cost‐saving actions. To assess knowledge of the indications for proper telemetry use according to American Heart Association guidelines, participants were presented with a list of 5 patients with different clinical indications for telemetry use and asked which patient required telemetry the most. Of the participants, 54.5% identified the correct answer in the pretest and 61.8% in the post‐test. To assess knowledge of the costs of telemetry relative to other patient care, participants were presented with a patient case and asked to identify the most and least cost‐saving actions to safely care for the patient. When asked to identify the most cost‐saving action, 20.3% identified the correct answer in the pretest and 61.0% in the post‐test. Of those who answered incorrectly in the pretest, 51.1% answered correctly in the post‐test (P=0.002). When asked to identify the least cost‐saving action, 23.7% identified the correct answer in the pretest and 50.9% in the post‐test. Of those who answered incorrectly in the pretest, 60.0% answered correctly in the post‐test (P=0.003).
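The article does not state which test produced these P values. One plausible reading is an exact binomial upper‐tail probability for the pretest‐incorrect responders who switched to the correct answer, sketched below; the counts (47 incorrect at pretest, 24 of them correct at post‐test) and the chance‐guessing null of 1/5 for a 5‐option item are hypothetical, not taken from the article.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical: 47 trainees missed a 5-option item at pretest;
# 24 of them (51.1%) answered it correctly at post-test.
# Null hypothesis: post-test answers are chance guesses (p = 1/5).
p_val = binom_sf(24, 47, 0.2)
print(f"P = {p_val:.2g}")
```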
In the post‐test, when asked about the importance of appropriate telemetry usage in providing cost‐conscious care and assuring appropriate hospital resource management, 76.8% of participants rated it very important, 21.4% somewhat important, and 1.8% not applicable. The most commonly perceived barriers impeding discontinuation of telemetry, as reported by participants in the post‐test, were nursing desires and time. Figure 2 shows all perceived barriers.
DISCUSSION
Our study is one of the first to our knowledge to demonstrate reductions in telemetry LOS by a hospitalist intervention for telemetry utilization. Others[10, 11] have studied the impact of an orientation handout by chief residents or a multispecialty telemetry policy with enforcement by an outside cardiologist and nurse team. Dressler et al. later sustained a 70% reduction in telemetry use without adversely affecting patient safety, as assessed through numbers of rapid response activations, codes, and deaths, through integrating the AHA guidelines into their electronic ordering system.[12] However, our study has the advantage of the primary team, who knows the patient and clinical scenario best, driving the change during attending rounds. In an era where cost consciousness intersects the practice of medicine, any intervention in patient care that demonstrates cost savings without an adverse impact on patient care and resource utilization must be emphasized. This is particularly important in academic institutions, where residents and medical students are learning to integrate the principles of patient safety and quality improvement into their clinical practice.[13] We actually showed sustained telemetry LOS reductions into the extension period after our intervention. We believe this may be due to telemetry triage being integrated into our attending and resident rounding practices. Future work should include integration of telemetry triage into clinical decision support in the electronic medical record and multidisciplinary rounds to disseminate telemetry triage hospital‐wide in both the academic and community settings.
Our study also revealed that nearly half of participants were not aware of the criteria for appropriate utilization of telemetry before our intervention; in the preintervention period, there were many anecdotal and objective findings of inappropriate utilization of telemetry as well as prolonged continuation beyond the clinical needs in both the hospitalist and nonhospitalist group. For the hospitalist group (ie, the group receiving guideline‐based education on appropriate indications for telemetry utilization), there was an assessment of both appropriate usage and timely discontinuation of telemetry in the postintervention period, which we attribute in large part to adherence to the education provided to this group.
We were able to show increased knowledge of cost‐saving actions among trainees with our educational module. We believe it is imperative to educate our providers (physicians, nurses, case managers, and students within these disciplines) on the appropriate indications for telemetry use, not only to help with cost savings and resource availability (ie, allowing telemetry beds to be available for patients who need them most), but also to instill consistent expectations among our patients. For the hospitalist group (ie, the group receiving guideline‐based education on appropriate indications for telemetry utilization), there was an assessment of both appropriate usage and timely discontinuation of telemetry in the postintervention period, which we attribute in large part to adherence to the education provided to this group.
Additionally, we feel it is important to consider the impacts of inappropriate use of telemetry from a patient's perspective: it is physically restrictive/emnconvenient, alarms are disruptive, it can be a barrier for other treatments such as physical therapy, it may increase the time it takes for imaging studies, a nurse may be required to accompany patients on telemetry, and poses additional costs to their medical bill.
We believe our success is due to several strategies. First, at the start of the fiscal year when quality improvement metrics are established, this particular metric (improving the appropriate utilization and timely discontinuation of telemetry) was deemed important by all hospitalists, engendering group buy‐in prior to the intervention. Our hospitalists received a detailed and interactive tutorial session in person at the beginning of the study. This tutorial provided the hospitalists with a comprehensive understanding of the appropriate (and inappropriate) indications for telemetry monitoring, hence facilitating guideline‐directed utilization. Email reminders and the tutorial tool were provided each time a hospitalist attended on the wards, and hospitalists received a small financial incentive to comply with appropriate telemetry utilization.
Our study has several strengths. First, the time frame of our study was long enough (8 months) to allow consistent trends to emerge and to optimize exposure of housestaff and medical students to this quality‐improvement initiative. Second, our cost savings came from 2 factors, direct reduction of inappropriate telemetry use and reduction in length of stay, highlighting the dual impact of appropriate telemetry utilization on cost. The overall reductions in telemetry utilization for the intervention group were a result of both reductions in initial placement on telemetry for patients who did not meet criteria for such monitoring as well as timely discontinuation of telemetry during the patient's hospitalization. Third, our study demonstrates that physicians can be effective in driving appropriate telemetry usage by participating in the clinical decision making regarding necessity and educating providers, trainees/students, and patients on appropriate indications. Finally, we show sustainment of our intervention in the extension period, suggesting telemetry triage integration into rounding practice.
Our study has limitations as well. First, our sample size is relatively small at a single academic center. Second, due to complexities in our faculty scheduling, we were unable to completely randomize patients to a hospitalist versus nonhospitalist team. However, we believe that despite the inability to randomize, our study does show the benefit of a hospitalist attending to reduce telemetry LOS given there was no change in nonhospitalist telemetry LOS despite all of the other hospital‐wide interventions (multidisciplinary rounds, similar housestaff). Third, our study was limited in that the CMI was used as a proxy for patient complexity, and the mortality index was used as the overall marker of safety. Further studies should monitor frequency and outcomes of arrhythmic events of patients transferred from telemetry monitoring to medicalsurgical beds. Finally, as the intervention was multipronged, we are unable to determine which component led to the reductions in telemetry utilization. Each component, however, remains easily transferrable to outside institutions. We demonstrated both a reduction in initiation of telemetry as well as timely discontinuation; however, due to the complexity in capturing this accurately, we were unable to numerically quantify these individual outcomes.
Additionally, there were approximately 10 nonhospitalist attendings who also staffed the wards during the intervention time period of our study; these attendings did not undergo the telemetry tutorial/orientation. This difference, along with the Hawthorne effect for the hospitalist attendings, also likely contributed to the difference in outcomes between the 2 attending cohorts in the intervention period.
CONCLUSIONS
Our results demonstrate that a multipronged hospitalist‐driven intervention to improve appropriate use of telemetry reduces telemetry LOS and cost. Hence, we believe that targeted, education‐driven interventions with monitoring of progress can have demonstrable impacts on changing practice. Physicians will need to make trade‐offs in clinical practice to balance efficient resource utilization with the patient's evolving condition in the inpatient setting, the complexities of clinical workflow, and the patient's expectations.[14] Appropriate telemetry utilization is a prime example of what needs to be done well in the future for high‐value care.
Acknowledgements
The authors acknowledge the hospitalists who participated in the intervention: Jeffrey Chi, Willliam Daines, Sumbul Desai, Poonam Hosamani, John Kugler, Charles Liao, Errol Ozdalga, and Sang Hoon Woo. The authors also acknowledge Joan Hendershott in the Finance Department and Joseph Hopkins in the Quality Department.
Disclosures: All coauthors have seen and agree with the contents of the article; submission (aside from abstracts) was not under review by any other publication. The authors report no disclosures of financial support from, or equity positions in, manufacturers of drugs or products mentioned in the article.
Inpatient hospital services are a major component of total US civilian noninstitutionalized healthcare expenses, accounting for 29.3% of spending in 2009,[1] when the average cost per stay was $9,700.[2] Telemetry monitoring, a widely used resource for the identification of life‐threatening arrhythmias, contributes to these costs. In 1998, Sivaram et al. estimated the cost per patient at $683; in 2010, Ivonye et al. reported that the cost difference between a telemetry bed and a nonmonitored bed in their inner‐city public teaching facility had reached $800.[3, 4]
In 1991, the American College of Cardiology published guidelines for telemetry use, which were later revised by the American Heart Association in 2004.[5, 6] Notably, the guidelines are based on expert opinion and on research data in electrocardiography.[7] The guidelines divide patients into 3 classes based on clinical condition: recommending telemetry monitoring for almost all class I patients, stating possible benefit in class II patients, and discouraging cardiac monitoring for the low‐risk class III patients.[5, 6] The Choosing Wisely campaign, an initiative of the American Board of Internal Medicine and the Society of Hospital Medicine, highlights telemetry monitoring as 1 of the top 5 interventions that physicians and patients should question when determining tests and procedures.[8] Choosing Wisely suggests using a protocol to govern continuation of telemetry outside of the intensive care unit (ICU), as inappropriate monitoring increases care costs and may result in patient harm.[8] The Joint Commission 2014 National Patient Safety Goals notes that numerous alarm signals, and the resulting noise and displayed information, tend to desensitize staff and cause them to miss or ignore alarm signals or even disable them.[9]
Few studies have examined implementation methods for improved telemetry bed utilization. One study evaluated the impact of a multispecialty telemetry policy with enforcement by an outside cardiologist and nurse team, noting improved cardiac monitoring bed utilization and less frequent closure of the academic hospital to new admissions, which had previously resulted in an inability to accept new patients or in procedure cancellations.[10] Another study provided an orientation handout discussed by the chief resident and telemetry indication reviews during multidisciplinary rounds 3 times a week.[11]
Our study is one of the first to demonstrate a model for a hospitalist‐led approach to guide appropriate telemetry use. We investigated the impact of a multipronged approach to guide telemetry usage: (1) a hospitalist‐led, daily review of bed utilization during attending rounds, (2) a hospitalist attending‐driven, trainee‐focused education module on telemetry utilization, (3) quarterly feedback on telemetry bed utilization rates, and (4) financial incentives. We analyzed pre‐ and post‐evaluation results from the education module to measure impact on knowledge, skills, and attitudes. Additionally, we evaluated the effect of the intervention on length of stay (LOS) and bed utilization costs, while monitoring case mix index (CMI) and overall mortality.
METHODS
Setting
This study took place at Stanford Hospital and Clinics, an academic teaching center in Stanford, California. Stanford Hospital is a 444‐bed, urban medical center with 114 telemetry (intermediate ICU) beds and 66 ICU beds. The 264 medical–surgical beds lack telemetry monitoring, which is available only in the intermediate and full ICUs. All patients on telemetry units receive both cardiac monitoring and increased nursing ratios. Transfer orders are placed in the electronic medical record to shift patients between care levels. Bed control attempts to transfer patients as soon as an open bed in the appropriate care level exists.
The study included all 5 housestaff inpatient general internal medicine wards teams (which excludes cardiology, pulmonary hypertension, hematology, oncology, and post‐transplant patients). Hospitalists and nonhospitalists attend on the wards for 1‐ to 2‐week blocks. Teaching teams are staffed by 1 to 2 medical students, 2 interns, 1 resident, and 1 attending. The university institutional review board notice of determination waived review for this study because it was classified as quality improvement.
Participants
Ten full‐ and part‐time hospitalist physicians participated in the standardized telemetry teaching. Fifty‐six of the approximately 80 medical students and housestaff on hospitalists' teams completed the educational evaluation. Both hospitalist and nonhospitalist teams participated in daily multidisciplinary rounds, focusing on barriers to discharge including telemetry use. Twelve nonhospitalists served on the wards during the intervention period. Hospitalists covered 72% of the internal medicine wards during the intervention period.
Study Design
We investigated the impact of a multipronged approach to guide telemetry usage from January 2013 to August 2013 (intervention period).
Hospitalist‐Led Daily Review of Bed Utilization
Hospitalists were encouraged to discuss the need for telemetry on daily attending rounds and to review indications for telemetry while on service. Prior to starting a ward block, attendings were emailed the teaching module with a reminder to discuss the need for telemetry on attending rounds. Reminders to discuss telemetry utilization were also provided during every‐other‐week hospitalist meetings. Compliance with the daily discussion was not tracked.
Hospitalist‐Driven, Trainee‐Focused, Education Module on Telemetry Utilization
The educational module was taught during teaching sessions only by the hospitalists. Trainees on nonhospitalist teams did not receive dedicated teaching about telemetry usage. The module was given to learners only once. The module was a 10‐slide, Microsoft PowerPoint (Microsoft Corp., Redmond, WA) presentation that reviewed the history of telemetry, the American College of Cardiology and the American Heart Association guidelines, the cost difference between telemetry and nonmonitored beds, and the perceived barriers to discontinuation. The presentation was accompanied by a pre‐ and post‐evaluation to elicit knowledge, skills, and attitudes of telemetry use (see Supporting Information, Appendix A, in the online version of this article). The pre‐ and post‐evaluations were created through consensus with a multidisciplinary, expert panel after reviewing the evidence‐based literature.
Quarterly Feedback on Telemetry Bed Utilization Rates
Hospital bed‐use and CMI data were obtained from the Stanford finance department for the intervention period and for the baseline period, which was the year prior to the study, January 1, 2012 to December 31, 2012. Hospital bed‐use data included the number of days patients were on telemetry units versus medical–surgical units (nontelemetry units), differentiated by hospitalists and nonhospitalists. Cost savings were calculated by the Stanford finance department, which used Stanford‐specific, internal cost accounting data to determine the impact of the intervention. These data were reviewed at hospitalist meetings on a quarterly basis. We also obtained the University Healthsystem Consortium mortality index (observed to expected) for the general internal medicine service during the baseline and intervention periods.
To measure sustainment of telemetry reduction in the postintervention period, we measured telemetry LOS from September 2014 to March 2015 (extension period).
Financial Incentives
Hospitalists were provided a $2000 bonus at the end of fiscal year 2013 if the group showed a decrease in telemetry bed use in comparison to the baseline period.
Statistical Analysis of Clinical Outcome Measures
Continuous outcomes were tested using 2‐tailed t tests. Comparisons of continuous outcomes included differences in telemetry and nontelemetry LOS and in CMI. Pairwise comparisons were made across time periods. A P value of <0.05 was considered statistically significant. Statistical analyses were performed using Stata 12.0 software (StataCorp, College Station, TX).
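The pairwise comparison described above can be sketched in a few lines. This is an illustration only: the samples below are invented (the study's patient‐level LOS data are not published), and a hand‐rolled Welch t statistic stands in for the Stata two‐sample t test, which additionally reports the corresponding P value.

```python
import math
from statistics import mean, variance


def welch_t(a, b):
    """Two-sample Welch t statistic and Welch-Satterthwaite degrees of
    freedom, mirroring the kind of two-tailed t test used in the study."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances
    se2 = va / na + vb / nb             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df


# Hypothetical per-patient telemetry LOS samples (days):
# baseline period vs. intervention period.
baseline = [2.1, 3.0, 2.8, 3.4, 2.6, 2.9, 3.1, 2.5]
intervention = [1.9, 2.2, 2.0, 2.4, 1.8, 2.3, 2.1, 2.0]

t, df = welch_t(baseline, intervention)  # positive t: baseline mean LOS is higher
```

The resulting t statistic would be referred to a t distribution with `df` degrees of freedom to obtain the two-tailed P value.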
RESULTS
Clinical and Value Outcomes
Baseline Period (January 2012–December 2012) Versus Intervention Period (January 2013–August 2013)
LOS for telemetry beds was significantly reduced over the intervention period (2.75 days vs 2.13 days, P=0.005) for hospitalists. Notably, there was no significant difference in mean LOS between baseline and intervention periods for nontelemetry beds (2.84 days vs 2.72 days, P=0.32) for hospitalists. In comparison, for nonhospitalists, there was no difference in mean LOS for telemetry beds between baseline and intervention periods (2.75 days vs 2.46 days, P=0.33) and nontelemetry beds (2.64 days vs 2.89 days, P=0.26) (Table 1).
Table 1. Length of Stay (Days) and Case Mix Index by Period

| | Baseline Period | Intervention Period | P Value | Extension Period | P Value |
| --- | --- | --- | --- | --- | --- |
| Length of stay | | | | | |
| Hospitalists | | | | | |
| Telemetry beds | 2.75 | 2.13 | 0.005 | 1.93 | 0.09 |
| Nontelemetry beds | 2.84 | 2.72 | 0.324 | 2.44 | 0.21 |
| Nonhospitalists | | | | | |
| Telemetry beds | 2.75 | 2.46 | 0.331 | 2.22 | 0.43 |
| Nontelemetry beds | 2.64 | 2.89 | 0.261 | 2.26 | 0.05 |
| Case mix index | | | | | |
| Hospitalists | 1.44 | 1.45 | 0.68 | 1.40 | 0.21 |
| Nonhospitalists | 1.46 | 1.40 | 0.53 | 1.53 | 0.18 |
Costs of hospital stay were also reduced in the multipronged, hospitalist‐driven intervention group. Expenditures for telemetry beds were reduced by 22.5% over the intervention period for hospitalists (Table 2).
Table 2. Change in Bed Expenditures Between Periods

| | Baseline to Intervention Period | Intervention to Extension Period |
| --- | --- | --- |
| Hospitalists | | |
| Telemetry beds | 22.55% | 9.55% |
| Nontelemetry beds | 4.23% | 10.14% |
| Nonhospitalists | | |
| Telemetry beds | 10.55% | 9.89% |
| Nontelemetry beds | 9.47% | 21.84% |
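The period-over-period figures in Table 2 follow the standard percent-change definition. A minimal sketch, with expenditure figures invented for illustration (the study reports only the percentages, not the underlying dollar amounts):

```python
def pct_change(baseline_cost: float, period_cost: float) -> float:
    """Percent change in expenditure from one period to the next;
    negative values indicate a reduction."""
    return (period_cost - baseline_cost) / baseline_cost * 100.0


# Hypothetical example: telemetry-bed spending falling from $100,000
# to $77,450 is a 22.55% reduction -- the same magnitude reported for
# the hospitalist group between the baseline and intervention periods.
change = pct_change(100_000.0, 77_450.0)
print(round(change, 2))  # -22.55
```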
The mean CMI of the patient cohort managed by the hospitalists in the baseline and intervention periods was not significantly different (1.44 vs 1.45, P=0.68). The mean CMI of the patients managed by the nonhospitalists in the baseline and intervention periods was also not significantly different (1.46 vs 1.40, P=0.53) (Table 1). The mortality index was not significantly different between the baseline and intervention periods (0.77 ± 0.22 vs 0.66 ± 0.23, P=0.54), nor between the intervention and extension periods (0.66 ± 0.23 vs 0.65 ± 0.15, P=0.95).
Intervention Period (January 2013–August 2013) Versus Extension Period (September 2014–March 2015)
The decreased telemetry LOS for hospitalists was sustained from the intervention period to the extension period, from 2.13 to 1.93 (P=0.09). There was no significant change in the nontelemetry LOS in the intervention period compared to the extension period (2.72 vs 2.44, P=0.21). There was no change in the telemetry LOS for nonhospitalists from the intervention period to the extension period (2.46 vs 2.22, P=0.43).
The mean CMI in the hospitalist group was not significantly different in the intervention period compared to the extension period (1.45 to 1.40, P=0.21). The mean CMI in the nonhospitalist group did not change from the intervention period to the extension period (1.40 vs 1.53, P=0.18) (Table 1).
Education Outcomes
Of the 56 participants completing the education module and survey, 28.6% were medical students, 53.6% were interns, 12.5% were second‐year residents, and 5.4% were third‐year residents. Several baseline findings emerged from the pretest. In evaluating patterns of current telemetry use, 32.2% of participants reported evaluating the necessity of telemetry for patients on admission only, 26.3% during transitions of care, 5.1% after discharge plans were cemented, 33.1% on a daily basis, and 3.4% rarely. When asked which member of the care team was most likely to encourage appropriate telemetry use, 20.8% identified another resident, 13.9% nursing, 37.5% the attending physician, 20.8% themselves, 4.2% the team as a whole, and 2.8% no one.
Figure 1 shows premodule results regarding the trainees' perceived percentage of patient encounters during which a participant's team discussed the patient's need for telemetry.
In assessing perception of current telemetry utilization, 1.8% of participants thought 0% to 10% of patients were currently on telemetry, 19.6% thought 11% to 20%, 42.9% thought 21% to 30%, 30.4% thought 31% to 40%, and 3.6% thought 41% to 50%.
Two areas were assessed at both baseline and after the intervention: knowledge of indications for telemetry use and of costs related to telemetry use. We saw increased awareness of cost‐saving actions. To assess current knowledge of the indications for proper telemetry use according to American Heart Association guidelines, participants were presented with a list of 5 patients with different clinical indications for telemetry use and asked which patient required telemetry the most. Of the participants, 54.5% identified the correct answer in the pretest and 61.8% identified the correct answer in the post‐test. To assess knowledge of the costs of telemetry relative to other patient care, participants were presented with a patient case and asked to identify the most and least cost‐saving actions to safely care for the patient. When asked to identify the most cost‐saving action, 20.3% identified the correct answer in the pretest and 61.0% identified the correct answer in the post‐test. Of those who answered incorrectly in the pretest, 51.1% answered correctly in the post‐test (P=0.002). When asked to identify the least cost‐saving action, 23.7% identified the correct answer in the pretest and 50.9% identified the correct answer in the post‐test. Of those who answered incorrectly in the pretest, 60.0% answered correctly in the post‐test (P=0.003).
In the post‐test, when asked about the importance of appropriate telemetry usage in providing cost‐conscious care and assuring appropriate hospital resource management, 76.8% of participants rated the need as very important, 21.4% as somewhat important, and 1.8% as not applicable. The most commonly perceived barriers impeding discontinuation of telemetry, as reported by participants in the post‐test, were nursing desires and time. Figure 2 shows all perceived barriers.
DISCUSSION
Our study is, to our knowledge, one of the first to demonstrate reductions in telemetry LOS by a hospitalist‐driven intervention for telemetry utilization. Others[10, 11] have studied the impact of an orientation handout by chief residents or a multispecialty telemetry policy with enforcement by an outside cardiologist and nurse team. Dressler et al. later sustained a 70% reduction in telemetry use without adversely affecting patient safety, as assessed through numbers of rapid response activations, codes, and deaths, by integrating the AHA guidelines into their electronic ordering system.[12] However, our study has the advantage of the primary team, who knows the patient and clinical scenario best, driving the change during attending rounds. In an era in which cost consciousness intersects the practice of medicine, any intervention in patient care that demonstrates cost savings without an adverse impact on patient care and resource utilization must be emphasized. This is particularly important in academic institutions, where residents and medical students are learning to integrate the principles of patient safety and quality improvement into their clinical practice.[13] We showed sustained telemetry LOS reductions into the extension period after our intervention. We believe this may be due to telemetry triage being integrated into our attending and resident rounding practices. Future work should include integration of telemetry triage into clinical decision support in the electronic medical record and into multidisciplinary rounds, to disseminate telemetry triage hospital‐wide in both academic and community settings.
Our study also revealed that nearly half of participants were not aware of the criteria for appropriate utilization of telemetry before our intervention; in the preintervention period, there were many anecdotal and objective findings of inappropriate telemetry utilization, as well as prolonged continuation beyond clinical need, in both the hospitalist and nonhospitalist groups. For the hospitalist group (ie, the group receiving guideline‐based education on appropriate indications for telemetry utilization), there was an improvement in both appropriate usage and timely discontinuation of telemetry in the postintervention period, which we attribute in large part to adherence to the education provided to this group.
We were able to show increased knowledge of cost‐saving actions among trainees with our educational module. We believe it is imperative to educate our providers (physicians, nurses, case managers, and students within these disciplines) on the appropriate indications for telemetry use, not only to help with cost savings and resource availability (ie, allowing telemetry beds to be available for the patients who need them most), but also to instill consistent expectations among our patients.
Additionally, we feel it is important to consider the impact of inappropriate telemetry use from the patient's perspective: it is physically restrictive and inconvenient, its alarms are disruptive, it can be a barrier to other treatments such as physical therapy, it may increase the time required for imaging studies, it may require a nurse to accompany the patient off the unit, and it adds costs to the patient's medical bill.
We believe our success is due to several strategies. First, at the start of the fiscal year when quality improvement metrics are established, this particular metric (improving the appropriate utilization and timely discontinuation of telemetry) was deemed important by all hospitalists, engendering group buy‐in prior to the intervention. Our hospitalists received a detailed and interactive tutorial session in person at the beginning of the study. This tutorial provided the hospitalists with a comprehensive understanding of the appropriate (and inappropriate) indications for telemetry monitoring, hence facilitating guideline‐directed utilization. Email reminders and the tutorial tool were provided each time a hospitalist attended on the wards, and hospitalists received a small financial incentive to comply with appropriate telemetry utilization.
Our study has several strengths. First, the time frame of our study was long enough (8 months) to allow consistent trends to emerge and to optimize exposure of housestaff and medical students to this quality‐improvement initiative. Second, our cost savings came from 2 factors, direct reduction of inappropriate telemetry use and reduction in length of stay, highlighting the dual impact of appropriate telemetry utilization on cost. The overall reductions in telemetry utilization for the intervention group were a result of both reductions in initial placement on telemetry for patients who did not meet criteria for such monitoring as well as timely discontinuation of telemetry during the patient's hospitalization. Third, our study demonstrates that physicians can be effective in driving appropriate telemetry usage by participating in the clinical decision making regarding necessity and educating providers, trainees/students, and patients on appropriate indications. Finally, we show sustainment of our intervention in the extension period, suggesting telemetry triage integration into rounding practice.
Our study has limitations as well. First, our sample size was relatively small, and the study was conducted at a single academic center. Second, due to complexities in our faculty scheduling, we were unable to completely randomize patients to a hospitalist versus nonhospitalist team. However, we believe that despite the inability to randomize, our study does show the benefit of a hospitalist attending in reducing telemetry LOS, given that there was no change in nonhospitalist telemetry LOS despite all of the other hospital‐wide interventions (multidisciplinary rounds, similar housestaff). Third, our study was limited in that the CMI was used as a proxy for patient complexity, and the mortality index was used as the overall marker of safety. Further studies should monitor the frequency and outcomes of arrhythmic events among patients transferred from telemetry monitoring to medical–surgical beds. Finally, as the intervention was multipronged, we are unable to determine which component led to the reductions in telemetry utilization. Each component, however, remains easily transferrable to outside institutions. We demonstrated both a reduction in the initiation of telemetry and timely discontinuation; however, due to the complexity of capturing this accurately, we were unable to quantify these individual outcomes.
Additionally, there were approximately 10 nonhospitalist attendings who also staffed the wards during the intervention time period of our study; these attendings did not undergo the telemetry tutorial/orientation. This difference, along with the Hawthorne effect for the hospitalist attendings, also likely contributed to the difference in outcomes between the 2 attending cohorts in the intervention period.
CONCLUSIONS
Our results demonstrate that a multipronged hospitalist‐driven intervention to improve appropriate use of telemetry reduces telemetry LOS and cost. Hence, we believe that targeted, education‐driven interventions with monitoring of progress can have demonstrable impacts on changing practice. Physicians will need to make trade‐offs in clinical practice to balance efficient resource utilization with the patient's evolving condition in the inpatient setting, the complexities of clinical workflow, and the patient's expectations.[14] Appropriate telemetry utilization is a prime example of what needs to be done well in the future for high‐value care.
Acknowledgements
The authors acknowledge the hospitalists who participated in the intervention: Jeffrey Chi, William Daines, Sumbul Desai, Poonam Hosamani, John Kugler, Charles Liao, Errol Ozdalga, and Sang Hoon Woo. The authors also acknowledge Joan Hendershott in the Finance Department and Joseph Hopkins in the Quality Department.
Disclosures: All coauthors have seen and agree with the contents of the article; submission (aside from abstracts) was not under review by any other publication. The authors report no disclosures of financial support from, or equity positions in, manufacturers of drugs or products mentioned in the article.
1. National health care expenses in the U.S. civilian noninstitutionalized population, 2009. Statistical brief 355. Rockville, MD: Agency for Healthcare Research and Quality; 2012.
2. Costs for hospital stays in the United States, 2010. Statistical brief 146. Rockville, MD: Agency for Healthcare Research and Quality; 2013.
3. Telemetry outside critical care units: patterns of utilization and influence on management decisions. Clin Cardiol. 1998;21(7):503–505.
4. Evaluation of telemetry utilization, policy, and outcomes in an inner‐city academic medical center. J Natl Med Assoc. 2010;102(7):598–604.
5. Recommended guidelines for in‐hospital cardiac monitoring of adults for detection of arrhythmia. Emergency Cardiac Care Committee members. J Am Coll Cardiol. 1991;18(6):1431–1433.
6. Practice standards for electrocardiographic monitoring in hospital settings: an American Heart Association scientific statement from the Councils on Cardiovascular Nursing, Clinical Cardiology, and Cardiovascular Disease in the Young: endorsed by the International Society of Computerized Electrocardiology and the American Association of Critical‐Care Nurses. Circulation. 2004;110(17):2721–2746.
7. Is telemetry overused? Is it as helpful as thought? Cleve Clin J Med. 2009;76(6):368–372.
8. Society of Hospital Medicine. Adult Hospital Medicine. Five things physicians and patients should question. Available at: http://www.choosingwisely.org/societies/society‐of‐hospital‐medicine‐adult. Published February 21, 2013. Accessed October 5, 2014.
9. Joint Commission on Accreditation of Healthcare Organizations. The Joint Commission announces 2014 national patient safety goal. Jt Comm Perspect. 2013;33(7):1–4.
10. Optimizing telemetry utilization in an academic medical center. J Clin Outcomes Manage. 2008;15(9):435–440.
11. Improving utilization of telemetry in a university hospital. J Clin Outcomes Manage. 2005;12(10):519–522.
12. Altering overuse of cardiac telemetry in non‐intensive care unit settings by hardwiring the use of American Heart Association guidelines. JAMA Intern Med. 2014;174:1852–1854.
13. "Innovation" institutes in academic health centers: enhancing value through leadership, education, engagement, and scholarship. Acad Med. 2014;89(9):1204–1206.
14. Controlling health costs: physician responses to patient expectations for medical care. J Gen Intern Med. 2014;29(9):1234–1241.