Internal Medicine Resident Engagement with a Laboratory Utilization Dashboard: Mixed Methods Study


Recent efforts to reduce waste and overuse in healthcare include reforms such as merit-based physician reimbursement for efficient resource use1 and the inclusion of cost-effective care as a competency for physician trainees.2 Focusing on resource use in physician training and reimbursement presumes that teaching and feedback about utilization can alter physician behavior. Early studies of social comparison feedback observed considerable variation in effectiveness, depending on the behavior targeted and how feedback was provided to physicians.3-5 The widespread adoption of electronic medical record (EMR) software makes it possible to deliver continuous, real-time feedback via EMR-based practice dashboards. Currently, little is known about physician engagement with practice dashboards and, in particular, about trainee engagement with dashboards aimed at improving cost-effective care.

To inform future efforts in using social comparison feedback to teach cost-effective care in residency, we measured internal medicine resident engagement with an EMR-based utilization dashboard that provided feedback on their use of routine laboratory tests on an inpatient medicine service. Routine labs are often overused in the inpatient setting; one study reported that 68% of laboratory tests ordered in an academic hospital did not contribute to improving patient outcomes.6 To understand resident perceptions of the dashboards and identify barriers to their use, we conducted a mixed methods study tracking resident utilization of the dashboard over time and collecting qualitative data from 3 focus groups about resident attitudes toward the dashboards.

METHODS

From January 2016 to June 2016, resident-specific rates of routine lab orders (eg, complete blood count, basic metabolic panel, complete metabolic panel, liver function panel, and common coagulation tests) were synthesized continuously in a web-based dashboard. Laboratory orders could be placed either individually on a day-to-day basis or ordered on a recurrent basis (eg, daily morning labs ordered on admission). The dashboard contained an interactive graph, which plotted the average number of labs per patient-day ordered by each resident over the past week, along with an overall graph for all services for comparison (Appendix Figure). Residents could click on an individual day on the graph to review the labs they ordered for each patient. The dashboard also allowed the user to look up each patient’s medical record to obtain more detailed information.
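To illustrate the kind of aggregation the dashboard performed, the following Python sketch computes routine labs per patient-day for each resident over the trailing week, alongside a service-wide comparator. The column names (resident, order_date, date) and table layout are assumptions made for the example, not the study's actual data model.

import pandas as pd

def labs_per_patient_day(orders: pd.DataFrame, census: pd.DataFrame,
                         as_of: pd.Timestamp) -> pd.DataFrame:
    """orders: one row per routine lab order, columns [resident, order_date].
    census: one row per resident-patient-day covered, columns [resident, date]."""
    window = pd.Timedelta(days=7)
    recent_orders = orders[(orders["order_date"] > as_of - window) &
                           (orders["order_date"] <= as_of)]
    recent_census = census[(census["date"] > as_of - window) &
                           (census["date"] <= as_of)]
    labs = recent_orders.groupby("resident").size().rename("labs")
    patient_days = recent_census.groupby("resident").size().rename("patient_days")
    rates = pd.concat([labs, patient_days], axis=1).fillna(0)
    # Per-resident rate and the all-service comparator plotted on the dashboard.
    rates["labs_per_patient_day"] = rates["labs"] / rates["patient_days"]
    rates["service_average"] = labs.sum() / patient_days.sum()
    return rates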

All residents received an e-mail describing the study, including the purpose of the intervention, a basic description of the feedback intervention (dashboard and e-mail), potential risks and benefits, the duration and scope of data collection, and the contact information of the principal investigator. One hundred ninety-eight resident-blocks on 6 general medicine services at the Hospital of the University of Pennsylvania were cluster-randomized with equal probability to 1 of 2 arms: (1) those e-mailed a snapshot of the personalized dashboard, a link to the online dashboard, and text containing resident and service utilization averages, and (2) those who did not receive the feedback intervention. Postgraduate year (PGY) 1 residents were attributed only the orders they placed themselves, whereas PGY2 and PGY3 residents were attributed orders for all patients assigned to their team.
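As a simple illustration of the allocation step, the Python sketch below assigns resident-blocks to the two arms with equal probability. The block identifiers and seed are hypothetical; the study's actual randomization may have used different tooling.

import random

def randomize_blocks(resident_blocks, seed=2016):
    rng = random.Random(seed)  # fixed seed makes the allocation reproducible
    return {block: rng.choice(["feedback", "control"]) for block in resident_blocks}

allocation = randomize_blocks([f"resident_block_{i:03d}" for i in range(1, 199)])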

The initial e-mails were timed to arrive in the middle of each resident’s 2-week service to allow for a baseline and follow-up period. The e-mail contained an attachment of a snapshot of the personalized graphic dashboard (Appendix Figure), a link to the online dashboard, and a few sentences summarizing the resident utilization average compared to the general medicine service overall, for the same time interval. They were followed by a reminder e-mail 24 hours later containing only the link to the report card. We measured resident engagement with the utilization dashboard by using e-mail read-receipts and a web-based tracking platform that recorded when the dashboard was opened and who logged on.

Following completion of the intervention, 3 hour-long focus groups were conducted with residents. The focus groups were guided by prescripted questions to prompt discussion of the advantages and drawbacks of the study intervention and of the use of dashboards in general. These sessions were digitally recorded and transcribed. The transcripts were reviewed by 2 authors (KR and GK) and analyzed to identify common themes using a grounded theory approach.7 First, the transcripts were reviewed independently by each author, who each generated a broad list of themes across 3 domains: dashboard usability, barriers to use, and suggestions for the future. Next, the codebook was refined through an iterative series of discussions and transcript review, resulting in a unified codebook. Lastly, all transcripts were reviewed using the final codebook definitions, resulting in a list of exemplary quotes and suggestions.

The study was approved by the University of Pennsylvania Institutional Review Board and registered on clinicaltrials.gov (NCT02330289).


RESULTS

Eighty unique residents participated in the intervention, including 51 PGY1s (64%) and 29 PGY2- or PGY3-level (36%) residents. Of these, 19 of 80 (24%) physicians participated more than once. Seventy-four percent of participants opened the e-mail, and 21% opened the link to the dashboard. The average elapsed time from receiving the initial e-mail to logging into the dashboard was 28.5 hours (standard deviation [SD] = 25.7, median = 25.5, interquartile range [IQR] = 40.5). On average, residents deviated from the service mean by 0.54 laboratory test orders (SD = 0.49, median = 0.40, IQR = 0.60). The mean baseline rate of targeted labs was 1.30 (SD 1.77) labs per physician per patient-day.8
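For readers interested in how engagement latency summaries of this kind can be derived from log data, the following Python sketch computes the mean, SD, median, and IQR of hours from e-mail receipt to first dashboard login. The dictionary inputs represent an assumed, simplified schema rather than the study's actual tracking platform.

import statistics

def summarize_latency_hours(email_sent, first_login):
    """Both arguments map resident ID -> datetime; residents who never
    logged in are simply absent from first_login."""
    hours = [(first_login[r] - email_sent[r]).total_seconds() / 3600
             for r in first_login if r in email_sent]
    q1, _, q3 = statistics.quantiles(hours, n=4)  # quartile cut points
    return {"mean": statistics.mean(hours),
            "sd": statistics.stdev(hours),
            "median": statistics.median(hours),
            "iqr": q3 - q1}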

Table 1 shows the associations between dashboard use and participant characteristics. Each 1-SD deviation from the service average in labs per patient-day was associated with higher odds of opening the link to the dashboard (odds ratio [OR], 1.48; 95% confidence interval [CI], 1.01-2.17; P = 0.047). Associations with other characteristics (direction of deviation from the mean, PGY level, first occurrence of the intervention, weeks since the start of the intervention, and other team members opening the link) were not significant.
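As a simplified sketch of the type of model behind this estimate, the Python code below fits a logistic regression of dashboard-link opening on absolute deviation from the service average in SD units, using synthetic data. The single covariate and simulated outcomes are illustrative only; the study's actual analysis may have included additional covariates and accounted for clustering of observations.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 80
deviation_sd = np.abs(rng.normal(size=n))                # |deviation| in SD units
p_open = 1 / (1 + np.exp(-(-1.5 + 0.4 * deviation_sd)))  # assumed true relationship
opened = rng.binomial(1, p_open)

fit = sm.Logit(opened, sm.add_constant(deviation_sd)).fit(disp=0)
odds_ratio = np.exp(fit.params[1])           # OR per 1-SD increase in deviation
ci_low, ci_high = np.exp(fit.conf_int()[1])  # 95% CI on the OR scale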

We did not observe a statistically significant difference in routine laboratory ordering by dashboard use, although residents who opened the link to the dashboard ordered 0.26 fewer labs per doctor-patient-day than those who did not (95% CI, −0.77 to 0.25; P = 0.31). The greatest difference was observed on day 2 after the intervention, when lab orders were lower among dashboard users by 0.59 labs per doctor-patient-day (95% CI, −1.41 to 0.24; P = 0.16) compared with residents who did not open the dashboard.

Table 2 displays the main themes generated from the resident focus groups and provides representative quotes. Focus groups were open to all residents, including those who were not randomized to receive the study intervention. A total of 23 residents participated in the focus groups. First, residents commented on the advantages of the dashboard intervention for feedback on test utilization. Specifically, they felt it raised awareness about overuse, appreciated receiving individualized feedback about their own practice, and liked that the data could be reviewed quickly. However, residents also expressed concerns about the design and implementation of the dashboard, including a lack of adjustment for patient complexity, small sample size, and time constraints limiting detailed dashboard exploration. Second, participants questioned the practicality of using such data-driven individualized feedback for training purposes in general, considering the low patient volume assigned to trainees and the sense that such feedback is too simplistic. For example, 1 participant commented, “…it really takes all of the thinking out of it and just is glossing over the numbers, which I think could be a little bit frustrating.”

Third, participants identified barriers to using dashboards during training, including time constraints, insufficient patient volume, possible unanticipated consequences, and concerns regarding punitive action by the hospital administration or teaching supervisors. Suggestions to improve the uptake of practice feedback via dashboards included additional guidance for interpreting the data, exclusion of outlier cases or risk-adjustment, and ensuring ease of access to the data.

Last, participants also expressed enthusiasm toward receiving other types of individualized feedback data, including patient satisfaction, timing of discharges, readmission rates, utilization of consulting services, length of stay, antibiotic stewardship practices, costs and utilization data, and mortality or intensive care unit transfer rates (data not shown).

DISCUSSION

Overall, engagement of internal medicine trainees with the online dashboard was low. Most residents did open the e-mails containing the link and basic information about their utilization rates, but fewer than a quarter accessed the dashboard containing real-time data. Additionally, on average, it took them more than a day to do so. However, there is some indication that residents whose utilization, as summarized in the body of the e-mail, deviated further from the mean in either direction were more motivated to investigate and click the link to access the dashboard. This suggests that providing practice feedback in this manner may be effective for a subset of residents who deviate from “typical practice,” and as such, dashboards may represent a potential educational tool that could be aligned with practice-based learning competencies.

The focus groups provided important context about residents’ attitudes toward EMR-based dashboards. Overall, residents were enthusiastic about receiving information regarding their personal laboratory ordering, in terms of preventing both iatrogenic harm and waste of resources. This supports previous research that found that both medical students and residents overwhelmingly believe that the overuse of labs is a problem and that there may be insufficient focus on cost-conscious care during training.9,10 However, many residents questioned several aspects of the specific intervention used in this study and suggested that significant improvements would need to be made to future dashboards to increase their utility.

To our knowledge, this is the first attempt to evaluate resident engagement and attitudes toward receiving practice-based feedback via an EMR-based online dashboard. Previous efforts to influence resident laboratory ordering behavior have primarily focused on didactic sessions, financial incentives, price transparency, and repeated e-mail messaging containing summary statistics about ordering practices and peer comparisons.11-14 While some prior studies observed success in decreasing unnecessary use of laboratory tests, such efforts are challenging to implement routinely on a teaching service with multiple rotating providers and may be difficult to replicate. Future iterations of dashboards that incorporate focused curriculum design and active participation of teaching attendings require further study.

This study has several limitations. The sample size of physicians is relatively small and consists of residents at a single institution, which may limit the generalizability of the results. Additionally, the dashboard captured laboratory-ordering rates during a 2-week block on an inpatient medicine service and was not adjusted for factors such as patient case mix; however, the rates were adjusted for patient volume. In future iterations of utilization dashboards, residents’ concerns about small sample size and variability in clinical severity could be addressed by adopting risk-adjustment methodologies that account for differences in patient complexity. This could be accomplished using currently available EMR data, such as diagnosis-related groups or diagnosis codes to adjust for clinical complexity, or by reporting expected length of stay as a surrogate indicator of complexity.
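One way such adjustment could be layered onto a future dashboard is sketched below in Python: expected length of stay serves as a rough complexity surrogate, and each resident's ordering is reported as an observed-to-expected ratio. The column names and the expected-rate mapping are assumptions for illustration, not a validated risk-adjustment model.

import pandas as pd

def observed_to_expected(patients: pd.DataFrame) -> pd.DataFrame:
    """patients columns: resident, labs_ordered, patient_days, expected_los."""
    df = patients.copy()
    # Toy assumption: expected labs per patient-day rises with expected LOS.
    df["expected_rate"] = 1.0 + 0.1 * df["expected_los"]
    df["expected_labs"] = df["expected_rate"] * df["patient_days"]
    by_resident = df.groupby("resident").agg(observed=("labs_ordered", "sum"),
                                             expected=("expected_labs", "sum"))
    # A ratio above 1 suggests ordering above the complexity-adjusted expectation.
    by_resident["o_to_e"] = by_resident["observed"] / by_resident["expected"]
    return by_resident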

Because residents are expected to be responsive to feedback, their use of the dashboards may represent an upper bound on physician responsiveness to social comparison feedback regarding utilization. However, e-mails alone may not be an effective way to provide feedback in areas that require additional engagement by the learner, especially given the volume of e-mails and alerts physicians receive. Future efforts to improve care efficiency may try to better capture baseline ordering rates, follow resident ordering over a longer period of time, encourage hospital staff to review utilization information with trainees, integrate dashboard information into regular performance reviews by the attendings, and provide more concrete feedback from attendings or senior residents on how this information can be used to adjust behavior.


Disclosure

Dr. Ryskina’s work on this study was supported by the Ruth L. Kirschstein National Research Service Award (T32-HP10026) and the NIA Career Development Award (K08AG052572). Dr. Patel reports board membership on the advisory board of and owning stock/stock options for Healthmine Services, and serving as a consultant and owning stock/stock options for Catalyst Health LLC. The authors declare no conflict of interest.

References

1. Clough JD, McClellan M. Implementing MACRA: implications for physicians and for physician leadership. JAMA. 2016;315(22):2397-2398.
2. The Internal Medicine Subspecialty Milestones Project. A joint initiative of the Accreditation Council for Graduate Medical Education and the American Board of Internal Medicine. http://www.acgme.org/portals/0/pdfs/milestones/internalmedicinesubspecialtymilestoint.pdf. Accessed July 6, 2016.
3. Meeker D, Linder JA, Fox CR, et al. Effect of behavioral interventions on inappropriate antibiotic prescribing among primary care practices: a randomized clinical trial. JAMA. 2016;315(6):562-570.
4. Jamtvedt G, Young JM, Kristoffersen DT, O’Brien MA, Oxman AD. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2006;2(2):CD000259.
5. Navathe AS, Emanuel EJ. Physician peer comparisons as a nonfinancial strategy to improve the value of care. JAMA. 2016;316(17):1759-1760.
6. Miyakis S, Karamanof G, Liontos M, Mountokalakis TD. Factors contributing to inappropriate ordering of tests in an academic medical department and the effect of an educational feedback strategy. Postgrad Med J. 2006;82(974):823-829.
7. Glaser B, Strauss A. The Discovery of Grounded Theory. London: Weidenfeld and Nicholson; 1967.
8. Ryskina K, Dine J, Gitelman Y, et al. Effect of norms on laboratory and imaging testing (ENLITen): a randomized controlled trial. Abstract presented at: Society of General Internal Medicine Conference; April 20, 2017; Washington, DC.
9. Sedrak MS, Patel MS, Ziemba JB, et al. Residents’ self-report on why they order perceived unnecessary inpatient laboratory tests. J Hosp Med. 2016;11(12):869-872.
10. Tartaglia KM, Kman N, Ledford C. Medical student perceptions of cost-conscious care in an internal medicine clerkship: a thematic analysis. J Gen Intern Med. 2015;30(10):1491-1496.
11. Iams W, Heck J, Kapp M, et al. A multidisciplinary housestaff-led initiative to safely reduce daily laboratory testing. Acad Med. 2016;91(6):813-820.
12. Corson AH, Fan VS, White T, et al. A multifaceted hospitalist quality improvement intervention: decreased frequency of common labs. J Hosp Med. 2015;10:390-395.
13. Yarbrough P, Kukhareva P, Horton D, Edholm K, Kawamoto K. Multifaceted intervention including education, rounding checklist implementation, cost feedback, and financial incentives reduces inpatient laboratory costs. J Hosp Med. 2016;11(5):348-354.
14. Feldman LS, Shihab HM, Thiemann D, et al. Impact of providing fee data on laboratory test ordering: a controlled clinical trial. JAMA Intern Med. 2013;173(10):903-908.

Issue
Journal of Hospital Medicine 12 (9)
Page Number
743-746
Article Source

© 2017 Society of Hospital Medicine

Correspondence Location
Kira Ryskina, MD, 12-30 12th Floor, Blockley Hall, 423 Guardian Drive, Philadelphia, PA 19104; Telephone: 215-898-3935; E-mail: ryskina@mail.med.upenn.edu