An evaluation of physician predictions of discharge on a general medicine service

Joel C. Boggan, MD, MPH
Department of Medicine, Duke University Medical Center
Department of Medicine, Durham Veterans Affairs Medical Center, Durham, North Carolina

Hospital discharge planning is a complex process requiring efficient coordination of many different medical and social support services. For this reason, multidisciplinary teams work together to develop individualized discharge plans in an attempt to reduce preventable adverse events related to hospital discharge.[1, 2, 3, 4, 5] Despite these ongoing efforts, optimal discharge strategies have yet to be realized.[1, 4, 5, 6, 7, 8, 9]

One factor that may improve the discharge process is the early identification of patients who are approaching discharge.[10] Multidisciplinary teams cannot fully deploy comprehensive discharge plans until a physician deems a patient to be approaching discharge readiness.[8]

To our knowledge, no studies have examined the performance of physician predictions of upcoming discharge. Instead, prior studies have found that physicians have difficulty predicting the length of stay for patients seen in the emergency room and for elderly patients newly admitted to a general medicine floor.[11, 12] The purpose of this study was to evaluate the ability of inpatient general medicine physicians to predict next-day or same-day hospital discharges to help inform the timing of discharge planning.

METHODS

We separately collected daily in-person predictions from all senior residents and attendings on the inpatient general medicine teams (5 resident/attending services and 4 attending-only services) at a single 950-bed academic medical center. We asked these physicians to predict whether each patient under their care had a greater than or equal to 80% chance of being discharged the next day, the same day, or neither (ie, no discharge on the next or same day).

Physician predictions of discharge occurred Monday through Friday at 1 of 3 time points: morning (7-9 am), midday (12-2 pm), or afternoon (5-7 pm). Data collection focused on 1 time point per week during 2 different weeks in November 2013 and 1 week in February 2014. Predictions of same-day discharge could only be made at the morning and midday time points. Each patient could have multiple predictions if they remained hospitalized during subsequent assessments. For each physician making a prediction, we recorded the physician training level (resident or attending).

This protocol was deemed exempt by our university's institutional review board.

Outcomes

We measured the sensitivity (SN), specificity (SP), positive predictive value (PPV), and negative predictive value (NPV) for each type of physician prediction (next day, same day, or no discharge by the end of the next day). We also calculated these measurements for each time point in the time of day subgroup: morning, midday, and afternoon.
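These metrics follow directly from a 2x2 table of predictions against actual discharges. The sketch below is a minimal illustration of the definitions in Python; the function name and counts are hypothetical and are not taken from the study data.

```python
# Minimal sketch: SN, SP, PPV, and NPV from 2x2 counts of predictions vs actual discharges.
def diagnostic_metrics(tp, fp, fn, tn):
    """tp = predicted and actually discharged, fp = predicted but not discharged,
    fn = not predicted but discharged, tn = not predicted and not discharged."""
    return {
        "sensitivity": tp / (tp + fn),  # share of actual discharges that were predicted
        "specificity": tn / (tn + fp),  # share of non-discharges correctly not predicted
        "ppv": tp / (tp + fp),          # share of positive predictions that were correct
        "npv": tn / (tn + fn),          # share of negative predictions that were correct
    }

# Hypothetical counts for one prediction type.
print(diagnostic_metrics(tp=96, fp=92, fn=104, tn=790))
```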

Statistical Analyses

Point estimates and 95% confidence intervals for SN, SP, PPV, and NPV, computed using a normal approximation to the binomial distribution, are reported for all patients and for the time-of-day subgroups. The Cochran-Armitage trend test was used to examine trends in SN, SP, PPV, and NPV as time to discharge decreased. No adjustments were made for multiple comparisons. A 2-sided significance level of 0.05 was prespecified for all tests.
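For readers reproducing these calculations outside SAS, the sketch below shows a normal-approximation (Wald) confidence interval for a proportion and one standard formulation of the Cochran-Armitage trend test for a 2 x k table of counts across ordered time points. The function names and example counts are illustrative assumptions, not the study's code or data.

```python
import numpy as np
from scipy.stats import norm

def wald_ci(successes, n, alpha=0.05):
    """Normal-approximation (Wald) confidence interval for a binomial proportion."""
    p = successes / n
    z = norm.ppf(1 - alpha / 2)
    half = z * np.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

def cochran_armitage(successes, totals, scores=None):
    """Cochran-Armitage test for trend in proportions across ordered groups.

    successes[j] and totals[j] are the counts in group j; scores default to 0, 1, 2, ...
    Returns the signed Z statistic and the 1-sided p-value for an increasing trend.
    """
    successes = np.asarray(successes, dtype=float)
    totals = np.asarray(totals, dtype=float)
    scores = np.arange(len(totals)) if scores is None else np.asarray(scores, dtype=float)

    n = totals.sum()
    p_bar = successes.sum() / n                  # pooled proportion
    x_bar = (scores * totals).sum() / n          # mean score
    sxy = (scores * successes).sum() - x_bar * successes.sum()
    sxx = (scores ** 2 * totals).sum() - n * x_bar ** 2
    z = sxy / np.sqrt(p_bar * (1 - p_bar) * sxx)
    return z, norm.sf(z)                         # 1-sided p for an increasing trend

# Illustrative example: correct positive predictions out of all positive predictions
# at three ordered time points (hypothetical counts).
hits, preds = [20, 40, 60], [60, 80, 90]
for h, m in zip(hits, preds):
    print(wald_ci(h, m))
print(cochran_armitage(hits, preds))
```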

For the subset of patients who had discharge predictions made by both a resident and an attending, agreement was examined using the kappa statistic. All analyses were conducted using SAS version 9.3 (SAS Institute, Cary, NC).
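Cohen's kappa compares observed agreement with the agreement expected by chance. A minimal sketch follows, using hypothetical paired resident/attending prediction labels rather than the study data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels (paired lists)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical paired predictions for the same patients at the same time point.
resident  = ["next_day", "none", "same_day", "none", "next_day"]
attending = ["next_day", "none", "none",     "none", "next_day"]
print(cohens_kappa(resident, attending))  # 1.0 = perfect agreement, 0 = chance-level
```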

RESULTS

A total of 2660 predictions were made by 24 attendings and 15 residents. Nineteen predictions were excluded because of missing prediction type or date of discharge, leaving 2641 predictions for analysis. Table 1 summarizes the total number of predictions within subgroups.

Table 1. Summary of Predictions

                               No. of Predictions
All predictions                2,641
Day of the week
  Monday                       596
  Tuesday                      503
  Wednesday                    525
  Thursday                     551
  Friday                       466
Physician training level
  Resident                     871
  Attending                    1,770
Time of day
  Morning (7 am-9 am)          906
  Midday (12 pm-2 pm)          832
  Afternoon (5 pm-7 pm)        903

The overall daily discharge rate in our population was 22.3% (see Supporting Table 1 in the online version of this article for the raw values). The SN and PPV of physician predictions of next‐day discharge were 48% (95% confidence interval [CI]: 43%‐52%) and 51% (95% CI: 46%‐56%), respectively. The SN and PPV for same‐day discharge predictions were 73% (95% CI: 68%‐78%) and 69% (95% CI: 64%‐73%), respectively. The SP for next and same‐day discharge predictions was 90% (95% CI: 89%‐91%) and 95% (95% CI: 94%‐96%), whereas the NPV was 89% (95% CI: 88%‐90%) and 96% (95% CI: 95%‐97%), respectively.

Outcome measures for each prediction type are stratified by time of day and summarized in Table 2. For next-day discharge predictions, the SN and PPV were lowest in the morning (SN 27%, PPV 33%) and peaked by the afternoon (SN 67%, PPV 69%). Similarly, for same-day discharges, SN and PPV were highest later in the day (midday SN 88%, PPV 79%). This trend is also demonstrated in the SP and NPV, which increased as the time of actual discharge approached, although the trends are not as pronounced as for SN and PPV.

Table 2. Results by Discharge Prediction Type and Time of Day

Next-Day Discharge Predictions
Validity Measure   Morning            Midday             Afternoon          Trend P Value
Sensitivity        0.27 (0.21-0.35)   0.50 (0.41-0.59)   0.67 (0.59-0.74)   <0.001
Specificity        0.87 (0.85-0.90)   0.90 (0.88-0.92)   0.93 (0.91-0.95)   <0.001
PPV                0.33 (0.25-0.41)   0.48 (0.40-0.57)   0.69 (0.61-0.76)   <0.001
NPV                0.84 (0.81-0.87)   0.91 (0.88-0.93)   0.93 (0.91-0.94)   <0.001

Same-Day Discharge Predictions
Validity Measure   Morning            Midday             Trend P Value
Sensitivity        0.66 (0.59-0.73)   0.88 (0.81-0.93)   <0.001
Specificity        0.88 (0.85-0.90)   0.95 (0.93-0.97)   <0.001
PPV                0.62 (0.55-0.68)   0.79 (0.71-0.85)   <0.001
NPV                0.90 (0.88-0.92)   0.98 (0.96-0.99)   <0.001

NOTE: Data are reported as proportion (95% confidence interval). Same-day predictions were made only at the morning and midday time points. A significant 1-sided Cochran-Armitage trend test P value indicates that the validity measure increases as time progresses.

Agreement between resident and attending predictions yielded kappa values of 0.51 (P < 0.001) for next-day predictions and 0.73 (P < 0.001) for same-day predictions, indicating moderate and substantial agreement, respectively (see Supporting Table 2 in the online version of this article).[13]

DISCUSSION

This is the first study, to our knowledge, to examine the ability of physicians to predict upcoming discharge during the course of routine general medicine inpatient care. We found that although physicians are poor predictors of discharge in the morning prior to the day of expected discharge, their ability to correctly predict inpatient discharges showed continual improvement as the difference between the prediction time and time of actual discharge shortened.

For next-day predictions, the most accurate time point was the afternoon, when physicians correctly predicted more than two-thirds of actual next-day discharges. This finding suggests that physicians can provide meaningful discharge estimates as early as the afternoon prior to expected discharge. This may be an optimal time for physicians to meet with multidisciplinary discharge teams, as many preparations hinge on timely and accurate predictions of discharge (eg, arranging patient transportation, postdischarge visits by a home health company). Multidisciplinary teams may also be reassured that an afternoon prediction of next-day discharge would prematurely activate discharge resources in only roughly 3 of every 10 occurrences. Even in these instances, patients may benefit from the extra time for disease counseling, medication teaching, and arrangement of home services.[4, 5, 6, 7, 8, 9]

This investigation has several limitations. Our study was conducted at a large tertiary care center over brief time periods, with an overall discharge rate of about 1 in 5 patients per day. Thus, the results may not be generalizable to hospitals with different patient populations, volume, or turnover, or when predictions are made at different times throughout the year. Furthermore, we were unable to determine if the outcome measures were affected by prolonged lengths of stay or by repeated predictions on relatively few patients. However, we sought to mitigate these constraints by surveying many different respondents with varying experience levels, caring for a heterogeneous patient population, at nonconsecutive time points during the year. A review of our hospital's administrative data suggests that bed occupancy and average length of stay during our surveys were similar to those at most other time points during the year, and therefore representative of a typical inpatient general medicine service.

Ours was a novel investigation into the performance of physician discharge predictions, which are made daily, either explicitly or implicitly, by physicians caring for patients on a general medicine ward. Because it relies on a simple, subjective survey rather than cumbersome calculations, this approach closely mirrors real-world practice patterns and, if further validated, could be easily assimilated into the normal workflow of a wide range of busy clinicians to more effectively activate individualized discharge plans.[1, 2, 3, 4, 5]

Future work could capture additional patient information, such as functional status, diagnosis, and current length of stay, which would allow identification of subsets of patients in whom physicians are more or less accurate in predicting hospital discharge. Additionally, the outcomes of incorrect predictions, particularly surprise discharges in which patients left despite being predicted to stay, could be assessed. If patients were discharged prematurely, this may be reflected by a higher 30-day readmission rate, lower clinic follow-up rate, and/or lower patient satisfaction scores.

CONCLUSION

Although physicians are poor predictors of discharge in the morning prior to the day of expected discharge, their ability to correctly predict inpatient discharges steadily improves as the interval between the prediction time and the time of actual discharge shortens. It remains to be determined if systematic incorporation of physician discharge predictions into standard workflows will improve the effectiveness of transition of care interventions.

Disclosure: Nothing to report.

References
  1. Brock J, Mitchell J, Irby K, et al. Care transitions project team: Association between quality improvement for care transition in communities and rehospitalizations among Medicare beneficiaries. JAMA. 2013;309(4):381-391.
  2. Coleman EA, Parry C, Chalmers S, Min SJ. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166:1822-1828.
  3. Naylor MD, Brooten D, Campbell R, et al. Comprehensive discharge planning and home follow-up of hospitalized elders: a randomized clinical trial. JAMA. 1999;281:613-620.
  4. Rennke S, Nguyen OK, Shoeb MH, et al. Hospital-initiated transitional care interventions as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158:433-440.
  5. Shepperd S, McClaren J, Phillips CO, et al. Discharge planning from hospital to home. Cochrane Database Syst Rev. 2013;1:CD000313.
  6. Carey MR, Sheth H, Braithwaite SR. A prospective study of reasons for prolonged hospitalizations on a general medicine teaching service. J Gen Intern Med. 2005;20:108-115.
  7. Selker HP, Beshansky JR, Pauker SG, Kassirer JP. The epidemiology of delays in a teaching hospital. Med Care. 1989;27:112-129.
  8. Soong C, Daub S, Lee J, et al. Development of a checklist of safe discharge practices for hospital patients. J Hosp Med. 2013;8:444-449.
  9. Kwan JL, Lo L, Sampson M, Shojania KG. Medication reconciliation during transition of care as a patient safety strategy. Ann Intern Med. 2013;158:397-403.
  10. Webber-Maybank M, Luton H. Making effective use of predicted discharge dates to reduce the length of stay in hospital. Nurs Times. 2009;105(15):12-13.
  11. Asberg KH. Physicians' outcome predictions for elderly patients: Survival, hospital discharge, and length of stay in a department of internal medicine. Scand J Soc Med. 1986;14(3):127-132.
  12. Mak G, Grant WD, McKenzie JC, McCabe JB. Physicians' ability to predict hospital length of stay for patients admitted to the hospital from the emergency department. Emerg Med Int. 2012;2012:824674.
  13. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37(5):360-363.
Article Source
Journal of Hospital Medicine. 2015;10(12):808-810. © 2015 Society of Hospital Medicine.
Correspondence
Address for correspondence and reprint requests: Jonathan Bae, MD, Duke University Medical Center, Box 100800, Durham, NC 27710; Telephone: 919‐681‐8263; Fax: 919‐668‐5394; E‐mail: jon.bae@dm.duke.edu

Reductions in telemetry order duration do not reduce telemetry utilization

The Society of Hospital Medicine's Adult Choosing Wisely measures include not ordering "continuous telemetry monitoring outside of the ICU [intensive care unit] without using a protocol that governs continuation."[1] Current guidelines for the use of cardiac monitoring recommend minimum durations for all adult class I and most class II indications.[2] However, telemetry orders often fail to include timing or criteria for discontinuation. We determined the impact of a reduction in telemetry order duration within our hospital, hypothesizing that this reduction would lead to earlier reassessment of telemetry need and therefore decrease overall utilization.

METHODS

Setting

Durham Veterans Affairs Medical Center (DVAMC) is a 151‐bed tertiary care hospital within Veterans Affairs (VA) Integrated Services Network Region 6 (VISN 6) serving as the primary VA hospital for >54,000 patients and a referral hospital for VISN 6. Twenty‐five telemetry units are available for use on 2 wards with 48 potential telemetry beds. All nonintensive care wards contain general medical and surgical patients, without a primary inpatient cardiology service. Most orders are written by housestaff supervised by attending physicians.

Intervention

Prior to our intervention, the maximum allowable duration of telemetry orders was 72 hours. The duration was enforced by nursing staff automatically discontinuing telemetry not renewed within 72 hours. For our intervention, we reduced the duration of telemetry within our electronic ordering system in November 2013 so that orders had to be renewed within 48 hours or they were discontinued. No education regarding appropriate telemetry use was provided. This intervention was created as a quality‐improvement (QI) project affecting all telemetry use within DVAMC and was exempt from institutional review board review.

Outcomes

Outcomes included the mean number of telemetry orders per week, mean duration of telemetry orders, mean duration of telemetry per episode, and the ratio of time on telemetry relative to the total length of stay. As a balancing measure, we examined rates of rapid response and code blue events. All measures were compared for 12 weeks before and 16 weeks after the intervention. Telemetry orders and durations were obtained using the Corporate Data Warehouse.
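As a rough illustration of how these utilization measures can be derived from order-level data, the sketch below computes hours per order, telemetry hours per hospitalization, and the share of each stay spent on telemetry. The table and column names are hypothetical and do not reflect the actual Corporate Data Warehouse schema.

```python
import pandas as pd

# Hypothetical order-level extract (column names are illustrative only).
orders = pd.DataFrame({
    "hospitalization_id": [1, 1, 2, 3],
    "order_start": pd.to_datetime(["2013-10-01 08:00", "2013-10-03 09:00",
                                   "2013-10-02 12:00", "2013-10-05 07:00"]),
    "order_end":   pd.to_datetime(["2013-10-03 08:00", "2013-10-04 10:00",
                                   "2013-10-02 20:00", "2013-10-07 07:00"]),
})
admissions = pd.DataFrame({
    "hospitalization_id": [1, 2, 3],
    "admit":     pd.to_datetime(["2013-09-30 12:00", "2013-10-02 10:00", "2013-10-04 18:00"]),
    "discharge": pd.to_datetime(["2013-10-05 12:00", "2013-10-03 10:00", "2013-10-08 18:00"]),
})

# Duration of each telemetry order, in hours.
orders["order_hours"] = (orders["order_end"] - orders["order_start"]).dt.total_seconds() / 3600
per_patient = orders.groupby("hospitalization_id")["order_hours"].sum().rename("telemetry_hours")

# Telemetry hours and percentage of the length of stay on telemetry, per hospitalization.
summary = admissions.set_index("hospitalization_id").join(per_patient)
summary["los_hours"] = (summary["discharge"] - summary["admit"]).dt.total_seconds() / 3600
summary["pct_of_stay_on_telemetry"] = 100 * summary["telemetry_hours"] / summary["los_hours"]

print(orders["order_hours"].mean())  # mean duration per order
print(summary[["telemetry_hours", "pct_of_stay_on_telemetry"]])
```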

Analysis

All outcome measures were continuous variables and were compared using the Student t test in Stata version 9.2 (StataCorp, College Station, TX).
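A minimal sketch of this comparison for weekly order counts is shown below, using a Student t test with equal variances as implied by the stated method; the weekly counts are hypothetical, not the study data.

```python
from scipy import stats

# Hypothetical weekly telemetry order counts for the pre (12 weeks) and post (16 weeks) periods.
pre_weeks  = [72, 81, 90, 78, 85, 69, 74, 88, 77, 80, 83, 76]
post_weeks = [95, 88, 102, 110, 79, 93, 99, 85, 120, 97, 90, 104, 87, 101, 96, 93]

t_stat, p_value = stats.ttest_ind(pre_weeks, post_weeks, equal_var=True)
print(f"t = {t_stat:.2f}, two-sided p = {p_value:.3f}")
```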

RESULTS

Following the intervention, overall order duration decreased by 33%, from 66.6 ± 8.3 hours to 44.5 ± 2.3 hours per order (P<0.01), mirroring the reduction in the maximum telemetry order duration from 72 to 48 hours (Table 1). However, an increase in telemetry order frequency after the intervention resulted in no significant change in telemetry duration per episode or the proportion of the hospitalization on telemetry (59.3 vs 56.3 hours per patient, P=0.43; and 66.4% vs 66.2% of hospitalization, P=0.58). Rapid response and code blue events did not differ significantly relative to the intervention (2.8 events per week before and 3.1 events per week after, P=0.63).

Table 1. Telemetry Utilization Before and After the Quality Improvement Intervention

                                                        Before Intervention   After Intervention   P Value
No. of hospitalizations with telemetry ordered          557                   684                  NA
No. of telemetry orders                                 952                   1515                 NA
Average no. of orders per week (SD)                     79.3 (9.2)            94.7 (25.9)          0.06
Hours of telemetry per order (SD)                       66.6 (8.3)            44.5 (2.3)           <0.01
Duration of telemetry per patient, h                    59.3                  56.3                 0.43
% of hospitalizations receiving telemetry per patient   66.4%                 66.2%                0.90
RRT/code blue events per week                           2.8                   3.1                  0.63

NOTE: Abbreviations: NA, not applicable; RRT, rapid response team; SD, standard deviation.

DISCUSSION

Overall, telemetry utilization was unchanged in spite of an intervention successfully reducing telemetry order duration. Providers responded to this decreased order duration by increasing renewal orders, leaving the amount of time patients spent on telemetry unchanged.

Little primary evidence underlies the American Heart Association recommendations for duration of telemetry in general ward patients.[2] The existing literature documents the timing of arrhythmias after cardiac surgery or myocardial infarction and is therefore of limited use in guiding patient care outside intensive care unit settings.[3, 4] As such, hospitalists and inpatient providers have little data directing additional telemetry decisions for these patients, and none for patients requiring telemetry for other indications.

As interventions focusing solely on telemetry duration may not lead to changes in usage patterns, reducing telemetry utilization may require active stewardship. For example, explicit justification may be needed for renewal of telemetry orders. Similarly, education on appropriate telemetry indications in tandem with electronic ordering changes may be more likely to change behavior. Alternatively, incorporating data identifying chest pain patients at very low risk of developing arrhythmias or cardiac complications, based on published risk scores at the time of ordering, may lead to better decision making in initiating telemetry.[5, 6]

This QI project had several limitations. First, the intervention occurred in a facility with a preexisting telemetry order duration limit. In hospitals without a current duration limitation, some reduction in overall telemetry utilization may be possible. Second, this project was a nonrandom before/after study and potentially subject to bias due to confounding. However, our limited number of telemetry resources, the relatively low number of inpatient teams at our facility, and the inability to target geographic locations for team admissions would have made a cluster-randomized trial impractical. Third, the rationales for telemetry ordering were unknown, as were the drivers of increased ordering after the intervention. A better understanding of these factors could lead to targeted interventions in some settings.

CONCLUSION

In conclusion, a QI initiative reducing telemetry order duration did not reduce overall telemetry utilization but increased the number of telemetry orders written. Interventions incorporating appropriate telemetry indications or event risks may be required to change ordering behaviors.

Disclosure: Nothing to report.

References
  1. Society of Hospital Medicine. Society of Hospital Medicine–adult hospital medicine: five things physicians and patients should question. Available at: http://www.choosingwisely.org/doctor-patient-lists/society-of-hospital-medicine-adult-hospital-medicine. Accessed June 4, 2014.
  2. Drew BJ, Califf RM, Funk M, et al. Practice standards for electrocardiographic monitoring in hospital settings: an American Heart Association scientific statement from the Councils on Cardiovascular Nursing, Clinical Cardiology, and Cardiovascular Disease in the Young: endorsed by the International Society of Computerized Electrocardiology and the American Association of Critical-Care Nurses. Circulation. 2004;110(17):2721-2746.
  3. Creswell LL, Schuessler RB, Rosenbloom M, Cox JL. Hazards of postoperative atrial arrhythmias. Ann Thorac Surg. 1993;56(3):539-549.
  4. Newby LK, Hasselblad V, Armstrong PW, et al. Time-based risk assessment after myocardial infarction. Implications for timing of discharge and applications to medical decision-making. Eur Heart J. 2003;24(2):182-189.
  5. Durairaj L, Reilly B, Das K, et al. Emergency department admissions to inpatient cardiac telemetry beds: a prospective cohort study of risk stratification and outcomes. Am J Med. 2001;110(1):7-11.
  6. Hollander JE, Sites FD, Pollack CV, Shofer FS. Lack of utility of telemetry monitoring for identification of cardiac death and life-threatening ventricular dysrhythmias in low-risk patients with chest pain. Ann Emerg Med. 2004;43(1):71-76.
Article Source
Journal of Hospital Medicine. 2014;9(12):795-796. © 2014 Society of Hospital Medicine.
Correspondence
Address for correspondence and reprint requests: Joel C. Boggan, MD, Hospital Medicine Team (111), VA Medical Center, 508 Fulton St., Durham, NC 27705; Telephone: 919‐286‐6892; Fax: 919‐416‐5938; E‐mail: joel.boggan@duke.edu