The Hospital Readmissions Reduction Program: Inconvenient Observations

Centers for Medicare and Medicaid Services (CMS)–promulgated quality metrics continue to attract critics. Physicians decry that many metrics are outside their control, while patient groups are frustrated that metrics lack meaning for beneficiaries. The Hospital Readmissions Reduction Program (HRRP) reduces payments for “excess” 30-day risk-standardized readmissions for six conditions and procedures, and may be less effective in reducing readmissions than previously reported due to intentional and increasing use of hospital observation stays.1

In this issue, Sheehy et al2 report that nearly one in five rehospitalizations went unrecognized because either the index hospitalization or the rehospitalization was an observation stay, highlighting yet another challenge with the HRRP. Limitations of their study include the use of a single year of claims data and the exclusion of Medicare Advantage claims, as one might expect lower readmission rates in this capitated program. Opportunities for improving the HRRP include updating its metric to capture observation stays and, for surgical hospitalizations, extended-stay surgical recovery, wherein patients may be observed for up to 2 days following a procedure. Unfortunately, despite the HRRP missing nearly one in five readmissions, CMS would likely need additional statutory authority from Congress to reinterpret the definition of readmission3 to include observation stays.
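
To make the measurement gap concrete, the following sketch counts 30-day rehospitalizations from a simplified claims table in two ways: an HRRP-style count (inpatient index and inpatient return only) and an expanded count that also includes observation stays. This is our own illustration; the table layout, column names, and values are hypothetical and are not drawn from Sheehy et al.

```python
# Hypothetical illustration of how observation stays hide rehospitalizations
# from an HRRP-style count; column names and data are made up for this sketch.
import pandas as pd

stays = pd.DataFrame(
    [
        ("A", "2020-01-02", "2020-01-05", "inpatient"),
        ("A", "2020-01-20", "2020-01-21", "observation"),  # return stay missed by HRRP
        ("B", "2020-02-01", "2020-02-02", "observation"),  # index stay missed by HRRP
        ("B", "2020-02-15", "2020-02-18", "inpatient"),
        ("C", "2020-03-01", "2020-03-04", "inpatient"),
        ("C", "2020-03-20", "2020-03-23", "inpatient"),    # counted by HRRP
    ],
    columns=["patient", "admit", "discharge", "stay_type"],
)
stays["admit"] = pd.to_datetime(stays["admit"])
stays["discharge"] = pd.to_datetime(stays["discharge"])

def count_30day_returns(df: pd.DataFrame, include_observation: bool) -> int:
    """Count stays that begin within 30 days of a prior stay's discharge."""
    if not include_observation:
        df = df[df["stay_type"] == "inpatient"]
    returns = 0
    for _, grp in df.sort_values("admit").groupby("patient"):
        gaps = grp["admit"] - grp["discharge"].shift()  # time since prior discharge
        returns += ((gaps.dt.days >= 0) & (gaps.dt.days <= 30)).sum()
    return int(returns)

print("HRRP-style (inpatient only):", count_30day_returns(stays, False))  # 1
print("Including observation stays:", count_30day_returns(stays, True))   # 3
```

In this toy table, two of three 30-day returns vanish from the inpatient-only count, which is the blind spot the editorial describes.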

Challenges with the HRRP metrics raise broader concerns about the program. For decades, administrators viewed readmissions as a utilization metric, only to have the Affordable Care Act re-designate and define all-cause readmissions as a quality metric. Yet hospitals and health systems control only some factors driving readmission. Readmissions occur for a variety of reasons, including not only poor quality of initial hospital care and inadequate care coordination, but also factors that are beyond the hospital’s purview, such as lack of access to ambulatory services, multiple and severe chronic conditions that progress or remain unresponsive to intervention,4 and demographic and social factors such as housing instability, health literacy, or residence in a food desert. These non-hospital factors reside within the domain of other market participants or local, state, and federal government agencies.

Challenges to the utility, validity, and appropriateness of HRRP metrics should remind policymakers of the dangers of over-legislating the details of healthcare policy and the statutory inflexibility that can ensue. Clinical care evolves, and artificial constructs—including payment categories such as observation status—may age poorly, as best exemplified by the challenges of accessing post-acute care created by the 3-day rule.5 Introduced as a statutory requirement in 1967, when the average length of stay was 13.8 days and observation care did not exist as a payment category, the 3-day rule requires Medicare beneficiaries to spend 3 days admitted to the hospital in order to qualify for coverage of post-acute care, creating care gaps for patients with observation stays.

Observation care itself is an artificial construct of CMS payment policy. In the Medicare program, observation care falls under Part B, exposing patients to both greater financial responsibility and billing complexity through the engagement of their supplemental insurance, even though those receiving observation care experience the same care as if hospitalized—routine monitoring, nursing care, blood draws, imaging, and diagnostic tests. While CMS requires notification of observation status and explanation of the difference in patient financial responsibility, in clinical practice, patient understanding is limited. Policymakers can support both Medicare beneficiaries and hospitals by reexamining observation care as a payment category.

Sheehy and colleagues’ work simultaneously challenges the face validity of the HRRP and the reasonableness of categorizing some inpatient stays as outpatient care in the hospital—issues that policymakers can and should address.

References

1. Sabbatini AK, Wright B. Excluding observation stays from readmission rates – what quality measures are missing? N Engl J Med. 2018;378(22):2062-2065. https://doi.org/10.1056/NEJMp1800732
2. Sheehy AM, Kaiksow F, Powell WR, et al. The hospital readmissions reduction program’s blind spot: observation hospitalizations. J Hosp Med. 2021;16(7):409-411. https://doi.org/10.12788/jhm.3634
3. The Patient Protection and Affordable Care Act, 42 USC 18001§3025 (2010).
4. Reuben DB, Tinetti ME. The hospital-dependent patient. N Engl J Med. 2014;370(8):694-697. https://doi.org/10.1056/NEJMp1315568
5. Patel N, Slota JM, Miller BJ. The continued conundrum of discharge to a skilled nursing facility after a Medicare observation stay. JAMA Health Forum. 2020;1(5):e200577. https://doi.org/10.1001/jamahealthforum.2020.0577

Author and Disclosure Information

1Division of Hospital Medicine, Department of Medicine, The Johns Hopkins University School of Medicine, Baltimore, Maryland; 2Johns Hopkins Carey Business School, Baltimore, Maryland; 3Johns Hopkins University School of Nursing, Baltimore, Maryland.

Disclosures
Dr Miller formerly served as a Fellow at the Centers for Medicare & Medicaid Services. He reports serving as a member of the CMS Medicare Evidence Development and Coverage Advisory Committee, and receiving fees outside the related work from the Federal Trade Commission, the Health Resources and Services Administration, Radyus Research, Oxidien Pharmaceuticals, and the Heritage Foundation. Ms Deutschendorf and Dr Brotman report no relevant conflicts of interest.

Correspondence
Brian J. Miller, MD, MBA, MPH; Email: brian@brianjmillermd.com; Telephone: 410-614-4474; Twitter: 4_BetterHealth.

Supine-Related Pseudoanemia in Hospitalized Patients

The World Health Organization (WHO) defines anemia as a hemoglobin value less than 12 g/dL in women and less than 13 g/dL in men.1 Hospital-acquired anemia is loosely defined as a normal hemoglobin level on admission that, at its nadir during hospitalization or on discharge, falls below the WHO sex-specific cutoff. Hospital-acquired anemia or significant decreases in hemoglobin are often identified during hospitalization.2-6 Potential causes include blood loss from phlebotomy, occult gastrointestinal bleeding, hemolysis, anemia of inflammation, and hemodilution due to fluid resuscitation. Of these causes, some are dangerous to patients, some are iatrogenic, and some are due to laboratory error.7 Physicians often evaluate decreases in hemoglobin to identify causes that may harm patients, even when the drop could be explained by laboratory error, hemodilution, or the decline in hemoglobin expected with hospitalization.
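
As an illustration only, the WHO cutoffs and the loose definition of hospital-acquired anemia above can be written out directly; the function names and example values in this short Python sketch are ours, not part of any cited study.

```python
# Illustrative sketch of the WHO anemia cutoffs and the loose definition of
# hospital-acquired anemia described above; names and values are hypothetical.

WHO_CUTOFF_G_DL = {"F": 12.0, "M": 13.0}  # WHO: <12 g/dL in women, <13 g/dL in men

def is_anemic(hgb_g_dl: float, sex: str) -> bool:
    """True if the hemoglobin value is below the WHO sex-specific cutoff."""
    return hgb_g_dl < WHO_CUTOFF_G_DL[sex]

def hospital_acquired_anemia(admission_hgb: float, nadir_hgb: float, sex: str) -> bool:
    """Normal hemoglobin on admission, but a nadir below the WHO cutoff."""
    return (not is_anemic(admission_hgb, sex)) and is_anemic(nadir_hgb, sex)

# Example: a man admitted at 13.8 g/dL whose lowest in-hospital value is 12.4 g/dL
print(hospital_acquired_anemia(admission_hgb=13.8, nadir_hgb=12.4, sex="M"))  # True
```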

Jacob et al8 demonstrated the effect of posture on hemoglobin concentrations in healthy volunteers, showing an average 11% relative increase in hemoglobin when going from lying to standing. This increase was attributed to shifts of plasma volume into the vascular space with recumbence, which dilute the hemoglobin concentration. They hypothesized that the initial hemoglobin on admission is measured when patients are upright or recently upright, whereas after admission, patients are more likely to be supine, resulting in lower hemoglobin concentrations. Others have also demonstrated similar effects of patient posture on hemoglobin concentration.9-13 However, these prior results are not readily generalizable to hospitalized patients: the studies enrolled healthy volunteers, and most examined postural changes between the supine and standing positions, whereas blood is rarely obtained from hospitalized patients when they are standing.

The aim of this study was to investigate whether postural changes in hemoglobin can be demonstrated in positions that patients routinely encounter during in-hospital phlebotomy: upright in a chair or recumbent in bed. Patient position, which is not standardized during blood draws, may contribute to lower measured hemoglobin concentrations in some patients, especially sicker individuals who are recumbent more of the time. We hypothesized that going from supine to upright in a chair would produce a relative increase in hemoglobin concentration of 5% to 6%, approximately half the change seen when going from supine to standing.8 To investigate this, we conducted a quasi-experimental study exploring the effect of position (supine or sitting in a chair) on hemoglobin concentrations in medical inpatients.

METHODS

Participants

Patients were enrolled in this single-center study between October 2017 and August 2018. Patients aged 18 years or older who were hospitalized on the general internal medicine wards were screened to determine if they met the following inclusion criteria: hospitalized for <5 days, had blood work scheduled as part of routine care (in order to decrease phlebotomy required by this study), had baseline hemoglobin >8 g/dL, and were able to remain supine without interruption overnight and able to sit in a chair for at least 1 hour the following morning. Patients were excluded from the study if they had a hematologic malignancy, were at risk of >100 mL of blood loss (eg, admitted for gastrointestinal bleeding, planned surgery), had a transfusion requirement, or received intravascular modifiers such as fluid (>100 cc/h) or intravenous diuretics. The Johns Hopkins Institutional Review Board approved this study, and all patients provided written informed consent.

Study Design

Patients enrolled in this quasi-experimental study were asked to remain supine for at least 6 hours overnight. Adherence to the recumbent position was tracked by patient self-report and by corroboration with the patient’s nurse overnight. Any interruptions to supine positioning resulted in exclusion from the study. The following morning, a member of the study team performed phlebotomy while the patient remained supine. Patients were then asked to sit comfortably in a chair for at least 1 hour with their feet on the ground; the blood draw was then repeated. All blood samples were acquired by venipuncture. Prior to each blood draw, a tourniquet was placed over the upper arm below the axilla. An antecubital vein on either arm was visualized under ultrasound guidance, and a 23-G × 3/4” butterfly needle was used for venipuncture. The vials of blood were immediately inverted after blood collection. Hemoglobin assays were processed and analyzed using a Sysmex XN-10 analyzer (Sysmex Corporation). The reference range for hemoglobin in our facility was 12.0 to 15.0 g/dL for women and 13.9 to 16.3 g/dL for men. Laboratory technicians were blinded to and uninvolved in the study.

We determined, a priori, that 33 enrolled patients would provide 80% power (alpha 0.05) to detect an average hemoglobin change of 4.1%, assuming that the standard deviation of the hemoglobin change was twice the mean (ie, SD = 8.2%). The Wilcoxon signed-rank test was used to test the significance of postural hemoglobin changes. Analyses were conducted using JMP Pro 13.0 (SAS) and GraphPad Prism 8 (GraphPad Software). Significance was defined at P < .05 for all analyses.
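
As a rough check, this power calculation can be approximated as a one-sample t-test on the within-patient changes, with an effect size of 4.1%/8.2% = 0.5. The sketch below uses statsmodels for that approximation; it is not the authors' original calculation, and the actual analysis used the Wilcoxon signed-rank test.

```python
# Approximate reproduction of the a priori power calculation using a paired
# (one-sample) t-test on within-patient changes; the study analyzed its data
# with the Wilcoxon signed-rank test, so this is only an approximation.
from statsmodels.stats.power import TTestPower

effect_size = 4.1 / 8.2          # assumed mean change / SD of change = 0.5
analysis = TTestPower()

# Power with 33 patients at a two-sided alpha of 0.05 (comes out near 0.80)
power_n33 = analysis.power(effect_size=effect_size, nobs=33, alpha=0.05,
                           alternative="two-sided")

# Sample size needed for 80% power under the same assumptions (roughly 33-34)
n_needed = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.80,
                                alternative="two-sided")

print(f"power with n=33: {power_n33:.2f}, n for 80% power: {n_needed:.1f}")
```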

RESULTS

Thirty-nine patients were consented and enrolled in the study; four patients were excluded prior to blood draw (two patients because of interruption of supine time, two patients because of refusal in the morning). Of the 35 patients who completed the study, 13 were women (37%); median age was 49 years (range, 25-83 years). Median supine hemoglobin concentration in our sample was 11.7 g/dL (range, 9.3-18.1 g/dL), and median baseline creatinine level was 0.70 mg/dL (range, 0.5-2.5 mg/dL). Median supine hemoglobin levels were 11.7 g/dL (range, 9.6-13.2 g/dL) in women and 11.8 g/dL (range, 9.3-18.1 g/dL) in men. In aggregate, patients had a median increase in hemoglobin concentration of 0.60 g/dL (range, –0.6 to 1.4 g/dL) with sitting, a 5.2% (range, –4.5% to 15.1%) relative change (P < .001) (Figure 1).

Women had a median increase in hemoglobin concentration of 0.60 g/dL (range, –0.6 to 1.4 g/dL) with sitting, a relative change of 5.3% (range, –4.5% to 12.0%) (P = .02). Men had a median increase in hemoglobin concentration of 0.55 g/dL (range, –0.1 to 1.4 g/dL) with sitting, a 5.0% (range, –0.6% to 15.1%) relative change (P < .001). Ten of 35 participants (29%) exhibited an increase in hemoglobin level of 1.0 g/dL or more (Figure 2).
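
In principle, the paired comparison reported above can be reproduced from raw supine and seated measurements. The sketch below shows the general approach with made-up hemoglobin values, using SciPy's Wilcoxon signed-rank test in place of the JMP and GraphPad Prism analyses actually used.

```python
# General approach for the paired analysis reported above, with hypothetical
# hemoglobin values (g/dL); this SciPy-based sketch is illustrative only.
import numpy as np
from scipy.stats import wilcoxon

supine = np.array([11.7, 10.9, 12.4, 9.6, 13.2, 11.1, 12.8, 10.4])
seated = np.array([12.3, 11.4, 13.3, 9.5, 13.9, 11.8, 13.4, 11.0])

abs_change = seated - supine                    # change in g/dL
rel_change = 100 * abs_change / supine          # percent relative change

stat, p_value = wilcoxon(seated, supine)        # paired signed-rank test

print(f"median change: {np.median(abs_change):.2f} g/dL")
print(f"median relative change: {np.median(rel_change):.1f}%")
print(f"Wilcoxon signed-rank P = {p_value:.3f}")
```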

DISCUSSION

International blood collection guidelines acknowledge postural changes in laboratory values and recommend standardizing patient position to either sitting in a chair or lying flat in a bed, without changes in position for 15 minutes prior to the blood draw.14 When these positional accommodations cannot be met, documenting positional disruptions is recommended so that laboratory values can be interpreted accordingly. To the best of our knowledge, no hospital in the United States has standardized patient position as part of its phlebotomy procedures such that patient position is documented and made available to interpreting providers.

Relative increases in hemoglobin or hematocrit range from 7% to 12% when patients go from supine to standing.8,9,11 The reverse relationship has also been shown: moving from upright to supine results in decreases in hemoglobin concentration.10,13 We found that going from supine to a seated position resulted in a significant median increase in hemoglobin of 0.6 g/dL, with an increase of 1.0 g/dL or more in 29% of patients. Although four of the 35 patients experienced either no change or a slight decrease in hemoglobin when going from supine to upright, and the effect was not uniform, providers should be aware that patient position can contribute to changes in hemoglobin concentration in the hospital setting. Providers may be able to use this information to avoid an extensive diagnostic workup when anemia is identified in hospitalized patients, although more research is needed to identify patient subsets at higher risk for this effect.

Until hospitals implement protocols that require phlebotomists to report patient position during phlebotomy in a standardized fashion, providers should be alert to the fact that a supine blood draw may yield a hemoglobin level significantly lower than one drawn in a sitting position; in almost one-third of patients, this difference may be 1.0 g/dL or greater.

Given our study criteria requiring a supine position of at least 6 hours and a baseline hemoglobin concentration >8 g/dL, our sample of patients may have been younger and healthier than the average hospitalized patient on general internal medicine wards. Since greater plasma volume shifts and relative hemoglobin changes might be seen in patients with lower baseline hemoglobin and lower baseline plasma protein, this selection bias may lead us to underestimate the effect of position on hemoglobin for the average inpatient population. Additionally, we intentionally obtained the sitting hemoglobin levels after the supine samples to avoid incorrectly attributing falling hemoglobin levels to progressive hospital-acquired anemia from phlebotomy or illness; any concomitant trend of falling hemoglobin levels in our patients would therefore be expected to lead to a systematic underestimation of the positional change we observed. Finally, we did not objectively verify adherence to the supine and upright positions and instead relied on patient self-report, which may contribute to the variable effect of position on hemoglobin concentration, with some patients showing no change or a decrease.

CONCLUSION

Posture can significantly influence hemoglobin levels in hospitalized patients on general medicine wards. Further research can determine whether it would be cost- and time-effective to standardize patient position prior to phlebotomy, or at least to report patient positioning alongside the laboratory results.

References

1. DeMaeyer E, Adiels-Tegman M. The prevalence of anaemia in the world. World Health Stat Q. 1985;38(3):302-316.
2. Martin ND, Scantling D. Hospital-acquired anemia. J Infus Nurs. 2015;38(5):330-338. https://doi.org/10.1097/NAN.0000000000000121
3. Thavendiranathan P, Bagai A, Ebidia A, Detsky AS, Choudhry NK. Do blood tests cause anemia in hospitalized patients? The effect of diagnostic phlebotomy on hemoglobin and hematocrit levels. J Gen Intern Med. 2005;20(6):520-524. https://doi.org/10.1111/j.1525-1497.2005.0094.x
4. Salisbury AC, Reid KJ, Alexander KP, et al. Diagnostic blood loss from phlebotomy and hospital-acquired anemia during acute myocardial infarction. Arch Intern Med. 2011;171(18):1646-1653. https://doi.org/10.1001/archinternmed.2011.361
5. Languasco A, Cazap N, Marciano S, et al. Hemoglobin concentration variations over time in general medical inpatients. J Hosp Med. 2010;5(5):283-288. https://doi.org/10.1002/jhm.650
6. van der Bom JG, Cannegieter SC. Hospital-acquired anemia: the contribution of diagnostic blood loss. J Thromb Haemost. 2015;13(6):1157-1159. https://doi.org/10.1111/jth.12886
7. Berkow L. Factors affecting hemoglobin measurement. J Clin Monit Comput. 2013;27(5):499-508. https://doi.org/10.1007/s10877-013-9456-3
8. Jacob G, Raj SR, Ketch T, et al. Postural pseudoanemia: posture-dependent change in hematocrit. Mayo Clin Proc. 2005;80(5):611-614. https://doi.org/10.4065/80.5.611
9. Fawcett JK, Wynn V. Effects of posture on plasma volume and some blood constituents. J Clin Pathol. 1960;13(4):304-310. https://doi.org/10.1136/jcp.13.4.304
10. Tombridge TL. Effect of posture on hematology results. Am J Clin Pathol. 1968;49(4):491-493. https://doi.org/10.1093/ajcp/49.4.491
11. Hagan RD, Diaz FJ, Horvath SM. Plasma volume changes with movement to supine and standing positions. J Appl Physiol. 1978;45(3):414-417. https://doi.org/10.1152/jappl.1978.45.3.414
12. Maw GJ, Mackenzie IL, Taylor NA. Redistribution of body fluids during postural manipulations. Acta Physiol Scand. 1995;155(2):157-163. https://doi.org/10.1111/j.1748-1716.1995.tb09960.x
13. Lima-Oliveira G, Guidi GC, Salvagno GL, Danese E, Montagnana M, Lippi G. Patient posture for blood collection by venipuncture: recall for standardization after 28 years. Rev Bras Hematol Hemoter. 2017;39(2):127-132. https://doi.org/10.1016/j.bjhh.2017.01.004
14. Simundic AM, Bölenius K, Cadamuro J, et al. Working Group for Preanalytical Phase (WG-PRE), of the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) and Latin American Working Group for Preanalytical Phase (WG-PRE-LATAM) of the Latin America Confederation of Clinical Biochemistry (COLABIOCLI). Joint EFLM-COLABIOCLI recommendation for venous blood sampling. Clin Chem Lab Med. 2018;56(12):2015-2038. https://doi.org/10.1515/cclm-2018-0602

Author and Disclosure Information

1Department of Internal Medicine, Case Western Reserve University School of Medicine, University Hospital Cleveland Medical Center, Cleveland, Ohio; 2Department of Internal Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland; 3Department of Internal Medicine, Saint Joseph’s Medical Center, Towson, Maryland; 4Division of Cardiology, Department of Medicine, University of South Florida, Morsani College of Medicine, Tampa, Florida; 5Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, Maryland; 6Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland.

Disclosures

The authors have no financial relationships or conflicts of interest relevant to this article to disclose.

Issue
Journal of Hospital Medicine 16(4)
Publications
Topics
Page Number
219-222. Published Online First March 17, 2021
Sections
Author and Disclosure Information

1Department of Internal Medicine, Case Western Reserve University School of Medicine, University Hospital Cleveland Medical Center, Cleveland, Ohio; 2Department of Internal Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland; 3Department of Internal Medicine, Saint Joseph’s Medical Center, Towson, Maryland; 4Division of Cardiology, Department of Medicine, University of South Florida, Morsani College of Medicine, Tampa, Florida; 5Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, Maryland; 6Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland.

Disclosures

The authors have no financial relationships or conflicts of interest relevant to this article to disclose.

Author and Disclosure Information

1Department of Internal Medicine, Case Western Reserve University School of Medicine, University Hospital Cleveland Medical Center, Cleveland, Ohio; 2Department of Internal Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland; 3Department of Internal Medicine, Saint Joseph’s Medical Center, Towson, Maryland; 4Division of Cardiology, Department of Medicine, University of South Florida, Morsani College of Medicine, Tampa, Florida; 5Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, Maryland; 6Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland.

Disclosures

The authors have no financial relationships or conflicts of interest relevant to this article to disclose.

Article PDF
Article PDF
Related Articles

The World Health Organization (WHO) defines anemia as a hemoglobin value less than 12 g/dL in women and less than 13 g/dL in men.1 Hospital-acquired anemia is loosely defined as normal hemoglobin levels on admission that, at their nadir during hospitalization or on discharge, are less than WHO sex-defined cutoffs. Hospital-acquired anemia or significant decreases in hemoglobin are often identified during hospitalization.2-6 Potential causes include blood loss from phlebotomy, occult gastrointestinal bleeding, hemolysis, anemia of inflammation, and hemodilution due to fluid resuscitation. Of these causes, some are dangerous to patients, some are iatrogenic, and some are due to laboratory error.7 Physicians often evaluate decreases in hemoglobin, which could otherwise be explained by laboratory error, hemodilution, or expected decrease in hemoglobin due to hospitalization, to identify causes that may lead to potential harm.

Jacob et al8 demonstrated the effect of posture on hemoglobin concentrations in healthy volunteers, showing an average 11% relative increase in hemoglobin when going from lying to standing. This increase was attributed to shifts in plasma volume to the vascular space with recumbence. They hypothesized that the initial hemoglobin on admission is measured when patients are upright or recently upright, whereas after admission, patients are more likely to be supine, resulting in lower hemoglobin concentrations. Others have also demonstrated similar effects of patient posture on hemoglobin concentration.9-13 However, these prior results are not readily generalizable to hospitalized patients. These prior studies enrolled healthy volunteers, and most examined postural changes from the supine and standing positions; blood is rarely obtained from hospitalized patients when they are standing.

The aim of this study was to investigate whether postural changes in hemoglobin can be demonstrated in positions that patients routinely encountered during in-hospital phlebotomy: upright in a chair or recumbent in a bed. Patient position, which is not standardized during blood draws, may contribute to lower measured hemoglobin concentrations in some patients, especially sicker individuals who are recumbent more frequently. We hypothesized that going from supine to upright in a chair would result in a relative increase in hemoglobin concentration of 5% to 6%, approximately half the value of going from supine to standing.8 To investigate this, we conducted a quasi-experimental study exploring the effect of position (supine or sitting in chair) on hemoglobin concentrations in medical inpatients.

METHODS

Participants

Patients were enrolled in this single-center study between October 2017 and August 2018. Patients aged 18 years or older who were hospitalized on the general internal medicine wards were screened to determine if they met the following inclusion criteria: hospitalized for <5 days, had blood work scheduled as part of routine care (in order to decrease phlebotomy required by this study), had baseline hemoglobin >8 g/dL, and were able to remain supine without interruption overnight and able to sit in a chair for at least 1 hour the following morning. Patients were excluded from the study if they had a hematologic malignancy, were at risk of >100 mL of blood loss (eg, admitted for gastrointestinal bleeding, planned surgery), had a transfusion requirement, or received intravascular modifiers such as fluid (>100 cc/h) or intravenous diuretics. The Johns Hopkins Institutional Review Board approved this study, and all patients provided written informed consent.

Study Design

Patients enrolled in this quasi-experimental study were asked to remain supine for at least 6 hours overnight. Adherence to the recumbent position was tracked by patient self-report and by corroboration with the patient’s nurse overnight. Any interruptions to supine positioning resulted in exclusion from the study. The following morning, a member of the study team performed phlebotomy while the patient remained supine. Patients were then asked to sit comfortably in a chair for at least 1 hour with their feet on the ground; the blood draw was then repeated. All blood samples were acquired by venipuncture. Prior to each blood draw, a tourniquet was placed over the upper arm below the axilla. An antecubital vein on either arm was visualized under ultrasound guidance, and a 23-G × 3/4” butterfly needle was used for venipuncture. The vials of blood were immediately inverted after blood collection. Hemoglobin assays were processed and analyzed using Sysmex XN-10 analyzer (Sysmex Corporation). The reference range for hemoglobin in our facility was 12.0 to 15.0 g/dL for women and 13.9 to 16.3 g/dL for men. Laboratory technicians were blinded to and uninvolved in the study.

We determined, a priori, that 33 enrolled patients would provide 80% power (alpha 0.05) to detect an average hemoglobin change of 4.1%, assuming that the standard deviation of the hemoglobin change was twice the mean (ie, SD = 8.2%). The Wilcoxon signed-rank test was used to test the significance of postural hemoglobin changes. Analyses were conducted using JMP Pro 13.0 (SAS) and GraphPad Prism 8 (GraphPad Software). Significance was defined at P < .05 for all analyses.

RESULTS

Thirty-nine patients were consented and enrolled in the study; four patients were excluded prior to blood draw (two patients because of interruption of supine time, two patients because of refusal in the morning). Of the 35 patients who completed the study, 13 were women (37%); median age was 49 years (range, 25-83 years). Median supine hemoglobin concentration in our sample was 11.7 g/dL (range, 9.3-18.1 g/dL), and median baseline creatinine level was 0.70 mg/dL (range, 0.5-2.5 mg/dL). Median supine hemoglobin levels were 11.7 g/dL (range, 9.6-13.2 g/dL) in women and 11.8 g/dL (range, 9.3-18.1 g/dL) in men. In aggregate, patients had a median increase in hemoglobin concentration of 0.60 g/dL (range, –0.6 to 1.4 g/dL) with sitting, a 5.2% (range, –4.5% to 15.1%) relative change (P < .001) (Figure 1).

derakhshan10470317_f1.jpg
Women had a median increase in hemoglobin concentration of 0.60 g/dL (range, –0.6 to 1.4 g/dL) with sitting, a relative change of 5.3% (range, –4.5% to 12.0%) (P = .02). Men had a median increase in hemoglobin concentration of 0.55 g/dL (range, –0.1 to 1.4 g/dL) with sitting, a 5.0% (range, –0.6% to 15.1%) relative change (P < .001). Ten of 35 participants (29%) exhibited an increase in hemoglobin level of 1.0 g/dL or more (Figure 2).
derakhshan10470317_f2.jpg

DISCUSSION

International blood collection guidelines acknowledge postural changes in laboratory values and recommend standardization of patient position to either sitting in a chair or lying flat in a bed, without changes in position for 15 minutes prior to blood draw.14 When these positional accommodations cannot be met, documenting positional disruptions is recommended so that laboratory values can be interpreted accordingly. To the best of our knowledge, no hospital in the United States has standardized patient position as part of phlebotomy procedure such that patient position is documented and can be made available to interpreting providers.

Relative increases in hemoglobin or hematocrit range from 7% to 12% when patients go from supine to standing.8,9,11 The reverse relationship has also been shown, where upright-to-supine position results in decreases in hemoglobin concentrations.10,13 We found that going from supine to a seated position resulted in significant increases in hemoglobin of 0.6 g/dL and in a more than 1 g/dL increase in 29% of the patients. Although four of the 35 patients experienced either no change or a slight decrease in their hemoglobin concentration when going from supine to upright and not all patients saw a uniform effect, providers should be aware that the patient’s position can contribute to changes in hemoglobin concentration in the hospitalized setting. Providers may be able to use this information to avoid an extensive diagnostic workup when anemia is identified in hospitalized patients, although more research is needed to identify patient subsets who are at higher risk for this effect.

Until hospitals implement protocols that require phlebotomists to report patient position during phlebotomy in a standardized fashion, providers should be alert to the fact that supine positioning may result in a hemoglobin level that is significantly lower than that when drawn in a sitting position, and in almost one-third of patients, this difference may be 1.0 g/dL or greater.

Given our study criteria requiring supine positions of at least 6 hours and a baseline hemoglobin concentration >8 g/dL, our sample of patients may have been younger and healthier than the average hospitalized patient on general internal medicine wards. Since greater relative changes in plasma volume shifts and hemoglobin might be seen in patients with lower baseline hemoglobin and lower baseline plasma protein, this selection bias may underestimate the effects of position on hemoglobin changes for the average inpatient population. Additionally, we intentionally sought to obtain sitting hemoglobin levels after the supine samples to avoid the possibility of incorrectly attributing dropping hemoglobin levels to progressive hospital-acquired anemia from phlebotomy or illness. Any concomitant trend of falling hemoglobin levels in our patients would be expected to lead to a systematic underestimation of the positional change in hemoglobin we observed. We did not objectively observe adherence to supine and upright position and instead relied on patient self-reporting, which is one possible contributor to the variable effects of position on hemoglobin concentration, with some patients having no change or decreases in hemoglobin concentrations.

CONCLUSION

Posture can significantly influence hemoglobin levels in hospitalized patients on general medicine wards. Further research can determine whether it would be cost and time effective to standardize patient positions prior to phlebotomy, or at least to report patient positioning with the laboratory testing results.

The World Health Organization (WHO) defines anemia as a hemoglobin value less than 12 g/dL in women and less than 13 g/dL in men.1 Hospital-acquired anemia is loosely defined as normal hemoglobin levels on admission that, at their nadir during hospitalization or on discharge, are less than WHO sex-defined cutoffs. Hospital-acquired anemia or significant decreases in hemoglobin are often identified during hospitalization.2-6 Potential causes include blood loss from phlebotomy, occult gastrointestinal bleeding, hemolysis, anemia of inflammation, and hemodilution due to fluid resuscitation. Of these causes, some are dangerous to patients, some are iatrogenic, and some are due to laboratory error.7 Physicians often evaluate decreases in hemoglobin, which could otherwise be explained by laboratory error, hemodilution, or expected decrease in hemoglobin due to hospitalization, to identify causes that may lead to potential harm.

Jacob et al8 demonstrated the effect of posture on hemoglobin concentrations in healthy volunteers, showing an average 11% relative increase in hemoglobin when going from lying to standing. This increase was attributed to shifts in plasma volume to the vascular space with recumbence. They hypothesized that the initial hemoglobin on admission is measured when patients are upright or recently upright, whereas after admission, patients are more likely to be supine, resulting in lower hemoglobin concentrations. Others have also demonstrated similar effects of patient posture on hemoglobin concentration.9-13 However, these prior results are not readily generalizable to hospitalized patients. These prior studies enrolled healthy volunteers, and most examined postural changes from the supine and standing positions; blood is rarely obtained from hospitalized patients when they are standing.

The aim of this study was to investigate whether postural changes in hemoglobin can be demonstrated in positions that patients routinely encountered during in-hospital phlebotomy: upright in a chair or recumbent in a bed. Patient position, which is not standardized during blood draws, may contribute to lower measured hemoglobin concentrations in some patients, especially sicker individuals who are recumbent more frequently. We hypothesized that going from supine to upright in a chair would result in a relative increase in hemoglobin concentration of 5% to 6%, approximately half the value of going from supine to standing.8 To investigate this, we conducted a quasi-experimental study exploring the effect of position (supine or sitting in chair) on hemoglobin concentrations in medical inpatients.

METHODS

Participants

Patients were enrolled in this single-center study between October 2017 and August 2018. Patients aged 18 years or older who were hospitalized on the general internal medicine wards were screened to determine if they met the following inclusion criteria: hospitalized for <5 days, had blood work scheduled as part of routine care (in order to decrease phlebotomy required by this study), had baseline hemoglobin >8 g/dL, and were able to remain supine without interruption overnight and able to sit in a chair for at least 1 hour the following morning. Patients were excluded from the study if they had a hematologic malignancy, were at risk of >100 mL of blood loss (eg, admitted for gastrointestinal bleeding, planned surgery), had a transfusion requirement, or received intravascular modifiers such as fluid (>100 cc/h) or intravenous diuretics. The Johns Hopkins Institutional Review Board approved this study, and all patients provided written informed consent.

Study Design

Patients enrolled in this quasi-experimental study were asked to remain supine for at least 6 hours overnight. Adherence to the recumbent position was tracked by patient self-report and by corroboration with the patient’s nurse overnight. Any interruptions to supine positioning resulted in exclusion from the study. The following morning, a member of the study team performed phlebotomy while the patient remained supine. Patients were then asked to sit comfortably in a chair for at least 1 hour with their feet on the ground; the blood draw was then repeated. All blood samples were acquired by venipuncture. Prior to each blood draw, a tourniquet was placed over the upper arm below the axilla. An antecubital vein on either arm was visualized under ultrasound guidance, and a 23-G × 3/4” butterfly needle was used for venipuncture. The vials of blood were immediately inverted after blood collection. Hemoglobin assays were processed and analyzed using Sysmex XN-10 analyzer (Sysmex Corporation). The reference range for hemoglobin in our facility was 12.0 to 15.0 g/dL for women and 13.9 to 16.3 g/dL for men. Laboratory technicians were blinded to and uninvolved in the study.

We determined, a priori, that 33 enrolled patients would provide 80% power (alpha 0.05) to detect an average hemoglobin change of 4.1%, assuming that the standard deviation of the hemoglobin change was twice the mean (ie, SD = 8.2%). The Wilcoxon signed-rank test was used to test the significance of postural hemoglobin changes. Analyses were conducted using JMP Pro 13.0 (SAS) and GraphPad Prism 8 (GraphPad Software). Significance was defined at P < .05 for all analyses.

RESULTS

Thirty-nine patients were consented and enrolled in the study; four patients were excluded prior to blood draw (two patients because of interruption of supine time, two patients because of refusal in the morning). Of the 35 patients who completed the study, 13 were women (37%); median age was 49 years (range, 25-83 years). Median supine hemoglobin concentration in our sample was 11.7 g/dL (range, 9.3-18.1 g/dL), and median baseline creatinine level was 0.70 mg/dL (range, 0.5-2.5 mg/dL). Median supine hemoglobin levels were 11.7 g/dL (range, 9.6-13.2 g/dL) in women and 11.8 g/dL (range, 9.3-18.1 g/dL) in men. In aggregate, patients had a median increase in hemoglobin concentration of 0.60 g/dL (range, –0.6 to 1.4 g/dL) with sitting, a 5.2% (range, –4.5% to 15.1%) relative change (P < .001) (Figure 1).

derakhshan10470317_f1.jpg
Women had a median increase in hemoglobin concentration of 0.60 g/dL (range, –0.6 to 1.4 g/dL) with sitting, a relative change of 5.3% (range, –4.5% to 12.0%) (P = .02). Men had a median increase in hemoglobin concentration of 0.55 g/dL (range, –0.1 to 1.4 g/dL) with sitting, a 5.0% (range, –0.6% to 15.1%) relative change (P < .001). Ten of 35 participants (29%) exhibited an increase in hemoglobin level of 1.0 g/dL or more (Figure 2).
derakhshan10470317_f2.jpg

DISCUSSION

International blood collection guidelines acknowledge postural changes in laboratory values and recommend standardization of patient position to either sitting in a chair or lying flat in a bed, without changes in position for 15 minutes prior to blood draw.14 When these positional accommodations cannot be met, documenting positional disruptions is recommended so that laboratory values can be interpreted accordingly. To the best of our knowledge, no hospital in the United States has standardized patient position as part of phlebotomy procedure such that patient position is documented and can be made available to interpreting providers.

Relative increases in hemoglobin or hematocrit range from 7% to 12% when patients go from supine to standing.8,9,11 The reverse relationship has also been shown, where upright-to-supine position results in decreases in hemoglobin concentrations.10,13 We found that going from supine to a seated position resulted in significant increases in hemoglobin of 0.6 g/dL and in a more than 1 g/dL increase in 29% of the patients. Although four of the 35 patients experienced either no change or a slight decrease in their hemoglobin concentration when going from supine to upright and not all patients saw a uniform effect, providers should be aware that the patient’s position can contribute to changes in hemoglobin concentration in the hospitalized setting. Providers may be able to use this information to avoid an extensive diagnostic workup when anemia is identified in hospitalized patients, although more research is needed to identify patient subsets who are at higher risk for this effect.

Until hospitals implement protocols that require phlebotomists to report patient position during phlebotomy in a standardized fashion, providers should be alert to the fact that supine positioning may result in a hemoglobin level that is significantly lower than that when drawn in a sitting position, and in almost one-third of patients, this difference may be 1.0 g/dL or greater.

Given our study criteria requiring supine positions of at least 6 hours and a baseline hemoglobin concentration >8 g/dL, our sample of patients may have been younger and healthier than the average hospitalized patient on general internal medicine wards. Since greater relative changes in plasma volume shifts and hemoglobin might be seen in patients with lower baseline hemoglobin and lower baseline plasma protein, this selection bias may underestimate the effects of position on hemoglobin changes for the average inpatient population. Additionally, we intentionally sought to obtain sitting hemoglobin levels after the supine samples to avoid the possibility of incorrectly attributing dropping hemoglobin levels to progressive hospital-acquired anemia from phlebotomy or illness. Any concomitant trend of falling hemoglobin levels in our patients would be expected to lead to a systematic underestimation of the positional change in hemoglobin we observed. We did not objectively observe adherence to supine and upright position and instead relied on patient self-reporting, which is one possible contributor to the variable effects of position on hemoglobin concentration, with some patients having no change or decreases in hemoglobin concentrations.

CONCLUSION

Posture can significantly influence hemoglobin levels in hospitalized patients on general medicine wards. Further research can determine whether it would be cost and time effective to standardize patient positions prior to phlebotomy, or at least to report patient positioning with the laboratory testing results.

References

1. DeMaeyer E, Adiels-Tegman M. The prevalence of anaemia in the world. World Health Stat Q. 1985;38(3):302-316.
2. Martin ND, Scantling D. Hospital-acquired anemia. J Infus Nurs. 2015;38(5):330-338. https://doi.org/10.1097/NAN.0000000000000121
3. Thavendiranathan P, Bagai A, Ebidia A, Detsky AS, Choudhry NK. Do blood tests cause anemia in hospitalized patients? The effect of diagnostic phlebotomy on hemoglobin and hematocrit levels. J Gen Intern Med. 2005;20(6):520-524. https://doi.org/10.1111/j.1525-1497.2005.0094.x
4. Salisbury AC, Reid KJ, Alexander KP, et al. Diagnostic blood loss from phlebotomy and hospital-acquired anemia during acute myocardial infarction. Arch Intern Med. 2011;171(18):1646-1653. https://doi.org/10.1001/archinternmed.2011.361
5. Languasco A, Cazap N, Marciano S, et al. Hemoglobin concentration variations over time in general medical inpatients. J Hosp Med. 2010;5(5):283-288. https://doi.org/10.1002/jhm.650
6. van der Bom JG, Cannegieter SC. Hospital-acquired anemia: the contribution of diagnostic blood loss. J Thromb Haemost. 2015;13(6):1157-1159. https://doi.org/10.1111/jth.12886
7. Berkow L. Factors affecting hemoglobin measurement. J Clin Monit Comput. 2013;27(5):499-508. https://doi.org/10.1007/s10877-013-9456-3
8. Jacob G, Raj SR, Ketch T, et al. Postural pseudoanemia: posture-dependent change in hematocrit. Mayo Clin Proc. 2005;80(5):611-614. https://doi.org/10.4065/80.5.611
9. Fawcett JK, Wynn V. Effects of posture on plasma volume and some blood constituents. J Clin Pathol. 1960;13(4):304-310. https://doi.org/10.1136/jcp.13.4.304
10. Tombridge TL. Effect of posture on hematology results. Am J ClinPathol. 1968;49(4):491-493. https://doi.org/10.1093/ajcp/49.4.491
11. Hagan RD, Diaz FJ, Horvath SM. Plasma volume changes with movement to supine and standing positions. J Appl Physiol. 1978;45(3):414-417. https://doi.org/10.1152/jappl.1978.45.3.414
12. Maw GJ, Mackenzie IL, Taylor NA. Redistribution of body fluids during postural manipulations. Acta Physiol Scand. 1995;155(2):157-163. https://doi.org/10.1111/j.1748-1716.1995.tb09960.x
13. Lima-Oliveira G, Guidi GC, Salvagno GL, Danese E, Montagnana M, Lippi G. Patient posture for blood collection by venipuncture: recall for standardization after 28 years. Rev Bras Hematol Hemoter. 2017;39(2):127-132. https://doi.org/10.1016/j.bjhh.2017.01.004
14. Simundic AM, Bölenius K, Cadamuro J, et al. Working Group for Preanalytical Phase (WG-PRE), of the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) and Latin American Working Group for Preanalytical Phase (WG-PRE-LATAM) of the Latin America Confederation of Clinical Biochemistry (COLABIOCLI). Joint EFLM-COLABIOCLI recommendation for venous blood sampling. Clin Chem Lab Med. 2018;56(12):2015-2038. https://doi.org/10.1515/cclm-2018-0602

Issue
Journal of Hospital Medicine 16(4)
Page Number
219-222. Published Online First March 17, 2021
Display Headline
Supine-Related Pseudoanemia in Hospitalized Patients
Article Source
© 2021 Society of Hospital Medicine
Correspondence Location
Arsalan Derakhshan, MD; Email: Arsalan.Derakhshan@UHhospitals.org; Twitter: @ArsalanMedEd.

Development of a Simple Index to Measure Overuse of Diagnostic Testing at the Hospital Level Using Administrative Data

Article Type
Changed
Thu, 03/18/2021 - 13:34

There is substantial geographic variation in intensity of healthcare use in the United States,1 yet areas with higher healthcare utilization do not demonstrate superior clinical outcomes.2 Low-value care exposes patients to unnecessary anxiety, radiation, and risk for adverse events.

Previous research has focused on measuring low-value care at the level of hospital referral regions,3-6 metropolitan statistical areas,7 provider organizations,8 and individual physicians.9,10 Hospital referral regions designate regional healthcare markets for tertiary care and generally include at least one major referral center.11 Well-calibrated and validated hospital-level measures of diagnostic overuse are lacking.

We sought to construct a novel index to measure hospital-level overuse of diagnostic testing. We focused on diagnostic intensity rather than other forms of overuse, such as screening or treatment intensity. Moreover, we aimed to create a parsimonious index—one that is simple, relies on a small number of inputs, is derived from readily available administrative data without the need for chart review or complex logic, and does not require exclusion criteria.

METHODS

Conceptual Framework for Choosing Index Components

To create our overuse index, we took advantage of the requirements for International Classification of Diseases, 9th Revision-Clinical Modification (ICD-9-CM) billing codes 780-796; these codes are based on “symptoms, signs, and ill-defined conditions” and can only be listed as the primary discharge diagnosis if no more specific diagnosis is made.12 As such, when coupled with expensive tests, a high prevalence of these symptom-based diagnosis codes at discharge may serve as a proxy for low-value care. One of the candidate metrics we selected was based on Choosing Wisely® recommendations.13 The other candidate metrics were based on clinical experience and consensus of the study team.

Data Sources

We used hospital-level data on primary discharge diagnosis codes and utilization of testing data from the State Inpatient Databases (SID), which are part of the Agency for Healthcare Research and Quality Healthcare Cost and Utilization Project (HCUP). Our derivation cohort used data from acute care hospitals in Maryland, New Jersey, and Washington state. Our validation cohort used data from acute care hospitals in Kentucky, North Carolina, New York, and West Virginia. States were selected based on availability of data (certain states lacked complete testing utilization data) and cost of data acquisition. The SID contains hospital-level utilization of computed tomography (CT) scans (CT of the body and head) and diagnostic testing, including stress testing and esophagogastroduodenoscopy (EGD).

Data on three prespecified Dartmouth Atlas of Health Care metrics at the hospital service area (HSA) level were obtained from the Dartmouth Atlas website.14 These metrics were (1) rate of inpatient coronary angiograms per 1,000 Medicare enrollees, (2) price-adjusted physician reimbursement per fee-for-service Medicare enrollee per year (adjusted for patient sex, race, and age), and (3) mean inpatient spending per decedent in the last 6 months of life.15 Data on three prespecified Medicare metrics at the county level were obtained from the Centers for Medicare & Medicaid Services (CMS) website.16 These metrics were standardized per capita cost per (1) procedure, (2) imaging, and (3) test of Medicare fee-for-service patients. The CMS uses the Berenson-Eggers Type of Service Codes to classify fee-generating interventions into a number of categories, including procedure, imaging, and test.17

Components of the Overuse Index

We tested five candidate metrics for index inclusion (Table 1). We utilized Clinical Classifications Software (CCS) codes provided by HCUP, which combine several ICD-9-CM codes into a single primary CCS discharge code for ease of use. The components were (1) primary CCS diagnosis of “nausea and vomiting” coupled with body CT scan or EGD, (2) primary CCS diagnosis of abdominal pain and body CT scan or EGD, (3) primary CCS diagnosis of “nonspecific chest pain” and body CT scan or stress test, (4) primary CCS diagnosis of syncope and stress test, and (5) primary CCS diagnosis of syncope and CT of the brain. For a given metric, the denominator was all patients with the particular primary CCS discharge diagnosis code; the numerator was the subset of those patients who also had the specific test or procedure. We characterized the denominators of each metric in terms of mean, SD, and range.

[Table 1]

Index Inclusion Criteria and Construction

Specialty, pediatric, rehabilitation, and long-term care hospitals were excluded, as was any hospital with an overall denominator (for the entire index, not an individual metric) of five or fewer observations. Admissions to acute care hospitals between January 2011 and September 2015 (the time of transition from ICD-9-CM to ICD-10-CM) that had one of the specified diagnosis codes were included. For a given hospital, the value of each of the five candidate metrics was defined as the ratio of admissions that included the given testing to all admissions during the observation period with the inclusion CCS diagnosis codes.
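
To make this construction concrete, the following is a minimal Python sketch (not the authors' code) of how the candidate metrics and the pooled hospital-level index could be computed from discharge-level records. The DataFrame column names, the CCS label strings, and the pooling of the two syncope metrics into a single row-level check are illustrative assumptions.

import pandas as pd

# Hypothetical mapping: primary CCS discharge diagnosis -> flagged tests.
# Folding the two syncope metrics into one check is a simplification that
# mirrors the choice to count syncope admissions only once in the denominator.
METRICS = {
    "nausea_vomiting": ({"Nausea and vomiting"}, {"body_ct", "egd"}),
    "abdominal_pain": ({"Abdominal pain"}, {"body_ct", "egd"}),
    "chest_pain": ({"Nonspecific chest pain"}, {"body_ct", "stress_test"}),
    "syncope": ({"Syncope"}, {"stress_test", "head_ct"}),
}

def hospital_index(discharges: pd.DataFrame) -> pd.Series:
    """Return one pooled overuse index value per hospital.
    Assumed columns: hospital_id, primary_ccs_dx, and one boolean column per
    test (body_ct, head_ct, egd, stress_test)."""
    hospitals = discharges["hospital_id"].unique()
    num = pd.Series(0.0, index=hospitals)
    den = pd.Series(0.0, index=hospitals)
    for dx_labels, tests in METRICS.values():
        eligible = discharges[discharges["primary_ccs_dx"].isin(dx_labels)]
        tested = eligible[eligible[list(tests)].any(axis=1)]
        den = den.add(eligible.groupby("hospital_id").size(), fill_value=0)
        num = num.add(tested.groupby("hospital_id").size(), fill_value=0)
    index = num / den
    return index[den > 5]  # exclude hospitals with an overall denominator of five or fewer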

Derivation and Validation of the Index

In our derivation cohort (hospitals in Maryland, New Jersey, and Washington state), we tested the temporal stability of each candidate metric by year using the intraclass correlation coefficient (ICC). Using exploratory factor analysis (EFA) and Cronbach’s alpha, we then tested the internal consistency of the candidate index components to ensure that all measured a common underlying factor (ie, diagnostic overuse). To standardize the data, test rates for both of these analyses were converted to z-scores. For the EFA, we expected that if the index reflected only a single underlying factor, the eigenvalue for one factor would be much higher (typically above 1.0) than those for additional factors. We calculated the item-test correlation for each candidate metric and Cronbach’s alpha for the entire index. A high and stable item-test correlation for each index component, together with a high Cronbach’s alpha, suggests that the index components measure a single common factor. Given the small number of test items, we considered a Cronbach’s alpha above 0.6 to be satisfactory.
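
As a rough illustration of these checks (not the authors' analysis code), the sketch below takes a hypothetical hospital-by-metric matrix of test rates, converts it to z-scores, and computes Cronbach's alpha, the eigenvalues of the correlation matrix as a stand-in for the EFA factor-retention check, and item-test correlations.

import numpy as np
import pandas as pd

def internal_consistency(rates: pd.DataFrame):
    """rates: one row per hospital, one column per candidate metric (assumed)."""
    z = (rates - rates.mean()) / rates.std(ddof=1)  # standardize to z-scores
    k = z.shape[1]
    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)
    alpha = k / (k - 1) * (1 - z.var(ddof=1).sum() / z.sum(axis=1).var(ddof=1))
    # A single eigenvalue above 1.0, with the rest well below 1, suggests one underlying factor.
    eigenvalues = np.sort(np.linalg.eigvalsh(z.corr().values))[::-1]
    # Item-test correlation: each metric against the sum of all metrics.
    item_test = z.apply(lambda col: col.corr(z.sum(axis=1)))
    return alpha, eigenvalues, item_test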

This analysis showed satisfactory temporal stability of each candidate metric and good internal consistency of the candidate metrics in the derivation cohort. Therefore, we decided to keep all metrics rather than discard any of them. This same process was repeated with the validation cohort (Kentucky, New York, North Carolina, and West Virginia) and then with the combined group of seven states. Tests on the validation and entire cohort further supported our decision to keep all five metrics.

To determine the overall index value for a hospital, all of its metric numerators and denominators were added to form a single fraction. In this way, a metric with no observations at a given hospital was effectively excluded from that hospital’s index, which essentially weights each index component by frequency. We chose to count syncope admissions only once in the denominator so that the index would not be unduly influenced by this diagnosis. Hospital index values were combined into their HSAs by adding numerators and denominators from each hospital to calculate HSA index values, effectively giving greater weight to hospitals with more observations. Spearman’s correlation coefficients between the HSA-level index and the Dartmouth Atlas metrics were then calculated. For the county-level analysis, we used a hospital-county crosswalk (available from the American Hospital Association [AHA] Annual Survey; https://www.ahadata.com/aha-annual-survey-database) to link each hospital’s overuse index value to a county-level cost value rather than aggregating data at the county level. We felt this was appropriate because HSAs were constructed to represent local healthcare markets, whereas counties are less likely to be homogeneous from a healthcare perspective.
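
A minimal sketch of the aggregation and correlation steps, assuming a hypothetical table of hospital numerators and denominators keyed to HSAs and a Dartmouth Atlas metric indexed by HSA, might look like this:

import pandas as pd
from scipy.stats import spearmanr

def hsa_correlation(hospital_counts: pd.DataFrame, dartmouth_metric: pd.Series):
    """hospital_counts columns (assumed): hospital_id, hsa_id, numerator, denominator.
    dartmouth_metric: a Dartmouth Atlas value indexed by hsa_id."""
    hsa = hospital_counts.groupby("hsa_id")[["numerator", "denominator"]].sum()
    hsa_index = hsa["numerator"] / hsa["denominator"]  # hospitals with more observations dominate
    joined = pd.concat(
        [hsa_index.rename("overuse_index"), dartmouth_metric.rename("dartmouth")],
        axis=1, join="inner",
    )
    rho, p_value = spearmanr(joined["overuse_index"], joined["dartmouth"])
    return rho, p_value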

Analysis of Entire Hospital Sample

The mean index value and SD were calculated for the entire sample of hospitals and for each state. The mean index value for each year of data was calculated to measure the temporal change of the index (representing a change in diagnostic intensity over the study period) using linear regression. We divided the cohort of hospitals into tertiles based on their index value. This is consistent with the CMS categorization of hospital payments and value of care as being “at,” “significantly above,” or “significantly below” a mean value.18 The characteristics of hospitals by tertile were described by mean total hospital beds, mean annual admissions, teaching status (nonteaching hospital, minor teaching hospital, major teaching hospital), and critical access hospital (yes/no). We utilized the AHA Annual Survey for data on hospital characteristics. We calculated P values using analysis of variance for hospital bed size and a chi-square test for teaching status and critical access hospital.
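
The tertile construction and between-group comparisons described above could be sketched as follows; the column names are hypothetical and the snippet is illustrative rather than the authors' code.

import pandas as pd
from scipy.stats import f_oneway, chi2_contingency

def compare_tertiles(hospitals: pd.DataFrame):
    """hospitals columns (assumed): index_value, beds, teaching_status."""
    hospitals = hospitals.copy()
    hospitals["tertile"] = pd.qcut(hospitals["index_value"], 3, labels=[1, 2, 3])
    # Analysis of variance for a continuous characteristic (bed size)
    bed_groups = [g["beds"].dropna() for _, g in hospitals.groupby("tertile")]
    _, p_beds = f_oneway(*bed_groups)
    # Chi-square test for a categorical characteristic (teaching status)
    table = pd.crosstab(hospitals["tertile"], hospitals["teaching_status"])
    _, p_teaching, _, _ = chi2_contingency(table)
    return p_beds, p_teaching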

The entire group of hospitals from seven states was then used to apply the index to the HSA level. Numerators and denominators for each hospital in an HSA were added to calculate an HSA-level proportion. Thus, the HSA level index value, though unweighted, is dominated by hospitals with larger numbers of observations. For each of the Dartmouth metrics, the adjusted odds ratio of being in a higher diagnostic overuse index tertile given being in a certain Dartmouth Atlas metric tertile was calculated using ordinal logistic regression. This model controlled for the mean number of beds of hospitals in the HSA (continuous variable), mean Elixhauser Comorbidity Index (ECI) score (continuous variable; unweighted average among hospitals in an HSA), whether the HSA had a major or minor teaching hospital (yes/no) or was a critical access hospital (yes/no), and state fixed effects. The ECI score is a validated score that uses the presence or absence of 29 comorbidities to predict in-hospital mortality.19 For discriminant validity, we also tested two variables not expected to be associated with overuse—hospital ownership and affiliation with the Catholic Church.

For the county-level analysis, ordinal logistic regression was used to predict the adjusted odds ratio of being in a higher diagnostic overuse index tertile given being in a certain tertile of a given county-level spending metric. This model controlled for hospital bed size (continuous variable), hospital ECI score (continuous variable), teaching status (major, minor, nonteaching), critical access hospital status (yes/no), and state fixed effects.
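
Both regressions could be approximated with statsmodels' proportional-odds ordinal model, as in the hedged sketch below. The variable names, the dummy coding of state fixed effects, and the treatment of the spending tertile as a categorical predictor are assumptions rather than the authors' exact specification.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_ordinal(df: pd.DataFrame):
    """df columns (assumed): overuse_tertile (1/2/3), spending_tertile,
    beds, eci, teaching, critical_access (0/1), state."""
    X = pd.get_dummies(
        df[["spending_tertile", "beds", "eci", "teaching", "critical_access", "state"]],
        columns=["spending_tertile", "teaching", "state"],
        drop_first=True,
    ).astype(float)
    y = pd.Series(pd.Categorical(df["overuse_tertile"], categories=[1, 2, 3], ordered=True))
    model = OrderedModel(y, X, distr="logit")
    result = model.fit(method="bfgs", disp=False)
    # Exponentiated coefficients give the odds of falling in a higher overuse tertile.
    odds_ratios = np.exp(result.params[X.columns])
    return result, odds_ratios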

RESULTS

Descriptive Statistics for Metrics

A total of 620 acute care hospitals were included in the index. Thirteen hospitals were excluded because their denominator was five or fewer. The vast majority of HSAs (85.9%) had only one hospital, 8.2% had two hospitals, and 2.4% had three hospitals. Similarly, the majority of counties (68.7%) had only one hospital, 15.1% had two hospitals, and 6.6% had three hospitals (Appendix Tables 1.1 and 1.2). Nonspecific chest pain was the metric with the largest denominator mean (650), SD (1,012), and range (0-10,725) (Appendix Table 2). Overall, the metric denominators were a small fraction of total hospital discharges, with hospital-level means ranging from 0.69% for nausea and vomiting to 5.81% for nonspecific chest pain, suggesting that our index relies on a relatively small fraction of discharges.

Tests for Temporal Stability and Internal Consistency by Derivation and Validation Strategy

Overall, the ICCs for the derivation, validation, and entire cohort suggested strong temporal stability (Appendix Table 3). The EFA of the derivation, validation, and entire cohort showed high eigenvalues for one principal component, with no other factor close to 1, indicating strong internal consistency (Appendix Table 4). The Cronbach’s alpha analysis also suggested strong internal consistency, with alpha values ranging from 0.73 for the validation cohort to 0.80 for the derivation cohort (Table 2).

[Table 2]

Correlation With External Validation Measures

For the entire cohort, the Spearman’s rho for correlation between our overuse index and inpatient rate of coronary angiography at the HSA level was 0.186 (95% CI, 0.089-0.283), Medicare reimbursement at the HSA level was 0.355 (95% CI, 0.272-0.437), and Medicare spending during the last 6 months of life at the HSA level was 0.149 (95% CI, 0.061-0.236) (Appendix Figures 5.1-5.3). The Spearman’s rho for correlation between our overuse index and county level standardized procedure cost was 0.284 (95% CI, 0.210-0.358), imaging cost was 0.268 (95% CI, 0.195-0.342), and testing cost was 0.226 (95% CI, 0.152-0.300) (Appendix Figures 6.1-6.3).
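
The article does not state how the confidence intervals around Spearman's rho were obtained. One common approach, sketched below purely for illustration, applies the Fisher z transformation with an approximate standard error for Spearman's rho.

import numpy as np
from scipy.stats import spearmanr, norm

def spearman_with_ci(x, y, alpha=0.05):
    rho, _ = spearmanr(x, y)
    n = len(x)
    z = np.arctanh(rho)           # Fisher z transformation
    se = np.sqrt(1.06 / (n - 3))  # approximate SE of z for Spearman's rho
    half_width = norm.ppf(1 - alpha / 2) * se
    lower, upper = np.tanh(z - half_width), np.tanh(z + half_width)
    return rho, (lower, upper)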

Overall Index Values and Change Over Time

The mean hospital index value was 0.541 (SD, 0.178) (Appendix Table 7). There was a slight but statistically significant annual increase in the overall mean index value over the study period, suggesting a small rise in overuse of diagnostic testing (coefficient 0.011; P <.001) (Appendix Figure 8).

Diagnostic Overuse Index Tertiles

Hospitals in the lowest tertile of the index tended to be smaller (based on number of beds) (P < .0001) and were more likely to be critical access hospitals (P <.0001). There was a significant difference in the proportion of nonteaching, minor teaching, and major teaching hospitals, with more nonteaching hospitals in tertile 1 (P = .001) (Table 3). The median ECI score was not significantly different among tertiles. Neither of the variables tested for discriminant validity (hospital ownership and Catholic Church affiliation) was associated with our index.

[Table 3]

Adjusted Multilevel Mixed-Effects Ordinal Logistic Regression

Our overuse index correlated most closely with physician reimbursement, with an odds ratio of 2.02 (95% CI, 1.11-3.66) of being in a higher tertile of the overuse index when comparing tertiles 3 and 1 of this Dartmouth metric. Of the Medicare county-level metrics, our index correlated most closely with cost of procedures, with an odds ratio of 2.03 (95% CI, 1.21-3.39) of being in a higher overuse index tertile when comparing tertiles 3 and 1 of the cost per procedure metric (Figure 1).

[Figure 1]

DISCUSSION

Previous research shows variation among hospitals for overall physician spending,20 noninvasive cardiac imaging,21 and the rate of finding obstructive lesions during elective coronary angiography.22 However, there is a lack of standardized methods to study a broad range of diagnostic overuse at the hospital level. To our knowledge, no studies have attempted to develop a diagnostic overuse index at the hospital level. We used a derivation-validation approach to achieve our goal. Although the five metrics represent a range of conditions, the EFA and Cronbach’s alpha tests suggest that they measure a common phenomenon. To avoid systematically excluding smaller hospitals, we limited the extent to which we eliminated hospitals with few observations. Our findings suggest that it may be reasonable to make generalizations on the diagnostic intensity of a hospital based on a relatively small number of discharges. Moreover, our index is a proof of concept that rates of negative diagnostic testing can serve as a proxy for estimating diagnostic overuse.

Our hospital-level index values extrapolated to the HSA level weakly correlated with prespecified Dartmouth Atlas metrics. In a multivariate ordinal regression, there was a significant though weak association between hospitals in higher tertiles of the Dartmouth Atlas metrics and categorization in higher tertiles of our diagnostic overuse index. Similarly, our hospital-level index correlated with two of the three county-level metrics in a multivariate ordinal regression.

We do not assume that all of the metrics in our index track together. However, our results, including the wide dispersion of index values among the tertiles (Table 3), suggest that at least some hospitals are outliers in multiple metrics. We did not assume ex ante that our index should correlate with Dartmouth overuse metrics or Medicare county-level spending; however, we did believe that an association with these measures would assist in validating our index. Given that our index utilizes four common diagnoses, while the Dartmouth and Medicare cost metrics are based on a much broader range of conditions, we would not expect more than a weak correlation even if our index is a valid way to measure overuse.

All of the metrics were based on the concept that hospitals with high rates of negative testing are likely providing large amounts of low-value care. Prior studies on diagnostic yield of CT scans in the emergency department for pulmonary embolus (PE) found an increase in testing and decrease in yield over time; these studies also showed that physicians with more experience ordered fewer CT scans and had a higher yield.23 A review of electronic health records and billing data also showed that hospitals with higher rates of D-dimer testing had higher yields on CT scans ordered to test for PE.24

We took advantage of the coding convention that certain diagnoses only be listed as the primary discharge diagnosis if no more specific diagnosis is made. This allowed us to identify hospitals that likely had high rates of negative tests without granular data. Of course, the metrics are not measuring rates of negative testing per se, but a proxy for this, based instead on the proportion of patients with a symptom-based primary discharge diagnosis who underwent diagnostic testing.

Measuring diagnostic overuse at the hospital level may help to understand factors that drive overuse, given that institutional incentives and culture likely play important roles in ordering tests. There is evidence that financial incentives drive physicians’ decisions,25-27 and there is also evidence that institutional culture impacts outcomes.28 Further, quality improvement projects are typically designed at the hospital level and may be an effective way to curb overuse.29,30

Previous studies have focused on measuring variation among providers and identifying outlier physicians.9,10,20 Providing feedback to underperforming physicians has been shown to change practice habits.31,32 Efforts to improve the practice habits of outlier hospitals may have a number of advantages, including economies of scale and scope and the added benefit of improving the habits of all providers—not just those who are underperforming.

Ordering expensive diagnostic tests on patients with a low pretest probability of having an organic etiology for their symptoms contributes to high healthcare costs. Of course, we do not believe that the ideal rate of negative testing is zero. However, hospitals with high rates of negative diagnostic testing are more likely to be those with clinicians who use expensive tests as a substitute for clinical judgment or less-expensive tests (eg, D-dimer testing to rule out PE).

One challenge we faced is that there is no gold standard of hospital-level overuse with which to validate our index. Our index is weakly correlated with a number of regional metrics that may be proxies for overuse. We are reassured that there is a statistically significant correlation with measures at both HSA and county levels. These correlations are weak, but these regional metrics are themselves imperfect surrogates for overuse. Furthermore, our index is preliminary and will need refinement in future studies.

Limitations

Our analysis has multiple limitations. First, since it relies heavily on primary ICD discharge diagnosis codes, biases could exist due to variations in coding practices. Second, the SID does not include observation stays or tests conducted in the ED, so differential use of observation stays among hospitals might impact results. Finally, based on utilization data, we were not able to distinguish between CT scans of the chest, abdomen, and pelvis because the SID labels each of these as body CT.

CONCLUSION

We developed a novel index to measure diagnostic intensity at the hospital level. This index relies on the concept that high rates of negative diagnostic testing likely indicate some degree of overuse. Our index is parsimonious, does not require granular claims data, and measures a range of potentially overused tests for common clinical scenarios. Our next steps include further refining the index, testing it with granular data, and validating it with other datasets. Thereafter, this index may be useful for identifying positive and negative outliers in order to understand what processes of care contribute to outlier high and low levels of diagnostic testing. We suspect our index is more useful for identifying extremes than for comparing hospitals in the middle of the utilization curve. Additionally, exploring the relationships among individual metrics, and between our index and quality measures such as mortality and readmissions, may be informative.

References

1. Fisher ES, Wennberg JE, Stukel TA, et al. Associations among hospital capacity, utilization, and mortality of US Medicare beneficiaries, controlling for sociodemographic factors. Health Serv Res. 2000;34(6):1351-1362.
2. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder ÉL. The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003;138(4):288-298. https://doi.org/10.7326/0003-4819-138-4-200302180-00007
3. Segal JB, Nassery N, Chang H-Y, Chang E, Chan K, Bridges JFP. An index for measuring overuse of health care resources with Medicare claims. Med Care. 2015;53(3):230-236. https://doi.org/10.1097/mlr.0000000000000304
4. Colla CH, Morden NE, Sequist TD, Schpero WL, Rosenthal MB. Choosing wisely: prevalence and correlates of low-value health care services in the United States. J Gen Intern Med. 2014;30(2):221-228. https://doi.org/10.1007/s11606-014-3070-z
5. Colla CH, Morden NE, Sequist TD, Mainor AJ, Li Z, Rosenthal MB. Payer type and low-value care: comparing Choosing Wisely services across commercial and Medicare populations. Health Serv Res. 2018;53(2):730-746. https://doi.org/10.1111/1475-6773.12665
6. Schwartz AL, Landon BE, Elshaug AG, Chernew ME, McWilliams JM. Measuring low-value care in Medicare. JAMA Intern Med. 2014;174(7):1067-1076. https://doi.org/10.1001/jamainternmed.2014.1541
7. Oakes AH, Chang H-Y, Segal JB. Systemic overuse of health care in a commercially insured US population, 2010–2015. BMC Health Serv Res. 2019;19(1). https://doi.org/10.1186/s12913-019-4079-0
8. Schwartz AL, Zaslavsky AM, Landon BE, Chernew ME, McWilliams JM. Low-value service use in provider organizations. Health Serv Res. 2018;53(1):87-119. https://doi.org/10.1111/1475-6773.12597
9. Schwartz AL, Jena AB, Zaslavsky AM, McWilliams JM. Analysis of physician variation in provision of low-value services. JAMA Intern Med. 2019;179(1):16-25. https://doi.org/10.1001/jamainternmed.2018.5086
10. Bouck Z, Ferguson J, Ivers NM, et al. Physician characteristics associated with ordering 4 low-value screening tests in primary care. JAMA Netw Open. 2018;1(6):e183506. https://doi.org/10.1001/jamanetworkopen.2018.3506
11. Dartmouth Atlas Project. Data By Region - Dartmouth Atlas of Health Care. Accessed August 29, 2019. http://archive.dartmouthatlas.org/data/region/
12. ICD-9-CM Official Guidelines for Coding and Reporting (Effective October 11, 2011). Accessed March 1, 2018. https://www.cdc.gov/nchs/data/icd/icd9cm_guidelines_2011.pdf
13. Cassel CK, Guest JA. Choosing wisely - helping physicians and patients make smart decisions about their care. JAMA. 2012;307(17):1801-1802. https://doi.org/10.1001/jama.2012.476
14. The Dartmouth Atlas of Health Care. Accessed July 17, 2018. http://www.dartmouthatlas.org/
15. The Dartmouth Atlas of Healthcare. Research Methods. Accessed January 27, 2019. http://archive.dartmouthatlas.org/downloads/methods/research_methods.pdf
16. Centers for Medicare & Medicaid Services. Medicare geographic variation, public use file. Accessed January 5, 2020. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Geographic-Variation/GV_PUF
17. Centers for Medicare & Medicaid Services. Berenson-Eggers Type of Service (BETOS) codes. Accessed January 10, 2020. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/MedicareFeeforSvcPartsAB/downloads/betosdesccodes.pdf
18. Data.Medicare.gov. Payment and value of care – hospital: hospital compare. Accessed August 21, 2019. https://data.medicare.gov/Hospital-Compare/Payment-and-value-of-care-Hospital/c7us-v4mf
19. Moore BJ, White S, Washington R, Coenen N, Elixhauser A. Identifying increased risk of readmission and in-hospital mortality using hospital administrative data: the AHRQ Elixhauser comorbidity index. Med Care. 2017;55(7):698-705. https://doi.org/10.1097/mlr.0000000000000735
20. Tsugawa Y, Jha AK, Newhouse JP, Zaslavsky AM, Jena AB. Variation in physician spending and association with patient outcomes. JAMA Intern Med. 2017;177(5):675-682. https://doi.org/10.1001/jamainternmed.2017.0059
21. Safavi KC, Li S-X, Dharmarajan K, et al. Hospital variation in the use of noninvasive cardiac imaging and its association with downstream testing, interventions, and outcomes. JAMA Intern Med. 2014;174(4):546-553. https://doi.org/10.1001/jamainternmed.2013.14407
22. Douglas PS, Patel MR, Bailey SR, et al. Hospital variability in the rate of finding obstructive coronary artery disease at elective, diagnostic coronary angiography. J Am Coll Cardiol. 2011;58(8):801-809. https://doi.org/10.1016/j.jacc.2011.05.019
23. Venkatesh AK, Agha L, Abaluck J, Rothenberg C, Kabrhel C, Raja AS. Trends and variation in the utilization and diagnostic yield of chest imaging for Medicare patients with suspected pulmonary embolism in the emergency department. Am J Roentgenol. 2018;210(3):572-577. https://doi.org/10.2214/ajr.17.18586
24. Kline JA, Garrett JS, Sarmiento EJ, Strachan CC, Courtney DM. Over-testing for suspected pulmonary embolism in american emergency departments: the continuing epidemic. Circ Cardiovasc Qual Outcomes. 2020;13(1):e005753. https://doi.org/10.1161/circoutcomes.119.005753
25. Welch HG, Fisher ES. Income and cancer overdiagnosis – when too much care is harmful. N Engl J Med. 2017;376(23):2208-2209. https://doi.org/10.1056/nejmp1615069
26. Nicholson S. Physician specialty choice under uncertainty. J Labor Econ. 2002;20(4):816-847. https://doi.org/10.1086/342039
27. Chang R-KR, Halfon N. Geographic distribution of pediatricians in the United States: an analysis of the fifty states and Washington, DC. Pediatrics. 1997;100(2 pt 1):172-179. https://doi.org/10.1542/peds.100.2.172
28. Braithwaite J, Herkes J, Ludlow K, Lamprell G, Testa L. Association between organisational and workplace cultures, and patient outcomes: systematic review protocol. BMJ Open. 2016;6(12):e013758. https://doi.org/10.1136/bmjopen-2016-013758
29. Bhatia RS, Milford CE, Picard MH, Weiner RB. An educational intervention reduces the rate of inappropriate echocardiograms on an inpatient medical service. JACC Cardiovasc Imaging. 2013;6(5):545-555. https://doi.org/10.1016/j.jcmg.2013.01.010
30. Blackmore CC, Watt D, Sicuro PL. The success and failure of a radiology quality metric: the case of OP-10. J Am Coll Radiol. 2016;13(6):630-637. https://doi.org/10.1016/j.jacr.2016.01.006
31. Albertini JG, Wang P, Fahim C, et al. Evaluation of a peer-to-peer data transparency intervention for Mohs micrographic surgery overuse. JAMA Dermatol. 2019;155(8):906-913. https://doi.org/10.1001/jamadermatol.2019.1259
32. Sacarny A, Barnett ML, Le J, Tetkoski F, Yokum D, Agrawal S. Effect of peer comparison letters for high-volume primary care prescribers of quetiapine in older and disabled adults: a randomized clinical trial. JAMA Psychiatry. 2018;75(10):1003-1011. https://doi.org/10.1001/jamapsychiatry.2018.1867

Author and Disclosure Information

1Department of Medicine, Johns Hopkins School of Medicine, Baltimore, Maryland; 2Biostatistics, Epidemiology, and Data Management (BEAD) Core, Johns Hopkins School of Medicine, Baltimore, Maryland; 3Department of Radiology, Johns Hopkins School of Medicine, Baltimore, Maryland.

Disclosures

Drs Ellenbogen, Prichett, and Brotman have no potential conflicts of interest to disclose. Dr Johnson reports salary support from an Agency for Healthcare Research and Quality grant and the Johns Hopkins Center for Innovative Medicine, personal fees from Oliver Wyman Practicing Wisely, outside the submitted work; and potential future royalties from licensure of Johns Hopkins University School of Medicine appropriate use criteria (AUCs) and evidence-based guidelines to AgileMD.

Funding

Internal funding was received from Johns Hopkins Hospitalist Scholars Fund; no external funding was received.

Issue
Journal of Hospital Medicine 16(2)
Page Number
77-83. Published Online First January 20, 2021
Author and Disclosure Information

1Department of Medicine, Johns Hopkins School of Medicine, Baltimore, Maryland; 2Biostatistics, Epidemiology, and Data Management (BEAD) Core, Johns Hopkins School of Medicine, Baltimore, Maryland; 3Department of Radiology, Johns Hopkins School of Medicine, Baltimore, Maryland.

Disclosures

Drs Ellenbogen, Prichett, and Brotman have no potential conflicts of interest to disclose. Dr Johnson reports salary support from an Agency for Healthcare Research and Quality grant and the Johns Hopkins Center for Innovative Medicine, personal fees from Oliver Wyman Practicing Wisely, outside the submitted work; and potential future royalties from licensure of Johns Hopkins University School of Medicine appropriate use criteria (AUCs) and evidence-based guidelines to AgileMD.

Funding

Internal funding was received from Johns Hopkins Hospitalist Scholars Fund; no external funding was received.

Author and Disclosure Information

1Department of Medicine, Johns Hopkins School of Medicine, Baltimore, Maryland; 2Biostatistics, Epidemiology, and Data Management (BEAD) Core, Johns Hopkins School of Medicine, Baltimore, Maryland; 3Department of Radiology, Johns Hopkins School of Medicine, Baltimore, Maryland.

Disclosures

Drs Ellenbogen, Prichett, and Brotman have no potential conflicts of interest to disclose. Dr Johnson reports salary support from an Agency for Healthcare Research and Quality grant and the Johns Hopkins Center for Innovative Medicine, personal fees from Oliver Wyman Practicing Wisely, outside the submitted work; and potential future royalties from licensure of Johns Hopkins University School of Medicine appropriate use criteria (AUCs) and evidence-based guidelines to AgileMD.

Funding

Internal funding was received from Johns Hopkins Hospitalist Scholars Fund; no external funding was received.

Article PDF
Article PDF
Related Articles

There is substantial geographic variation in intensity of healthcare use in the United States,1 yet areas with higher healthcare utilization do not demonstrate superior clinical outcomes.2 Low-value care exposes patients to unnecessary anxiety, radiation, and risk for adverse events.

Previous research has focused on measuring low-value care at the level of hospital referral regions,3-6 metropolitan statistical areas,7 provider organizations,8 and individual physicians.9,10 Hospital referral regions designate regional healthcare markets for tertiary care and generally include at least one major referral center.11 Well-calibrated and validated hospital-level measures of diagnostic overuse are lacking.

We sought to construct a novel index to measure hospital level overuse of diagnostic testing. We focused on diagnostic intensity rather than other forms of overuse such as screening or treatment intensity. Moreover, we aimed to create a parsimonious index—one that is simple, relies on a small number of inputs, is derived from readily available administrative data without the need for chart review or complex logic, and does not require exclusion criteria.

METHODS

Conceptual Framework for Choosing Index Components

To create our overuse index, we took advantage of the requirements for International Classification of Diseases, 9th Revision-Clinical Modification (ICD-9-CM) billing codes 780-796; these codes are based on “symptoms, signs, and ill-defined conditions” and can only be listed as the primary discharge diagnosis if no more specific diagnosis is made.12 As such, when coupled with expensive tests, a high prevalence of these symptom-based diagnosis codes at discharge may serve as a proxy for low-value care. One of the candidate metrics we selected was based on Choosing Wisely® recommendations.13 The other candidate metrics were based on clinical experience and consensus of the study team.

Data Sources

We used hospital-level data on primary discharge diagnosis codes and utilization of testing data from the State Inpatient Databases (SID), which are part of the Agency for Healthcare Research and Quality Healthcare Cost and Utilization Project (HCUP). Our derivation cohort used data from acute care hospitals in Maryland, New Jersey, and Washington state. Our validation cohort used data from acute care hospitals in Kentucky, North Carolina, New York, and West Virginia. States were selected based on availability of data (certain states lacked complete testing utilization data) and cost of data acquisition. The SID contains hospital-level utilization of computed tomography (CT) scans (CT of the body and head) and diagnostic testing, including stress testing and esophagogastroduodenoscopy (EGD).

Data on three prespecified Dartmouth Atlas of Health Care metrics at the hospital service area (HSA) level were obtained from the Dartmouth Atlas website.14 These metrics were (1) rate of inpatient coronary angiograms per 1,000 Medicare enrollees, (2) price-adjusted physician reimbursement per fee-for-service Medicare enrollee per year (adjusted for patient sex, race, and age), and (3) mean inpatient spending per decedent in the last 6 months of life.15 Data on three prespecified Medicare metrics at the county level were obtained from the Centers for Medicare & Medicaid Services (CMS) website.16 These metrics were standardized per capita cost per (1) procedure, (2) imaging, and (3) test of Medicare fee-for-service patients. The CMS uses the Berenson-Eggers Type of Service Codes to classify fee-generating interventions into a number of categories, including procedure, imaging, and test.17

Components of the Overuse Index

We tested five candidate metrics for index inclusion (Table 1). We utilized Clinical Classifications Software (CCS) codes provided by HCUP, which combine several ICD-9-CM codes into a single primary CCS discharge code for ease of use. The components were (1) primary CCS diagnosis of “nausea and vomiting” coupled with body CT scan or EGD, (2) primary CCS diagnosis of abdominal pain and body CT scan or EGD, (3) primary CCS diagnosis of “nonspecific chest pain” and body CT scan or stress test, (4) primary CCS diagnosis of syncope and stress test, and (5) primary CCS diagnosis for syncope and CT of the brain. For a given metric, the denominator was all patients with the particular primary CCS discharge diagnosis code. The numerator was patients with the diagnostic code who also had the specific test or procedure. We characterized the denominators of each metric in terms of mean, SD, and range.

ellenbogen10010120e_t1.jpg

Index Inclusion Criteria and Construction

Specialty, pediatric, rehabilitation, and long-term care hospitals were excluded. Moreover, any hospital with an overall denominator (for the entire index, not an individual metric) of five or fewer observations was excluded. Admissions to acute care hospitals between January 2011 and September 2015 (time of transition from ICD-9-CM to ICD-10-CM) that had one of the specified diagnosis codes were included. For a given hospital, the value of each of the five candidate metrics was defined as the ratio of all admissions that had the given testing and all admissions during the observation period with inclusion CCS diagnosis codes.

Derivation and Validation of the Index

In our derivation cohort (hospitals in Maryland, New Jersey, and Washington state), we tested the temporal stability of each candidate metric by year using the intraclass correlation coefficient (ICC). Using exploratory factor analysis (EFA) and Cronbach’s alpha, we then tested internal consistency of the index candidate components to ensure that all measured a common underlying factor (ie, diagnostic overuse). To standardize data, test rates for both of these analyses were converted to z-scores. For the EFA, we expected that if the index was reflecting only a single underlying factor, the Eigenvalue for one factor should be much higher (typically above 1.0) than that for multiple factors. We calculated item-test correlation for each candidate metric and Cronbach’s alpha for the entire index. A high and stable value for item-test correlation for each index component, as well as a high Cronbach’s alpha, suggests that index components measure a single common factor. Given the small number of test items, we considered a Cronbach’s alpha above 0.6 to be satisfactory.

This analysis showed satisfactory temporal stability of each candidate metric and good internal consistency of the candidate metrics in the derivation cohort. Therefore, we decided to keep all metrics rather than discard any of them. This same process was repeated with the validation cohort (Kentucky, New York, North Carolina, and West Virginia) and then with the combined group of seven states. Tests on the validation and entire cohort further supported our decision to keep all five metrics.

To determine the overall index value for a hospital, all of its metric numerators and denominators were added to calculate one fraction. In this way for a given hospital, a metric for which there were no observations was effectively excluded from the index. This essentially weights each index component by frequency. We chose to count syncope admissions only once in the denominator to avoid the index being unduly influenced by this diagnosis. The hospital index values were combined into their HSAs by adding numerators and denominators from each hospital to calculate HSA index values, effectively giving higher weight to hospitals with more observations. Spearman’s correlation coefficients were measured for these Dartmouth Atlas metrics, also at the HSA level. For the county level analysis, we used a hospital-county crosswalk (available from the American Hospital Association [AHA] Annual Survey; https://www.ahadata.com/aha-annual-survey-database) to link a hospital overuse index value to a county level cost value rather than aggregating data at the county level. We felt this was appropriate, as HSAs were constructed to represent a local healthcare market, whereas counties are less likely to be homogenous from a healthcare perspective.

Analysis of Entire Hospital Sample

The mean index value and SD were calculated for the entire sample of hospitals and for each state. The mean index value for each year of data was calculated to measure the temporal change of the index (representing a change in diagnostic intensity over the study period) using linear regression. We divided the cohort of hospitals into tertiles based on their index value. This is consistent with the CMS categorization of hospital payments and value of care as being “at,” “significantly above,” or “significantly below” a mean value.18 The characteristics of hospitals by tertile were described by mean total hospital beds, mean annual admissions, teaching status (nonteaching hospital, minor teaching hospital, major teaching hospital), and critical access hospital (yes/no). We utilized the AHA Annual Survey for data on hospital characteristics. We calculated P values using analysis of variance for hospital bed size and a chi-square test for teaching status and critical access hospital.

The entire group of hospitals from seven states was then used to apply the index to the HSA level. Numerators and denominators for each hospital in an HSA were added to calculate an HSA-level proportion. Thus, the HSA level index value, though unweighted, is dominated by hospitals with larger numbers of observations. For each of the Dartmouth metrics, the adjusted odds ratio of being in a higher diagnostic overuse index tertile given being in a certain Dartmouth Atlas metric tertile was calculated using ordinal logistic regression. This model controlled for the mean number of beds of hospitals in the HSA (continuous variable), mean Elixhauser Comorbidity Index (ECI) score (continuous variable; unweighted average among hospitals in an HSA), whether the HSA had a major or minor teaching hospital (yes/no) or was a critical access hospital (yes/no), and state fixed effects. The ECI score is a validated score that uses the presence or absence of 29 comorbidities to predict in-hospital mortality.19 For discriminant validity, we also tested two variables not expected to be associated with overuse—hospital ownership and affiliation with the Catholic Church.

For the county-level analysis, ordinal logistic regression was used to predict the adjusted odds ratio of being in a higher diagnostic overuse index tertile given being in a certain tertile of a given county-level spending metric. This model controlled for hospital bed size (continuous variable), hospital ECI score (continuous variable), teaching status (major, minor, nonteaching), critical access hospital status (yes/no), and state fixed effects.

RESULTS

Descriptive Statistics for Metrics

A total of 620 acute care hospitals were included in the index. Thirteen hospitals were excluded because their denominator was five or fewer. The vast majority of HSAs (85.9%) had only one hospital, 8.2% had two hospitals, and 2.4% had three hospitals. Similarly, the majority of counties (68.7%) had only one hospital, 15.1% had two hospitals, and 6.6% had three hospitals (Appendix Tables 1.1 and 1.2). Nonspecific chest pain was the metric with largest denominator mean (650), SD (1,012), and range (0-10,725) (Appendix Table 2). Overall, the metric denominators were a small fraction of total hospital discharges, with means at the hospital level ranging from 0.69% for nausea and vomiting to 5.81% for nonspecific chest pain, suggesting that our index relies on a relatively small fraction of discharges.

Tests for Temporal Stability and Internal Consistency by Derivation and Validation Strategy

Overall, the ICCs for the derivation, validation, and entire cohort suggested strong temporal stability (Appendix Table 3). The EFA of the derivation, validation, and entire cohort showed high Eigenvalues for one principal component, with no other factors close to 1, indicating strong internal consistency (Appendix Table 4). The Cronbach’s alpha analysis also suggested strong internal consistency, with alpha values ranging from 0.73 for the validation cohort to 0.80 for the derivation cohort (Table 2).

ellenbogen10010120e_t2.jpg

Correlation With External Validation Measures

For the entire cohort, the Spearman’s rho for correlation between our overuse index and inpatient rate of coronary angiography at the HSA level was 0.186 (95% CI, 0.089-0.283), Medicare reimbursement at the HSA level was 0.355 (95% CI, 0.272-0.437), and Medicare spending during the last 6 months of life at the HSA level was 0.149 (95% CI, 0.061-0.236) (Appendix Figures 5.1-5.3). The Spearman’s rho for correlation between our overuse index and county level standardized procedure cost was 0.284 (95% CI, 0.210-0.358), imaging cost was 0.268 (95% CI, 0.195-0.342), and testing cost was 0.226 (95% CI, 0.152-0.300) (Appendix Figures 6.1-6.3).

Overall Index Values and Change Over Time

The mean hospital index value was 0.541 (SD, 0.178) (Appendix Table 7). There was a slight but statistically significant annual increase in the overall mean index value over the study period, suggesting a small rise in overuse of diagnostic testing (coefficient 0.011; P <.001) (Appendix Figure 8).

Diagnostic Overuse Index Tertiles

Hospitals in the lowest tertile of the index tended to be smaller (based on number of beds) (P < .0001) and were more likely to be critical access hospitals (P <.0001). There was a significant difference in the proportion of nonteaching, minor teaching, and major teaching hospitals, with more nonteaching hospitals in tertile 1 (P = .001) (Table 3). The median ECI score was not significantly different among tertiles. Neither of the variables tested for discriminant validity (hospital ownership and Catholic Church affiliation) was associated with our index.

ellenbogen10010120e_t3.jpg

Adjusted Multilevel Mixed-Effects Ordinal Logistic Regression

Our overuse index correlated most closely with physician reimbursement, with an odds ratio of 2.02 (95% CI, 1.11-3.66) of being in a higher tertile of the overuse index when comparing tertiles 3 and 1 of this Dartmouth metric. Of the Medicare county-level metrics, our index correlated most closely with cost of procedures, with an odds ratio of 2.03 (95% CI, 1.21-3.39) of being in a higher overuse index tertile when comparing tertiles 3 and 1 of the cost per procedure metric (Figure 1).

ellenbogen10010120e_f1.jpg

DISCUSSION

Previous research shows variation among hospitals for overall physician spending,20 noninvasive cardiac imaging,21 and the rate of finding obstructive lesions during elective coronary angiography.22 However, there is a lack of standardized methods to study a broad range of diagnostic overuse at the hospital level. To our knowledge, no studies have attempted to develop a diagnostic overuse index at the hospital level. We used a derivation-validation approach to achieve our goal. Although the five metrics represent a range of conditions, the EFA and Cronbach’s alpha tests suggest that they measure a common phenomenon. To avoid systematically excluding smaller hospitals, we limited the extent to which we eliminated hospitals with few observations. Our findings suggest that it may be reasonable to make generalizations on the diagnostic intensity of a hospital based on a relatively small number of discharges. Moreover, our index is a proof of concept that rates of negative diagnostic testing can serve as a proxy for estimating diagnostic overuse.

Our hospital-level index values extrapolated to the HSA level weakly correlated with prespecified Dartmouth Atlas metrics. In a multivariate ordinal regression, there was a significant though weak association between hospitals in higher tertiles of the Dartmouth Atlas metrics and categorization in higher tertiles of our diagnostic overuse index. Similarly, our hospital-level index correlated with two of the three county-level metrics in a multivariate ordinal regression.

We do not assume that all of the metrics in our index track together. However, our results, including the wide dispersion of index values among the tertiles (Table 3), suggest that at least some hospitals are outliers in multiple metrics. We did not assume ex ante that our index should correlate with Dartmouth overuse metrics or Medicare county-level spending; however, we did believe that an association with these measures would assist in validating our index. Given that our index utilizes four common diagnoses, while the Dartmouth and Medicare cost metrics are based on a much broader range of conditions, we would not expect more than a weak correlation even if our index is a valid way to measure overuse.

All of the metrics were based on the concept that hospitals with high rates of negative testing are likely providing large amounts of low-value care. Prior studies on diagnostic yield of CT scans in the emergency department for pulmonary embolus (PE) found an increase in testing and decrease in yield over time; these studies also showed that physicians with more experience ordered fewer CT scans and had a higher yield.23 A review of electronic health records and billing data also showed that hospitals with higher rates of D-dimer testing had higher yields on CT scans ordered to test for PE.24

We took advantage of the coding convention that certain diagnoses only be listed as the primary discharge diagnosis if no more specific diagnosis is made. This allowed us to identify hospitals that likely had high rates of negative tests without granular data. Of course, the metrics are not measuring rates of negative testing per se, but a proxy for this, based instead on the proportion of patients with a symptom-based primary discharge diagnosis who underwent diagnostic testing.

Measuring diagnostic overuse at the hospital level may help to understand factors that drive overuse, given that institutional incentives and culture likely play important roles in ordering tests. There is evidence that financial incentives drive physicians’ decisions,25-27 and there is also evidence that institutional culture impacts outcomes.28 Further, quality improvement projects are typically designed at the hospital level and may be an effective way to curb overuse.29,30

Previous studies have focused on measuring variation among providers and identifying outlier physicians.9,10,20 Providing feedback to underperforming physicians has been shown to change practice habits.31,32 Efforts to improve the practice habits of outlier hospitals may have a number of advantages, including economies of scale and scope and the added benefit of improving the habits of all providers—not just those who are underperforming.

Ordering expensive diagnostic tests on patients with a low pretest probability of having an organic etiology for their symptoms contributes to high healthcare costs. Of course, we do not believe that the ideal rate of negative testing is zero. However, hospitals with high rates of negative diagnostic testing are more likely to be those with clinicians who use expensive tests as a substitute for clinical judgment or less-expensive tests (eg, D-dimer testing to rule out PE).

One challenge we faced is that there is no gold standard of hospital-level overuse with which to validate our index. Our index is weakly correlated with a number of regional metrics that may be proxies for overuse. We are reassured that there is a statistically significant correlation with measures at both HSA and county levels. These correlations are weak, but these regional metrics are themselves imperfect surrogates for overuse. Furthermore, our index is preliminary and will need refinement in future studies.

Limitations

Our analysis has multiple limitations. First, since it relies heavily on primary ICD discharge diagnosis codes, biases could exist due to variations in coding practices. Second, the SID does not include observation stays or tests conducted in the ED, so differential use of observation stays among hospitals might impact results. Finally, based on utilization data, we were not able to distinguish between CT scans of the chest, abdomen, and pelvis because the SID labels each of these as body CT.

CONCLUSION

We developed a novel index to measure diagnostic intensity at the hospital level. This index relies on the concept that high rates of negative diagnostic testing likely indicate some degree of overuse. Our index is parsimonious, does not require granular claims data, and measures a range of potentially overused tests for common clinical scenarios. Our next steps include further refining the index, testing it with granular data, and validating it with other datasets. Thereafter, this index may be useful at identifying positive and negative outliers to understand what processes of care contribute to outlier high and low levels of diagnostic testing. We suspect our index is more useful at identifying extremes than comparing hospitals in the middle of the utilization curve. Additionally, exploring the relationship among individual metrics and the relationship between our index and quality measures like mortality and readmissions may be informative.

There is substantial geographic variation in intensity of healthcare use in the United States,1 yet areas with higher healthcare utilization do not demonstrate superior clinical outcomes.2 Low-value care exposes patients to unnecessary anxiety, radiation, and risk for adverse events.

Previous research has focused on measuring low-value care at the level of hospital referral regions,3-6 metropolitan statistical areas,7 provider organizations,8 and individual physicians.9,10 Hospital referral regions designate regional healthcare markets for tertiary care and generally include at least one major referral center.11 Well-calibrated and validated hospital-level measures of diagnostic overuse are lacking.

We sought to construct a novel index to measure hospital-level overuse of diagnostic testing. We focused on diagnostic intensity rather than other forms of overuse such as screening or treatment intensity. Moreover, we aimed to create a parsimonious index—one that is simple, relies on a small number of inputs, is derived from readily available administrative data without the need for chart review or complex logic, and does not require exclusion criteria.

METHODS

Conceptual Framework for Choosing Index Components

To create our overuse index, we took advantage of the requirements for International Classification of Diseases, 9th Revision-Clinical Modification (ICD-9-CM) billing codes 780-796; these codes are based on “symptoms, signs, and ill-defined conditions” and can only be listed as the primary discharge diagnosis if no more specific diagnosis is made.12 As such, when coupled with expensive tests, a high prevalence of these symptom-based diagnosis codes at discharge may serve as a proxy for low-value care. One of the candidate metrics we selected was based on Choosing Wisely® recommendations.13 The other candidate metrics were based on clinical experience and consensus of the study team.
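To make the proxy concrete, the following is a minimal Python sketch of the check that a primary ICD-9-CM discharge diagnosis falls in the symptom-based 780-796 range. The function name and code format are ours and purely illustrative; they are not part of the study's analytic code.

```python
def is_symptom_based(primary_dx: str) -> bool:
    """Return True if an ICD-9-CM primary discharge diagnosis falls in the
    'symptoms, signs, and ill-defined conditions' range (780-796).
    Assumes the code is passed as a string such as '786.50'."""
    try:
        category = int(primary_dx.split(".")[0])
    except ValueError:
        return False  # V codes, E codes, and malformed entries are not in this range
    return 780 <= category <= 796

# Example: nonspecific chest pain (786.50) qualifies; pneumonia (486) does not.
assert is_symptom_based("786.50")
assert not is_symptom_based("486")
```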

Data Sources

We used hospital-level data on primary discharge diagnosis codes and testing utilization from the State Inpatient Databases (SID), which are part of the Agency for Healthcare Research and Quality Healthcare Cost and Utilization Project (HCUP). Our derivation cohort used data from acute care hospitals in Maryland, New Jersey, and Washington state. Our validation cohort used data from acute care hospitals in Kentucky, North Carolina, New York, and West Virginia. States were selected based on availability of data (certain states lacked complete testing utilization data) and cost of data acquisition. The SID contains hospital-level utilization of computed tomography (CT) scans (CT of the body and head) and diagnostic testing, including stress testing and esophagogastroduodenoscopy (EGD).

Data on three prespecified Dartmouth Atlas of Health Care metrics at the hospital service area (HSA) level were obtained from the Dartmouth Atlas website.14 These metrics were (1) rate of inpatient coronary angiograms per 1,000 Medicare enrollees, (2) price-adjusted physician reimbursement per fee-for-service Medicare enrollee per year (adjusted for patient sex, race, and age), and (3) mean inpatient spending per decedent in the last 6 months of life.15 Data on three prespecified Medicare metrics at the county level were obtained from the Centers for Medicare & Medicaid Services (CMS) website.16 These metrics were standardized per capita cost per (1) procedure, (2) imaging, and (3) test of Medicare fee-for-service patients. The CMS uses the Berenson-Eggers Type of Service Codes to classify fee-generating interventions into a number of categories, including procedure, imaging, and test.17

Components of the Overuse Index

We tested five candidate metrics for index inclusion (Table 1). We utilized Clinical Classifications Software (CCS) codes provided by HCUP, which combine several ICD-9-CM codes into a single primary CCS discharge code for ease of use. The components were (1) primary CCS diagnosis of “nausea and vomiting” coupled with body CT scan or EGD, (2) primary CCS diagnosis of abdominal pain and body CT scan or EGD, (3) primary CCS diagnosis of “nonspecific chest pain” and body CT scan or stress test, (4) primary CCS diagnosis of syncope and stress test, and (5) primary CCS diagnosis for syncope and CT of the brain. For a given metric, the denominator was all patients with the particular primary CCS discharge diagnosis code. The numerator was patients with the diagnostic code who also had the specific test or procedure. We characterized the denominators of each metric in terms of mean, SD, and range.
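A minimal sketch of how per-hospital numerators and denominators for the five candidate metrics could be tallied from discharge-level records. The DataFrame layout, column names, and CCS label strings are hypothetical stand-ins, not actual SID or HCUP field names.

```python
import pandas as pd

# Hypothetical discharge-level table: one row per admission, with a hospital_id column,
# a ccs_dx column holding the primary CCS discharge diagnosis, and boolean test flags.
METRICS = {
    "nausea_vomiting + body CT/EGD": (lambda d: d.ccs_dx == "nausea and vomiting", ["body_ct", "egd"]),
    "abdominal_pain + body CT/EGD": (lambda d: d.ccs_dx == "abdominal pain", ["body_ct", "egd"]),
    "chest_pain + body CT/stress": (lambda d: d.ccs_dx == "nonspecific chest pain", ["body_ct", "stress_test"]),
    "syncope + stress": (lambda d: d.ccs_dx == "syncope", ["stress_test"]),
    "syncope + head CT": (lambda d: d.ccs_dx == "syncope", ["head_ct"]),
}

def metric_counts(discharges: pd.DataFrame) -> pd.DataFrame:
    """For each candidate metric, count per-hospital numerators (admissions with the
    diagnosis that also received a qualifying test) and denominators (all admissions
    with the diagnosis)."""
    rows = []
    for name, (dx_filter, test_cols) in METRICS.items():
        subset = discharges[dx_filter(discharges)].copy()
        subset["tested"] = subset[test_cols].any(axis=1)
        counts = subset.groupby("hospital_id")["tested"].agg(numerator="sum", denominator="size")
        counts["metric"] = name
        rows.append(counts.reset_index())
    return pd.concat(rows, ignore_index=True)
```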

[Table 1]

Index Inclusion Criteria and Construction

Specialty, pediatric, rehabilitation, and long-term care hospitals were excluded. Moreover, any hospital with an overall denominator (for the entire index, not an individual metric) of five or fewer observations was excluded. Admissions to acute care hospitals between January 2011 and September 2015 (the time of transition from ICD-9-CM to ICD-10-CM) that had one of the specified diagnosis codes were included. For a given hospital, the value of each of the five candidate metrics was defined as the ratio of admissions that received the specified testing to all admissions during the observation period with a qualifying CCS diagnosis code.
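Building on the metric_counts sketch above, the small-denominator exclusion and the per-metric proportions could be applied as follows (assuming the date window and hospital-type exclusions were handled when building the counts table; again, names are illustrative).

```python
def hospital_metric_rates(counts: pd.DataFrame) -> pd.DataFrame:
    """Drop hospitals whose overall denominator (summed across all metrics) is five or
    fewer, then express each remaining hospital-metric pair as a proportion."""
    totals = counts.groupby("hospital_id")["denominator"].sum()
    keep = totals[totals > 5].index
    rates = counts[counts["hospital_id"].isin(keep)].copy()
    rates["rate"] = rates["numerator"] / rates["denominator"]
    return rates
```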

Derivation and Validation of the Index

In our derivation cohort (hospitals in Maryland, New Jersey, and Washington state), we tested the temporal stability of each candidate metric by year using the intraclass correlation coefficient (ICC). Using exploratory factor analysis (EFA) and Cronbach’s alpha, we then tested internal consistency of the index candidate components to ensure that all measured a common underlying factor (ie, diagnostic overuse). To standardize the data, test rates for both of these analyses were converted to z-scores. For the EFA, we expected that if the index reflected only a single underlying factor, the eigenvalue for the first factor should be much higher (typically above 1.0) than those for additional factors. We calculated the item-test correlation for each candidate metric and Cronbach’s alpha for the entire index. A high and stable value for item-test correlation for each index component, as well as a high Cronbach’s alpha, suggests that index components measure a single common factor. Given the small number of test items, we considered a Cronbach’s alpha above 0.6 to be satisfactory.
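A compact sketch of the internal-consistency checks, operating on a hospitals-by-metrics matrix of z-scored rates. It is a simplified illustration that assumes a complete matrix (no missing metrics) and computes the correlation-matrix eigenvalues that the EFA examines, the item-test correlations, and Cronbach's alpha by their standard formulas; it is not the study's actual code.

```python
import numpy as np

def internal_consistency(rates_matrix: np.ndarray):
    """rates_matrix: hospitals x metrics array of test rates (assumed complete).
    Returns correlation-matrix eigenvalues (descending), item-test correlations,
    and Cronbach's alpha, all computed on z-scored rates."""
    z = (rates_matrix - rates_matrix.mean(axis=0)) / rates_matrix.std(axis=0, ddof=1)
    eigenvalues = np.linalg.eigvalsh(np.corrcoef(z, rowvar=False))[::-1]
    total = z.sum(axis=1)
    item_test = np.array([np.corrcoef(z[:, j], total)[0, 1] for j in range(z.shape[1])])
    k = z.shape[1]
    alpha = k / (k - 1) * (1 - z.var(axis=0, ddof=1).sum() / total.var(ddof=1))
    return eigenvalues, item_test, alpha
```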

This analysis showed satisfactory temporal stability of each candidate metric and good internal consistency of the candidate metrics in the derivation cohort. Therefore, we decided to keep all metrics rather than discard any of them. This same process was repeated with the validation cohort (Kentucky, New York, North Carolina, and West Virginia) and then with the combined group of seven states. Tests on the validation and entire cohort further supported our decision to keep all five metrics.

To determine the overall index value for a hospital, all of its metric numerators and denominators were added to calculate one fraction. In this way, for a given hospital, a metric for which there were no observations was effectively excluded from the index. This approach essentially weights each index component by frequency. We chose to count syncope admissions only once in the denominator to avoid the index being unduly influenced by this diagnosis. Hospital index values were combined into their HSAs by adding numerators and denominators from each hospital to calculate HSA index values, effectively giving higher weight to hospitals with more observations. Spearman’s correlation coefficients were then calculated between these HSA-level index values and the prespecified Dartmouth Atlas metrics, which are also measured at the HSA level. For the county-level analysis, we used a hospital-county crosswalk (available from the American Hospital Association [AHA] Annual Survey; https://www.ahadata.com/aha-annual-survey-database) to link a hospital overuse index value to a county-level cost value rather than aggregating data at the county level. We felt this was appropriate, as HSAs were constructed to represent a local healthcare market, whereas counties are less likely to be homogeneous from a healthcare perspective.
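Continuing from the sketches above, pooling numerators and denominators to a hospital or HSA index and correlating the HSA values with a regional metric might look like the following. The hsa_crosswalk and dartmouth_metric objects are hypothetical inputs (a hospital-to-HSA lookup table and a Series indexed by HSA), and the syncope single-counting rule is omitted for brevity.

```python
from scipy.stats import spearmanr

def pooled_index(rates, by: str = "hospital_id"):
    """Sum numerators and denominators across metrics at the chosen level and return
    one overuse fraction per unit; units with more observations implicitly carry
    more weight when rolled up further."""
    sums = rates.groupby(by)[["numerator", "denominator"]].sum()
    return sums["numerator"] / sums["denominator"]

hospital_index = pooled_index(rates)
hsa_index = pooled_index(rates.merge(hsa_crosswalk, on="hospital_id"), by="hsa_id")

# dartmouth_metric: hypothetical Series indexed by hsa_id (eg, physician reimbursement).
rho, p_value = spearmanr(hsa_index, dartmouth_metric.loc[hsa_index.index])
```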

Analysis of Entire Hospital Sample

The mean index value and SD were calculated for the entire sample of hospitals and for each state. The mean index value for each year of data was calculated to measure the temporal change of the index (representing a change in diagnostic intensity over the study period) using linear regression. We divided the cohort of hospitals into tertiles based on their index value. This is consistent with the CMS categorization of hospital payments and value of care as being “at,” “significantly above,” or “significantly below” a mean value.18 The characteristics of hospitals by tertile were described by mean total hospital beds, mean annual admissions, teaching status (nonteaching hospital, minor teaching hospital, major teaching hospital), and critical access hospital (yes/no). We utilized the AHA Annual Survey for data on hospital characteristics. We calculated P values using analysis of variance for hospital bed size and a chi-square test for teaching status and critical access hospital.
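The descriptive analyses in this paragraph could be sketched as follows, using the hospital_index from the earlier pooling example plus hypothetical, aligned inputs (beds, teaching_status, and a small index_by_year table). This is an illustration of the general approach, not the authors' code.

```python
import pandas as pd
from scipy.stats import chi2_contingency, f_oneway, linregress

# Tertiles of the hospital index (1 = lowest diagnostic intensity).
tertile = pd.qcut(hospital_index, q=3, labels=[1, 2, 3])

# Temporal trend: regress the yearly mean index value on calendar year.
trend = linregress(index_by_year["year"], index_by_year["mean_index"])
print(f"annual change = {trend.slope:.3f}, P = {trend.pvalue:.3g}")

# Hospital characteristics by tertile: ANOVA for bed size, chi-square for teaching status.
_, p_beds = f_oneway(*[beds[tertile == t] for t in (1, 2, 3)])
_, p_teaching, _, _ = chi2_contingency(pd.crosstab(tertile, teaching_status))
```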

The entire group of hospitals from seven states was then used to apply the index to the HSA level. Numerators and denominators for each hospital in an HSA were added to calculate an HSA-level proportion. Thus, the HSA-level index value, though unweighted, is dominated by hospitals with larger numbers of observations. For each of the Dartmouth metrics, the adjusted odds ratio of being in a higher diagnostic overuse index tertile given being in a certain Dartmouth Atlas metric tertile was calculated using ordinal logistic regression. This model controlled for the mean number of beds of hospitals in the HSA (continuous variable), mean Elixhauser Comorbidity Index (ECI) score (continuous variable; unweighted average among hospitals in an HSA), whether the HSA had a major or minor teaching hospital (yes/no) or a critical access hospital (yes/no), and state fixed effects. The ECI score is a validated score that uses the presence or absence of 29 comorbidities to predict in-hospital mortality.19 For discriminant validity, we also tested two variables not expected to be associated with overuse—hospital ownership and affiliation with the Catholic Church.

For the county-level analysis, ordinal logistic regression was used to predict the adjusted odds ratio of being in a higher diagnostic overuse index tertile given being in a certain tertile of a given county-level spending metric. This model controlled for hospital bed size (continuous variable), hospital ECI score (continuous variable), teaching status (major, minor, nonteaching), critical access hospital status (yes/no), and state fixed effects.
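As a rough illustration of the regression setup described in the two preceding paragraphs, the sketch below fits a plain (single-level) proportional-odds model with state entered as dummy variables. This is a deliberate simplification of the multilevel mixed-effects ordinal model reported in the Results; it assumes a statsmodels version that provides OrderedModel (0.13 or later) and a hypothetical one-row-per-HSA frame with boolean and numeric covariates.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# hsa_df is a hypothetical frame: overuse_tertile (1-3), dartmouth_tertile (1-3),
# mean_beds, mean_eci (numeric), has_teaching, critical_access (boolean), state.
exog = pd.get_dummies(
    hsa_df[["dartmouth_tertile", "mean_beds", "mean_eci",
            "has_teaching", "critical_access", "state"]],
    columns=["dartmouth_tertile", "state"], drop_first=True,
).astype(float)

model = OrderedModel(hsa_df["overuse_tertile"], exog, distr="logit")
result = model.fit(method="bfgs", disp=False)

# Exponentiated slope coefficients are odds ratios (ignore the threshold terms),
# eg, the coefficient for dartmouth_tertile_3 approximates the tertile 3 vs 1 OR.
odds_ratios = np.exp(result.params)
```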

RESULTS

Descriptive Statistics for Metrics

A total of 620 acute care hospitals were included in the index. Thirteen hospitals were excluded because their denominator was five or fewer. The vast majority of HSAs (85.9%) had only one hospital, 8.2% had two hospitals, and 2.4% had three hospitals. Similarly, the majority of counties (68.7%) had only one hospital, 15.1% had two hospitals, and 6.6% had three hospitals (Appendix Tables 1.1 and 1.2). Nonspecific chest pain was the metric with the largest denominator mean (650), SD (1,012), and range (0-10,725) (Appendix Table 2). Overall, the metric denominators were a small fraction of total hospital discharges, with means at the hospital level ranging from 0.69% for nausea and vomiting to 5.81% for nonspecific chest pain, suggesting that our index relies on a relatively small fraction of discharges.

Tests for Temporal Stability and Internal Consistency by Derivation and Validation Strategy

Overall, the ICCs for the derivation, validation, and entire cohort suggested strong temporal stability (Appendix Table 3). The EFA of the derivation, validation, and entire cohort showed high Eigenvalues for one principal component, with no other factors close to 1, indicating strong internal consistency (Appendix Table 4). The Cronbach’s alpha analysis also suggested strong internal consistency, with alpha values ranging from 0.73 for the validation cohort to 0.80 for the derivation cohort (Table 2).

[Table 2]

Correlation With External Validation Measures

For the entire cohort, the Spearman’s rho for correlation between our overuse index and inpatient rate of coronary angiography at the HSA level was 0.186 (95% CI, 0.089-0.283), Medicare reimbursement at the HSA level was 0.355 (95% CI, 0.272-0.437), and Medicare spending during the last 6 months of life at the HSA level was 0.149 (95% CI, 0.061-0.236) (Appendix Figures 5.1-5.3). The Spearman’s rho for correlation between our overuse index and county level standardized procedure cost was 0.284 (95% CI, 0.210-0.358), imaging cost was 0.268 (95% CI, 0.195-0.342), and testing cost was 0.226 (95% CI, 0.152-0.300) (Appendix Figures 6.1-6.3).
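The article does not state how the confidence intervals around these Spearman coefficients were obtained; a common approximation uses the Fisher z transformation, sketched below for illustration only.

```python
import numpy as np
from scipy.stats import norm

def spearman_ci(rho: float, n: int, alpha: float = 0.05):
    """Approximate CI for Spearman's rho via the Fisher z transformation with
    SE = 1/sqrt(n - 3); one common approximation, not necessarily the authors' method."""
    z = np.arctanh(rho)
    half_width = norm.ppf(1 - alpha / 2) / np.sqrt(n - 3)
    return float(np.tanh(z - half_width)), float(np.tanh(z + half_width))

# spearman_ci(0.355, n=400)  # illustrative n; the study's HSA count is not restated here
```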

Overall Index Values and Change Over Time

The mean hospital index value was 0.541 (SD, 0.178) (Appendix Table 7). There was a slight but statistically significant annual increase in the overall mean index value over the study period, suggesting a small rise in overuse of diagnostic testing (coefficient 0.011; P < .001) (Appendix Figure 8).

Diagnostic Overuse Index Tertiles

Hospitals in the lowest tertile of the index tended to be smaller (based on number of beds) (P < .0001) and were more likely to be critical access hospitals (P < .0001). There was a significant difference in the proportion of nonteaching, minor teaching, and major teaching hospitals, with more nonteaching hospitals in tertile 1 (P = .001) (Table 3). The median ECI score was not significantly different among tertiles. Neither of the variables tested for discriminant validity (hospital ownership and Catholic Church affiliation) was associated with our index.

[Table 3]

Adjusted Multilevel Mixed-Effects Ordinal Logistic Regression

Our overuse index correlated most closely with physician reimbursement, with an odds ratio of 2.02 (95% CI, 1.11-3.66) of being in a higher tertile of the overuse index when comparing tertiles 3 and 1 of this Dartmouth metric. Of the Medicare county-level metrics, our index correlated most closely with cost of procedures, with an odds ratio of 2.03 (95% CI, 1.21-3.39) of being in a higher overuse index tertile when comparing tertiles 3 and 1 of the cost per procedure metric (Figure 1).

[Figure 1]

DISCUSSION

Previous research shows variation among hospitals for overall physician spending,20 noninvasive cardiac imaging,21 and the rate of finding obstructive lesions during elective coronary angiography.22 However, there is a lack of standardized methods to study a broad range of diagnostic overuse at the hospital level. To our knowledge, no studies have attempted to develop a diagnostic overuse index at the hospital level. We used a derivation-validation approach to achieve our goal. Although the five metrics represent a range of conditions, the EFA and Cronbach’s alpha tests suggest that they measure a common phenomenon. To avoid systematically excluding smaller hospitals, we limited the extent to which we eliminated hospitals with few observations. Our findings suggest that it may be reasonable to make generalizations on the diagnostic intensity of a hospital based on a relatively small number of discharges. Moreover, our index is a proof of concept that rates of negative diagnostic testing can serve as a proxy for estimating diagnostic overuse.

Our hospital-level index values extrapolated to the HSA level weakly correlated with prespecified Dartmouth Atlas metrics. In a multivariate ordinal regression, there was a significant though weak association between hospitals in higher tertiles of the Dartmouth Atlas metrics and categorization in higher tertiles of our diagnostic overuse index. Similarly, our hospital-level index correlated with two of the three county-level metrics in a multivariate ordinal regression.

We do not assume that all of the metrics in our index track together. However, our results, including the wide dispersion of index values among the tertiles (Table 3), suggest that at least some hospitals are outliers in multiple metrics. We did not assume ex ante that our index should correlate with Dartmouth overuse metrics or Medicare county-level spending; however, we did believe that an association with these measures would assist in validating our index. Given that our index utilizes four common diagnoses, while the Dartmouth and Medicare cost metrics are based on a much broader range of conditions, we would not expect more than a weak correlation even if our index is a valid way to measure overuse.

All of the metrics were based on the concept that hospitals with high rates of negative testing are likely providing large amounts of low-value care. Prior studies on diagnostic yield of CT scans in the emergency department for pulmonary embolus (PE) found an increase in testing and decrease in yield over time; these studies also showed that physicians with more experience ordered fewer CT scans and had a higher yield.23 A review of electronic health records and billing data also showed that hospitals with higher rates of D-dimer testing had higher yields on CT scans ordered to test for PE.24

We took advantage of the coding convention that certain diagnoses only be listed as the primary discharge diagnosis if no more specific diagnosis is made. This allowed us to identify hospitals that likely had high rates of negative tests without granular data. Of course, the metrics are not measuring rates of negative testing per se, but a proxy for this, based instead on the proportion of patients with a symptom-based primary discharge diagnosis who underwent diagnostic testing.

Measuring diagnostic overuse at the hospital level may help to understand factors that drive overuse, given that institutional incentives and culture likely play important roles in ordering tests. There is evidence that financial incentives drive physicians’ decisions,25-27 and there is also evidence that institutional culture impacts outcomes.28 Further, quality improvement projects are typically designed at the hospital level and may be an effective way to curb overuse.29,30

Previous studies have focused on measuring variation among providers and identifying outlier physicians.9,10,20 Providing feedback to underperforming physicians has been shown to change practice habits.31,32 Efforts to improve the practice habits of outlier hospitals may have a number of advantages, including economies of scale and scope and the added benefit of improving the habits of all providers—not just those who are underperforming.

Ordering expensive diagnostic tests on patients with a low pretest probability of having an organic etiology for their symptoms contributes to high healthcare costs. Of course, we do not believe that the ideal rate of negative testing is zero. However, hospitals with high rates of negative diagnostic testing are more likely to be those with clinicians who use expensive tests as a substitute for clinical judgment or less-expensive tests (eg, D-dimer testing to rule out PE).

One challenge we faced is that there is no gold standard of hospital-level overuse with which to validate our index. Our index is weakly correlated with a number of regional metrics that may be proxies for overuse. We are reassured that there is a statistically significant correlation with measures at both HSA and county levels. These correlations are weak, but these regional metrics are themselves imperfect surrogates for overuse. Furthermore, our index is preliminary and will need refinement in future studies.

Limitations

Our analysis has multiple limitations. First, since it relies heavily on primary ICD discharge diagnosis codes, biases could exist due to variations in coding practices. Second, the SID does not include observation stays or tests conducted in the ED, so differential use of observation stays among hospitals might impact results. Finally, we were not able to distinguish among CT scans of the chest, abdomen, and pelvis because the SID labels each of these as body CT.

CONCLUSION

We developed a novel index to measure diagnostic intensity at the hospital level. This index relies on the concept that high rates of negative diagnostic testing likely indicate some degree of overuse. Our index is parsimonious, does not require granular claims data, and measures a range of potentially overused tests for common clinical scenarios. Our next steps include further refining the index, testing it with granular data, and validating it with other datasets. Thereafter, this index may be useful for identifying positive and negative outliers to understand what processes of care contribute to outlier high and low levels of diagnostic testing. We suspect our index is more useful for identifying extremes than for comparing hospitals in the middle of the utilization curve. Additionally, exploring the relationship among individual metrics and the relationship between our index and quality measures like mortality and readmissions may be informative.

References

1. Fisher ES, Wennberg JE, Stukel TA, et al. Associations among hospital capacity, utilization, and mortality of US Medicare beneficiaries, controlling for sociodemographic factors. Health Serv Res. 2000;34(6):1351-1362.
2. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder ÉL. The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003;138(4):288-298. https://doi.org/10.7326/0003-4819-138-4-200302180-00007
3. Segal JB, Nassery N, Chang H-Y, Chang E, Chan K, Bridges JFP. An index for measuring overuse of health care resources with Medicare claims. Med Care. 2015;53(3):230-236. https://doi.org/10.1097/mlr.0000000000000304
4. Colla CH, Morden NE, Sequist TD, Schpero WL, Rosenthal MB. Choosing wisely: prevalence and correlates of low-value health care services in the United States. J Gen Intern Med. 2014;30(2):221-228. https://doi.org/10.1007/s11606-014-3070-z
5. Colla CH, Morden NE, Sequist TD, Mainor AJ, Li Z, Rosenthal MB. Payer type and low-value care: comparing Choosing Wisely services across commercial and Medicare populations. Health Serv Res. 2018;53(2):730-746. https://doi.org/10.1111/1475-6773.12665
6. Schwartz AL, Landon BE, Elshaug AG, Chernew ME, McWilliams JM. Measuring low-value care in Medicare. JAMA Intern Med. 2014;174(7):1067-1076. https://doi.org/10.1001/jamainternmed.2014.1541
7. Oakes AH, Chang H-Y, Segal JB. Systemic overuse of health care in a commercially insured US population, 2010–2015. BMC Health Serv Res. 2019;19(1). https://doi.org/10.1186/s12913-019-4079-0
8. Schwartz AL, Zaslavsky AM, Landon BE, Chernew ME, McWilliams JM. Low-value service use in provider organizations. Health Serv Res. 2018;53(1):87-119. https://doi.org/10.1111/1475-6773.12597
9. Schwartz AL, Jena AB, Zaslavsky AM, McWilliams JM. Analysis of physician variation in provision of low-value services. JAMA Intern Med. 2019;179(1):16-25. https://doi.org/10.1001/jamainternmed.2018.5086
10. Bouck Z, Ferguson J, Ivers NM, et al. Physician characteristics associated with ordering 4 low-value screening tests in primary care. JAMA Netw Open. 2018;1(6):e183506. https://doi.org/10.1001/jamanetworkopen.2018.3506
11. Dartmouth Atlas Project. Data By Region - Dartmouth Atlas of Health Care. Accessed August 29, 2019. http://archive.dartmouthatlas.org/data/region/
12. ICD-9-CM Official Guidelines for Coding and Reporting (Effective October 11, 2011). Accessed March 1, 2018. https://www.cdc.gov/nchs/data/icd/icd9cm_guidelines_2011.pdf
13. Cassel CK, Guest JA. Choosing wisely - helping physicians and patients make smart decisions about their care. JAMA. 2012;307(17):1801-1802. https://doi.org/10.1001/jama.2012.476
14. The Dartmouth Atlas of Health Care. Accessed July 17, 2018. http://www.dartmouthatlas.org/
15. The Dartmouth Atlas of Healthcare. Research Methods. Accessed January 27, 2019. http://archive.dartmouthatlas.org/downloads/methods/research_methods.pdf
16. Centers for Medicare & Medicaid Services. Medicare geographic variation, public use file. Accessed January 5, 2020. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Geographic-Variation/GV_PUF
17. Centers for Medicare & Medicaid Services. Berenson-Eggers Type of Service (BETOS) codes. Accessed January 10, 2020. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/MedicareFeeforSvcPartsAB/downloads/betosdesccodes.pdf
18. Data.Medicare.gov. Payment and value of care – hospital: hospital compare. Accessed August 21, 2019. https://data.medicare.gov/Hospital-Compare/Payment-and-value-of-care-Hospital/c7us-v4mf
19. Moore BJ, White S, Washington R, Coenen N, Elixhauser A. Identifying increased risk of readmission and in-hospital mortality using hospital administrative data: the AHRQ Elixhauser comorbidity index. Med Care. 2017;55(7):698-705. https://doi.org/10.1097/mlr.0000000000000735
20. Tsugawa Y, Jha AK, Newhouse JP, Zaslavsky AM, Jena AB. Variation in physician spending and association with patient outcomes. JAMA Intern Med. 2017;177(5):675-682. https://doi.org/10.1001/jamainternmed.2017.0059
21. Safavi KC, Li S-X, Dharmarajan K, et al. Hospital variation in the use of noninvasive cardiac imaging and its association with downstream testing, interventions, and outcomes. JAMA Intern Med. 2014;174(4):546-553. https://doi.org/10.1001/jamainternmed.2013.14407
22. Douglas PS, Patel MR, Bailey SR, et al. Hospital variability in the rate of finding obstructive coronary artery disease at elective, diagnostic coronary angiography. J Am Coll Cardiol. 2011;58(8):801-809. https://doi.org/10.1016/j.jacc.2011.05.019
23. Venkatesh AK, Agha L, Abaluck J, Rothenberg C, Kabrhel C, Raja AS. Trends and variation in the utilization and diagnostic yield of chest imaging for Medicare patients with suspected pulmonary embolism in the emergency department. Am J Roentgenol. 2018;210(3):572-577. https://doi.org/10.2214/ajr.17.18586
24. Kline JA, Garrett JS, Sarmiento EJ, Strachan CC, Courtney DM. Over-testing for suspected pulmonary embolism in American emergency departments: the continuing epidemic. Circ Cardiovasc Qual Outcomes. 2020;13(1):e005753. https://doi.org/10.1161/circoutcomes.119.005753
25. Welch HG, Fisher ES. Income and cancer overdiagnosis – when too much care is harmful. N Engl J Med. 2017;376(23):2208-2209. https://doi.org/10.1056/nejmp1615069
26. Nicholson S. Physician specialty choice under uncertainty. J Labor Econ. 2002;20(4):816-847. https://doi.org/10.1086/342039
27. Chang R-KR, Halfon N. Geographic distribution of pediatricians in the United States: an analysis of the fifty states and Washington, DC. Pediatrics. 1997;100(2 pt 1):172-179. https://doi.org/10.1542/peds.100.2.172
28. Braithwaite J, Herkes J, Ludlow K, Lamprell G, Testa L. Association between organisational and workplace cultures, and patient outcomes: systematic review protocol. BMJ Open. 2016;6(12):e013758. https://doi.org/10.1136/bmjopen-2016-013758
29. Bhatia RS, Milford CE, Picard MH, Weiner RB. An educational intervention reduces the rate of inappropriate echocardiograms on an inpatient medical service. JACC Cardiovasc Imaging. 2013;6(5):545-555. https://doi.org/10.1016/j.jcmg.2013.01.010
30. Blackmore CC, Watt D, Sicuro PL. The success and failure of a radiology quality metric: the case of OP-10. J Am Coll Radiol. 2016;13(6):630-637. https://doi.org/10.1016/j.jacr.2016.01.006
31. Albertini JG, Wang P, Fahim C, et al. Evaluation of a peer-to-peer data transparency intervention for Mohs micrographic surgery overuse. JAMA Dermatol. 2019;155(8):906-913. https://doi.org/10.1001/jamadermatol.2019.1259
32. Sacarny A, Barnett ML, Le J, Tetkoski F, Yokum D, Agrawal S. Effect of peer comparison letters for high-volume primary care prescribers of quetiapine in older and disabled adults: a randomized clinical trial. JAMA Psychiatry. 2018;75(10):1003-1011. https://doi.org/10.1001/jamapsychiatry.2018.1867


Issue
Journal of Hospital Medicine 16(2)
Page Number
77-83. Published Online First January 20, 2021
Article Source
© 2021 Society of Hospital Medicine
Correspondence Location
Michael I. Ellenbogen, MD; Email: mellenb6@jhmi.edu; Telephone: 443-287-4362.

Academic Hospital Medicine 2.0: If You Aren’t Teaching Residents, Are You Still Academic?

Article Type
Changed
Wed, 09/30/2020 - 11:38

“How much teaching time will I get in my first year on faculty?” Leaders at academic hospitalist programs know to expect this question from almost every applicant. We also know that we will be graded on our response; the more resident-covered service time, the better. For some applicants, this question is a key litmus test. Some prospective faculty choose to pursue academic hospital medicine because of their own experiences on the wards during residency. They recall the excitement of leading a team of interns and students under the wing of a seasoned attending, replete with chalk talks, clinical pearls, and inspired learners. Teaching time is more quantifiable than mentorship quality and academic opportunity, more important than salary and patient load for some, and more familiar than relative value unit expectations.

Over the past two decades, academic hospitalist programs have steadily grown,1 but their teaching footprints have not.2,3 Although historically some academic hospitalists spent almost 100% of their clinical time on teaching services, work hour rules and diversification of resident clinical time toward outpatient and subspecialty activities have decreased the amount of general medicine ward time for residents.2 In addition, as academic medical centers broadened their clinical networks, inpatient volumes exceeded the capacity of teaching services. Finally, several large academic medical centers and healthcare networks are acquiring or building additional hospitals, increasing the number of medical beds that are staffed by hospitalists without residents.4

In our experience, as academic healthcare systems continue to grow and hospital medicine programs rapidly expand to meet clinical needs, the percentage of clinical time spent on a traditional ward teaching service continues to decrease. In several academic hospitalist programs, the majority of faculty effort is now devoted to direct care,5 with limited resident-covered ward time spread across a larger group of faculty. The 2018 State of Hospital Medicine Report suggests that our experience is not unique: academic programs caring for adults reported that 31% of clinical work was on traditional ward teaching services, 16% was on direct care services with intermittent learners, and 53% was on nonteaching services.5

This state of affairs raises a number of questions:

  • How can hospitalist program leaders take advantage of existing resident teaching opportunities?
  • How should those teaching opportunities be allocated?
  • What nontraditional teaching venues exist in academic medicine?
  • How can faculty develop their teaching skills in an environment with limited traditional ward teaching time?

We believe that these changes require us to redefine what it means to be an academic hospitalist, both for existing faculty and for prospective faculty whose views of academic hospital medicine may have been shaped by role models seen only in their clinical teaching role.

MAXIMIZING RESIDENT TEACHING OPPORTUNITIES

Is reduced teaching time the new normal or will the pendulum swing back toward more resident teaching time for academic hospitalists? The former is likely the case. None of the current trends in medical education point to an expansion of residents in the inpatient setting. Although there may be some opportunities to assume general medicine attending time that is currently covered by primary care physicians and subspecialists, in several programs, hospitalists already cover the overwhelming majority of general medicine teaching services.

Although there may be occasional opportunities for academic hospitalist programs to develop new teaching roles with residents or fellows (for example, by expanding to community sites with residency programs or to subspecialty teaching services, or by creating hospital medicine fellowships and resident or student electives), the reality is that we as hospitalists will need to adapt to direct care as the plurality of our work.

ALLOCATING TEACHING TIME

How should we allocate traditional teaching time among our faculty? Since it is a coveted—but relatively scarce—resource, teaching time should be allocated thoughtfully. Based on our collective experience, academic hospitalist groups have taken a variety of approaches to this challenge, including forming separate clinical groups at the same institution (a teaching faculty group and a nonteaching group),6 requiring all hospitalists to do some amount of direct care to facilitate distribution of teaching time, or allocating teaching time by merit or seniority (based on teaching evaluations, formal teaching roles such as program director status, or years on faculty).

Each approach to assigning teaching time has its challenges. Hospitalist leaders must manage these issues through transparency about the selection process for teaching rotations and open discussion of teaching evaluations with faculty. It is also critical that the recruitment process set appropriate expectations for faculty candidates. Highlighting academic opportunities outside of teaching residents, including leadership roles, quality improvement work, and research, may encourage applicants and current hospitalists to explore more varied career trajectories. Hospitalists focusing on these other paths may elect to have less teaching time, freeing up opportunities for dedicated clinician educators.

BEYOND TRADITIONAL RESIDENT TEACHING TEAMS

What other ward-based teaching opportunities might be available for academic hospitalists who do not have the opportunity to attend on traditional resident teaching teams? As supervisory requirements for residents have been strengthened, expansion of teaching into the evening and overnight hours to supervise new admissions to the teaching services has been one approach to augment teaching footprints.7,8

In addition, nontraditional teaching teams such as attending/intern teams (without a supervising resident) or attending/subintern (fourth-year medical student) teams have been developed at some institutions.9 Although allowing for additional exposure to learners, these models require a more hands-on approach than traditional teaching teams, particularly at the start of the academic year. Finally, as hospitalist teams have grown to include advanced practice providers (APPs), some programs have established formal teaching programs to address professional development needs of these healthcare professionals.10,11

DEVELOPING HOSPITALIST EDUCATORS

How do we help junior faculty who have the potential to be talented educators succeed in teaching when they have limited opportunities to engage with residents on clinical services? One approach is to encourage hospitalists to participate in resident didactic sessions such as “morning report” and noon conference. Another approach is to focus on teaching other learners. For example, several academic medical centers provide opportunities for hospitalists to engage in student teaching, either on the wards or via classroom instruction. In addition, as mentioned previously, APPs who are new to hospital medicine are an engaged audience and represent an opportunity for hospitalist educators to utilize and hone their teaching skills. Finally, organizing lectures for nursing colleagues is another way for the faculty to practice “chalk talks” and develop teaching portfolios.

Hospitalists can also leverage their expertise to build systems in which academic hospitalists are teaching each other, creating a culture of continuous learning. These activities may include case conferences, morbidity and mortality conferences, journal clubs, clinical topic updates developed by and for hospitalists, simulation exercises, and other group learning sessions. Giving hospitalists the opportunity to teach each other allows for professional growth that is not dependent on the presence of traditional learners.

REDEFINING ACADEMIC HOSPITALISTS

Philosophically, a key question is “What makes ‘academic’ academic?” Traditionally, academic hospitalist positions were synonymous with resident teaching or, for a small number of academic hospitalists, significant funded research. In an era where teaching residents may no longer be part of the job description for many hospitalists at academic medical centers, what distinguishes these positions from 100% clinical positions and what are the implications for academic hospital medicine?

Although data regarding why hospitalists seek “nonteaching” positions at academic medical centers are lacking, we believe that these jobs remain popular due to opportunities that are perceived to be unique to academic medical centers. These include more flexible scheduling (academic programs may be less likely to have seven-on/seven-off schedules), exposure to research and cutting-edge technology, opportunities to care for tertiary and quaternary care patients, collaboration with academic peers and experts in the field, and interaction with a range of learners, including medical, pharmacy, advanced practitioner, and other students.

Understanding the motivation of candidates who apply for academic hospital medicine positions—aside from supervising/teaching residents—will be an important goal for academic hospitalist leaders to ensure future success in staffing growing programs and creating sustainable academic hospitalist careers. As resident teaching time decreases, implementing robust faculty or professional development programs to address the broader interests and needs of academic hospitalist faculty will be increasingly important. Sehgal et al. described one such program for faculty development,12 and a more recent paper outlines a faculty development program focused on quality improvement and patient safety.13 These types of programs provide opportunities for academic hospitalists to engage in academic pursuits that are independent of residency programs.

CONCLUSION

Therefore, what do we tell the eager faculty applicant? First, we should not hide from the honest answer, ie, new faculty may not get as much resident teaching time as they would like or expect. Although we want hospitalists to maintain integral involvement in residency training programs, we also want to build a culture of clinical excellence, scholarship, and continuous learning that is not dependent on directly teaching residents. We should highlight the unique opportunities of academic hospital medicine, including teaching other learners, caring for tertiary/quaternary care patients, working with colleagues who are leaders in their field, and engaging in research and quality improvement work. By capitalizing on these opportunities, we can actively redefine what makes “academic” academic and ensure that we sustain academic hospital medicine as a desirable and rewarding career.

Disclosures

The authors have nothing to disclose.

References

1. Wachter RM, Goldman L. Zero to 50,000-the 20th anniversary of the hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/NEJMp1607958.
2. Saint S, Flanders SA. Hospitalists in teaching hospitals: opportunities but not without danger. J Gen Intern Med. 2004;19(4):392-393. https://doi.org/10.1111/j.1525-1497.2004.42002.x.
3. Flanders SA, Saint S, McMahon LF, Howell JD. Where should hospitalists sit within the academic medical center? J Gen Intern Med. 2008;23(8):1269-1272. https://doi.org/10.1007/s11606-008-0682-1.
4. 5 Hospital projects announced this year worth $1B or more. ASC Communications, 2019. https://www.beckershospitalreview.com/facilities-management/5-hospital-projects-announced-this-year-worth-1b-or-more.html. Accessed August 24, 2019.
5. White A, Anders J, Anoff DL, Creamer J, Flores LA. Table 3.45 Distribution of work in academic HMGs. Philadelphia, PA: Society of Hospital Medicine; 2018.
6. Hunt D, Burger A, Harrison R, Southern W, Boonyasai RT, Leykum L. Hospitalist staffing: To split or not to split? SGIM Forum. 2013;36:6.
7. Farnan JM, Burger A, Boonyasai RT, et al. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7(7):521-523. https://doi.org/10.1002/jhm.1961.
8. Sani SN, Wistar E, Le L, Chia D, Haber LA. Shining a light on overnight education: Hospitalist and resident impressions of the current state, barriers, and methods for improvement. Cureus. 2018;10:e2939. https://doi.org/10.7759/cureus.2939.
9. O’Leary KJ, Chadha V, Fleming VM, Martin GJ, Baker DW. Medical subinternship: student experience on a resident uncovered hospitalist service. Teach Learn Med. 2008;20(1):18-21. https://doi.org/10.1080/10401330701797974.
10. Klimpl D, Franco T, Tackett S, et al. The current state of advanced practice provider fellowships in hospital medicine: A survey of program directors. J Hosp Med. 2019;14(7):401-406. https://doi.org/10.12788/jhm.3191.
11. Lackner C, Eid S, Panek T, Kisuule F. An advanced practice provider clinical fellowship as a pipeline to staffing a hospitalist program. J Hosp Med. 2019;14(6):336-339. https://doi.org/10.12788/jhm.3183.
12. Sehgal NL, Sharpe BA, Auerbach AA, et al. Investing in the future: building an academic hospitalist faculty development program. J Hosp Med. 2011;6(3):161-166. https://doi.org/10.1002/jhm.845.
13. van Schaik SM, Chang A, Fogh S, et al. Jumpstarting faculty development in quality improvement and patient safety education: A team-based approach. Acad Med. 2019.

Issue
Journal of Hospital Medicine 15(10)
Page Number
622-624. Published Online First February 19, 2020

How much teaching time will I get in my first year on faculty?” Leaders at academic hospitalist programs know to expect this question from almost every applicant. We also know that we will be graded on our response; the more resident-covered service time, the better. For some applicants, this question is a key litmus test. Some prospective faculty choose to pursue academic hospital medicine because of their own experiences on the wards during residency. They recall the excitement of leading a team of interns and students under the wing of a seasoned attending, replete with chalk talks, clinical pearls, and inspired learners. Teaching time is more quantifiable than mentorship quality and academic opportunity, more important than salary and patient load for some, and more familiar than relative value unit expectations.

Over the past two decades, academic hospitalist programs have steadily grown,1 but their teaching footprints have not.2,3 Although historically some academic hospitalists spent almost 100% of their clinical time on teaching services, work hour rules and diversification of resident clinical time toward outpatient and subspecialty activities have decreased the amount of general medicine ward time for residents.2 In addition, as academic medical centers broadened their clinical networks, inpatient volumes exceeded the capacity of teaching services. Finally, several large academic medical centers and healthcare networks are acquiring or building additional hospitals, increasing the number of medical beds that are staffed by hospitalists without residents.4

In our experience, as academic healthcare systems continue to grow and hospital medicine programs rapidly expand to meet clinical needs, the percentage of clinical time spent on a traditional ward teaching service continues to decrease. In several academic hospitalist programs, the majority of faculty effort is now devoted to direct care,5 with limited resident-covered ward time spread across a larger group of faculty. The 2018 State of Hospital Medicine Report suggests that our experience is not unique with academic programs caring for adults reporting that 31% of clinical work was on traditional ward teaching services, 16% on direct care services with intermittent learners, and 53% on nonteaching services.5

This current state of affairs raises a number of questions as follows:

  • How can hospitalist program leaders take advantage of existing resident teaching opportunities?
  • How should those teaching opportunities be allocated?
  • What nontraditional teaching venues exist in academic medicine?
  • How can faculty develop their teaching skills in an environment with limited traditional ward teaching time.

We believe that these changes require us to redefine what it means to be an academic hospitalist, both for existing faculty and for prospective faculty whose views of academic hospital medicine may have been shaped by role models seen only in their clinical teaching role.

 

 

MAXIMIZING RESIDENT TEACHING OPPORTUNITIES

Is reduced teaching time the new normal or will the pendulum swing back toward more resident teaching time for academic hospitalists? The former is likely the case. None of the current trends in medical education point to an expansion of residents in the inpatient setting. Although there may be some opportunities to assume general medicine attending time is currently covered by primary care physicians and subspecialists, in several programs, hospitalists already cover the overwhelming majority of general medicine teaching services.

Although there may be occasional opportunities for academic hospitalist programs to develop new teaching roles with residents or fellows (for example, by expanding to community sites with residency programs or to subspecialty teaching services, or by creating hospital medicine fellowships and resident or student electives), the reality is that we as hospitalists will need to adapt to direct care as the plurality of our work.

ALLOCATING TEACHING TIME

How should we allocate traditional teaching time among our faculty? Since it is a coveted—but relatively scarce—resource, teaching time should be allocated thoughtfully. Based on our collective experience, academic hospitalist groups have taken a variety of approaches to this challenge, including forming separate clinical groups at the same institution (a teaching faculty group and a nonteaching group),6 requiring all hospitalists to do some amount of direct care to facilitate distribution of teaching time or having merit or seniority-based teaching time allocation (based on teaching evaluations, formal teaching roles such as program director status, or years on faculty).

Each approach to assigning teaching time has its challenges. Hospitalist leaders must manage these issues through transparency about the selection process for teaching rotations and open discussion of teaching evaluations with faculty. It is also critical that the recruitment process set appropriate expectations for faculty candidates. Highlighting academic opportunities outside of teaching residents, including leadership roles, quality improvement work, and research, may encourage applicants and current hospitalists to explore more varied career trajectories. Hospitalists focusing on these other paths may elect to have less teaching time, freeing up opportunities for dedicated clinician educators.

BEYOND TRADITIONAL RESIDENT TEACHING TEAMS

What other ward-based teaching opportunities might be available for academic hospitalists who do not have the opportunity to attend on traditional resident teaching teams? As supervisory requirements for residents have been strengthened, expansion of teaching into the evening and overnight hours to supervise new admissions to the teaching services has been one approach to augment teaching footprints.7,8

In addition, nontraditional teaching teams such as attending/intern teams (without a supervising resident) or attending/subintern (fourth-year medical student) teams have been developed at some institutions.9 Although allowing for additional exposure to learners, these models require a more hands-on approach than traditional teaching teams, particularly at the start of the academic year. Finally, as hospitalist teams have grown to include advanced practice providers (APPs), some programs have established formal teaching programs to address professional development needs of these healthcare professionals.10,11

DEVELOPING HOSPITALIST EDUCATORS

How do we help junior faculty who have the potential to be talented educators succeed in teaching when they have limited opportunities to engage with residents on clinical services? One approach is to encourage hospitalists to participate in resident didactic sessions such as “morning report” and noon conference. Another approach is to focus on teaching other learners. For example, several academic medical centers provide opportunities for hospitalists to engage in student teaching, either on the wards or via classroom instruction. In addition, as mentioned previously, APPs who are new to hospital medicine are an engaged audience and represent an opportunity for hospitalist educators to utilize and hone their teaching skills. Finally, organizing lectures for nursing colleagues is another way for the faculty to practice “chalk talks” and develop teaching portfolios.

Hospitalists can also leverage their expertise to build systems in which academic hospitalists are teaching each other, creating a culture of continuous learning. These activities may include case conferences, morbidity and mortality conferences, journal clubs, clinical topic updates developed by and for hospitalists, simulation exercises, and other group learning sessions. Giving hospitalists the opportunity to teach each other allows for professional growth that is not dependent on the presence of traditional learners.

REDEFINING ACADEMIC HOSPITALISTS

Philosophically, a key question is “What makes ‘academic’ academic?” Traditionally, academic hospitalist positions were synonymous with resident teaching or, for a small number of academic hospitalists, significant funded research. In an era where teaching residents may no longer be part of the job description for many hospitalists at academic medical centers, what distinguishes these positions from 100% clinical positions and what are the implications for academic hospital medicine?

Although data regarding why hospitalists seek “nonteaching” positions at academic medical centers are lacking, we believe that these jobs remain popular due to opportunities that are perceived to be unique to academic medical centers. These include more flexible scheduling (academic programs may be less likely to have seven-on/seven-off schedules), exposure to research and cutting-edge technology, opportunities to care for tertiary and quaternary care patients, collaboration with academic peers and experts in the field, and interaction with a range of learners, including medical, pharmacy, advanced practitioner, and other students.

Understanding the motivation of candidates who apply for academic hospital medicine positions—aside from supervising/teaching residents—will be an important goal for academic hospitalist leaders to ensure future success in staffing growing programs and creating sustainable academic hospitalist careers. As resident teaching time decreases, implementing robust faculty or professional development programs to address the broader interests and needs of academic hospitalist faculty will be increasingly important. Sehgal et al. described one such program for faculty development,12 and a more recent paper outlines a faculty development program focused on quality improvement and patient safety.13 These types of programs provide opportunities for academic hospitalists to engage in academic pursuits that are independent of residency programs.

CONCLUSION

Therefore, what do we tell the eager faculty applicant? First, we should not hide from the honest answer, ie, new faculty may not get as much resident teaching time as they would like or expect. Although we want hospitalists to maintain integral involvement in residency training programs, we also want to build a culture of clinical excellence, scholarship, and continuous learning that is not dependent on directly teaching residents. We should highlight the unique opportunities of academic hospital medicine, including teaching other learners, caring for tertiary/quaternary care patients, working with colleagues who are leaders in their field, and engaging in research and quality improvement work. By capitalizing on these opportunities, we can actively redefine what makes “academic” academic and ensure that we sustain academic hospital medicine as a desirable and rewarding career.

Disclosures

The authors have nothing to disclose.

References

1. Wachter RM, Goldman L. Zero to 50,000-the 20th anniversary of the hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/NEJMp1607958.
2. Saint S, Flanders SA. Hospitalists in teaching hospitals: opportunities but not without danger. J Gen Intern Med. 2004;19(4):392-393. https://doi.org/10.1111/j.1525-1497.2004.42002.x.
3. Flanders SA, Saint S, McMahon LF, Howell JD. Where should hospitalists sit within the academic medical center? J Gen Intern Med. 2008;23(8):1269-1272. https://doi.org/10.1007/s11606-008-0682-1.
4. 5 hospital projects announced this year worth $1B or more. ASC Communications; 2019. https://www.beckershospitalreview.com/facilities-management/5-hospital-projects-announced-this-year-worth-1b-or-more.html. Accessed August 24, 2019.
5. White A, Anders J, Anoff DL, Creamer J, Flores LA. Table 3.45: Distribution of work in academic HMGs. 2018 State of Hospital Medicine Report. Philadelphia, PA: Society of Hospital Medicine; 2018.
6. Hunt D, Burger A, Harrison R, Southern W, Boonyasai RT, Leykum L. Hospitalist staffing: to split or not to split? SGIM Forum. 2013;36:6.
7. Farnan JM, Burger A, Boonyasai RT, et al. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7(7):521-523. https://doi.org/10.1002/jhm.1961.
8. Sani SN, Wistar E, Le L, Chia D, Haber LA. Shining a light on overnight education: hospitalist and resident impressions of the current state, barriers, and methods for improvement. Cureus. 2018;10:e2939. https://doi.org/10.7759/cureus.2939.
9. O’Leary KJ, Chadha V, Fleming VM, Martin GJ, Baker DW. Medical subinternship: student experience on a resident uncovered hospitalist service. Teach Learn Med. 2008;20(1):18-21. https://doi.org/10.1080/10401330701797974.
10. Klimpl D, Franco T, Tackett S, et al. The current state of advanced practice provider fellowships in hospital medicine: a survey of program directors. J Hosp Med. 2019;14(7):401-406. https://doi.org/10.12788/jhm.3191.
11. Lackner C, Eid S, Panek T, Kisuule F. An advanced practice provider clinical fellowship as a pipeline to staffing a hospitalist program. J Hosp Med. 2019;14(6):336-339. https://doi.org/10.12788/jhm.3183.
12. Sehgal NL, Sharpe BA, Auerbach AA, et al. Investing in the future: building an academic hospitalist faculty development program. J Hosp Med. 2011;6(3):161-166. https://doi.org/10.1002/jhm.845.
13. van Schaik SM, Chang A, Fogh S, et al. Jumpstarting faculty development in quality improvement and patient safety education: a team-based approach. Acad Med. 2019.

Issue
Journal of Hospital Medicine 15(10):622-624. Published Online First February 19, 2020

© 2020 Society of Hospital Medicine

Correspondence
Carrie Herzke, MD, MBA; E-mail: cherzke1@jhmi.edu; Telephone: (443) 287-3631

Prediction of Disposition Within 48 Hours of Hospital Admission Using Patient Mobility Scores

Article Type
Changed
Thu, 04/22/2021 - 15:13

The loss of mobility during hospitalization is common and is an important reason why more than 40% of hospitalized Medicare patients require placement in a postacute facility.1,2 Discharge planning may be delayed when the medical team focuses on managing acute medical issues without recognizing a patient’s rehabilitation needs until near the time of discharge.3 For patients who require rehabilitation in a postacute facility, delays in discharge can exacerbate hospital-acquired mobility loss and prolong functional recovery.2,4 In addition, even small increases in length of stay have substantial financial impact.5 Increased efficiency in the discharge process has the potential to reduce healthcare costs, facilitate patient recovery, and reduce delays for new admissions awaiting beds.6 For effective discharge planning, a proactive, patient-centered, interdisciplinary approach that considers patient mobility status is needed.3

Systematic measurement of patient mobility that extends beyond evaluations by physical therapists is not common practice, but has the potential to facilitate early discharge planning.7,8 At our hospital, mobility is assessed routinely throughout the patient’s entire hospitalization using a reliable and valid interdisciplinary measure.9 We recently showed that nurse-recorded mobility status within the first 24 hours of hospitalization was associated with discharge disposition,7 but a prediction tool to aid clinicians in the discharge planning process would be more useful. In this study, we evaluated the predictive ability of a patient’s mobility score, obtained within 48 hours of hospital admission, to identify the need for postacute care in a diverse patient population.

METHODS

After receiving approval from the Johns Hopkins Institutional Review Board, we conducted analyses on a retrospective cohort of 821 admissions (777 unique patients admitted between January 1, 2017 and August 25, 2017) who were hospitalized for ≥72 hours on two inpatient units (medical and neurological/neurosurgical) at The Johns Hopkins Hospital (JHH). These units were chosen to reduce the potential for both selection and measurement bias. First, these units manage a diverse patient population that is likely to generalize to a general hospital population. Second, the nursing staff on these units has the most accurate and consistent documentation compliance for our predictor variable.

Mobility Measure

The Activity Measure for Post Acute Care Inpatient Mobility Short Form (AM-PAC IMSF) is a measure of functional capacity. This short form is widely used and is nicknamed “6 clicks.” It has questions for six mobility tasks, and each question is scored on a four-point Likert scale.9 Patients do not have to attempt the tasks to be scored. Clinicians can score items using clinical judgement based on observation or discussion with the patient, family, or other clinicians. The interrater reliability is very good (Intraclass Correlation Coefficient = .85-.99)9 and construct validity has been demonstrated for the inpatient hospital population (AM-PAC IMSF correlations with: functional independence measure [FIM] = .65; Katz activities of daily living [ADL] = .80; 2-minute walk = .73; 5-times sit-to-stand = −.69).9 At JHH, the AM-PAC IMSF is scored at admission by nursing staff (>90% documentation compliance on the units in this study); these admission scores were used.


Outcome and Predictors

Discharge location (postacute care facility vs home) was the primary outcome in this study, as recorded in a discrete field in the electronic medical record (EMR). To ensure the validity of this measure, we performed manual chart audits on a sample of patients (n = 300). It was confirmed that the measure entered in the discrete field in the EMR correctly identified the disposition (home vs postacute care facility) in all cases. The primary predictor was the lowest AM-PAC IMSF score obtained within 48 hours after hospital admission, reflecting the patient’s capability to mobilize after hospital admission. Raw scores were converted to scale scores (0-100) for analysis.9 Additional predictors considered included: age, sex, race, and primary diagnosis, all of which were readily available from the EMR at the time of hospital admission. We then grouped the primary diagnosis into the following categories using ICD-10 codes upon admission: Oncologic, Progressive Neurological, Sudden Onset Neurological, and Medical/Other.
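The text does not report which ICD-10 codes were assigned to each of the four diagnosis categories. The R sketch below shows one way such a grouping could be implemented; the specific prefixes (C00-D49 for neoplasms, I6x and S06 for sudden-onset neurological events, and a short list of progressive neurological codes) are illustrative assumptions, not the authors' mapping.

```r
## Illustrative grouping only -- the authors' actual ICD-10 mapping is not reported.
group_diagnosis <- function(icd10) {
  progressive <- c("G12", "G20", "G35")  # assumed examples: ALS, Parkinson disease, MS
  if (grepl("^C|^D[0-4]", icd10)) {                # neoplasm chapter (C00-D49)
    "Oncologic"
  } else if (grepl("^I6|^S06", icd10)) {           # cerebrovascular event or head injury
    "Sudden Onset Neurological"
  } else if (substr(icd10, 1, 3) %in% progressive) {
    "Progressive Neurological"
  } else {
    "Medical/Other"
  }
}

sapply(c("C34.1", "I63.9", "G20", "K52.9"), group_diagnosis)
#> returns "Oncologic", "Sudden Onset Neurological", "Progressive Neurological", "Medical/Other"
```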

Statistical Analysis

We constructed a classification tree, a machine learning approach,10 to predict discharge placement (postacute facility vs home) based on patients’ hospital admission characteristics and AM-PAC IMSF score. We chose a classification tree rather than a logistic regression model because the tree approach accommodates higher-order interactions (ie, interactions of more than two predictors) that would otherwise need to be specified explicitly, and a priori we lacked strong evidence from prior studies to guide model construction. To construct and evaluate the tree, we divided our sample into a 70% training set and a 30% validation set using random sampling within key strata defined by age (<65 vs ≥65 years), gender, and quartile of the AM-PAC IMSF score. The classification tree was developed using the training set. We then estimated predictive accuracy by applying the validation set to the tree: sensitivity (the proportion of patients discharged to a postacute facility who were correctly classified) and specificity (the proportion of patients discharged home who were correctly classified). The tree was constructed with the rpart procedure in the R package rpart,11 using standard criteria for growing (Gini index10) and pruning (misclassification error estimated by leave-one-out cross-validation12) the tree.
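To make these steps concrete, the following R sketch mirrors the workflow described above: a 70/30 split, tree growth with rpart using the Gini index, pruning on cross-validated misclassification error, and validation-set sensitivity and specificity. It is a minimal illustration under stated assumptions, not the authors' code; the simulated cohort, variable names, and the simple (unstratified) random split are all assumptions.

```r
## Minimal sketch of the described workflow -- not the authors' code.
library(rpart)

set.seed(1)
n <- 805
ampac <- round(runif(n, 20, 60))                  # AM-PAC IMSF scale score
age   <- round(runif(n, 30, 90))
p_postacute <- plogis(6 - 0.18 * ampac + 0.02 * (age - 60))  # arbitrary simulated signal
dat <- data.frame(
  disposition = factor(ifelse(runif(n) < p_postacute, "postacute", "home")),
  ampac = ampac,
  age   = age,
  sex   = factor(sample(c("F", "M"), n, replace = TRUE)),
  dx    = factor(sample(c("Oncologic", "Progressive Neurological",
                          "Sudden Onset Neurological", "Medical/Other"),
                        n, replace = TRUE))
)

## 70% training / 30% validation (the study stratified this split by age,
## gender, and AM-PAC quartile; simple random sampling is used here)
train_idx <- sample(seq_len(n), size = round(0.7 * n))
train <- dat[train_idx, ]
valid <- dat[-train_idx, ]

## Grow the tree with the Gini index; xval = nrow(train) approximates
## leave-one-out cross-validation for the pruning step
fit <- rpart(disposition ~ ampac + age + sex + dx, data = train,
             method = "class", parms = list(split = "gini"),
             control = rpart.control(xval = nrow(train)))
cp_best <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pruned  <- prune(fit, cp = cp_best)

## Validation-set sensitivity and specificity for predicting postacute discharge
pred <- predict(pruned, newdata = valid, type = "class")
tab  <- table(truth = valid$disposition, pred = pred)
sensitivity <- tab["postacute", "postacute"] / sum(tab["postacute", ])
specificity <- tab["home", "home"] / sum(tab["home", ])
c(sensitivity = sensitivity, specificity = specificity)
```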

RESULTS

Among the 821 admissions, 16 of 777 patients (2%) died. Given the small number of deaths, we excluded these patients from the analysis. The table describes the characteristics of the 761 unique patients during each of their 805 admissions included in the analysis. Of these, 312 (39%) were discharged to a postacute facility. Compared with patients discharged to home, patients discharged to a postacute facility were older (median, 64 vs 56 years), more likely to be admitted for a condition with sudden onset (eg, stroke, 36% vs 30%), had lower AM-PAC IMSF scores at hospital admission (median, 32 vs 41), and longer lengths of stay (median, 8 vs 6 days). The figure displays the classification tree derived from the training set and the hospital-admission characteristics described above, including the AM-PAC IMSF scores. The classification tree identified four distinct subsets of patients with the corresponding predicted discharge locations: (1) patients with AM-PAC IMSF scores ≥39: discharged home, (2) patients with AM-PAC IMSF scores ≥31 and <39 and who are <65 years of age: discharged home, (3) patients with AM-PAC IMSF scores ≥31 and <39 and who are ≥65 years of age: discharged to a postacute facility, and (4) patients with AM-PAC IMSF scores <31: discharged to a postacute facility. After applying this tree to the validation set, the specificity was 84% (95% CI: 78%-90%) and sensitivity was 58% (95% CI: 49%-68%) for predicting discharge to a postacute facility, with an overall correct classification of 73% (95% CI: 67%-78%) of the discharge locations.
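Written out as code, the four terminal rules reported above reduce to a two-variable classifier. This is only a restatement of the published thresholds (AM-PAC IMSF scores of 39 and 31, and age 65 years); the function name is hypothetical.

```r
## Restatement of the four reported rules; function name is hypothetical.
predict_disposition <- function(ampac_score, age) {
  if (ampac_score >= 39) {
    "home"                                            # subset 1
  } else if (ampac_score >= 31) {
    if (age < 65) "home" else "postacute facility"    # subsets 2 and 3
  } else {
    "postacute facility"                              # subset 4
  }
}

predict_disposition(ampac_score = 35, age = 72)  # "postacute facility"
```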

DISCUSSION

Improving the efficiency of hospital discharge planning is of great interest to hospital-based clinicians and administrators. Identifying patients early who are likely to need placement in a postacute facility is an important first step. Using a reliable and valid nursing assessment tool of patient mobility to help with discharge planning is an attractive and feasible approach. The literature on predicting disposition is very limited and has focused primarily on patients with stroke or joint replacement.13,14 Previously, we used the same measure of mobility within 24 hours of admission to show an association with discharge disposition.7 Here, we expanded upon that prior research to include mobility assessment within a 48-hour window from admission in a diverse patient population. Using a machine learning approach, we were able to predict 73% of hospital discharges correctly using only the patient’s mobility score and age. Having tools such as this simple decision tree to identify discharge locations early in a patient’s hospitalization has the potential to increase efficiency in the discharge planning process.

Despite being able to classify the discharge disposition correctly for most patients, our sensitivity for predicting postacute care need was low. Other patient and system factors that could be collected near the time of hospital admission, such as the patient’s prior level of function, the difference between baseline and admission function, prior living situation (eg, long-term care, home environment), social support, and hospital relationships with postacute care facilities, may help to improve the prediction of postacute care placement.15 We recommend that future research consider these and other potentially important predictors. However, the specificity was high enough that all patients the tool classifies as needing postacute care merit evaluation for possible placement. While our patient sample was diverse, it did not focus on some patients who may be more likely to be discharged to a postacute facility, such as the geriatric population; this is a potential limitation, and the tool will need to be tested in additional patient groups. A final limitation is the grouping of all types of postacute care into one category, since important differences exist between the care provided at skilled nursing facilities (with or without rehabilitation) and at inpatient acute rehabilitation facilities. Despite these limitations, this study emphasizes the value of systematic mobility assessment and provides a simple decision tree to help providers begin discharge planning early by anticipating patient rehabilitation needs.

Acknowledgments

The authors thank Christina Lin, MD and Sophia Andrews, PT, DPT for their assistance with data validation.

References

1. Greysen SR, Patel MS. Annals for hospitalists inpatient notes-bedrest is toxic—why mobility matters in the hospital. Ann Intern Med. 2018;169(2):HO2-HO3. https://doi.org/10.7326/M18-1427.
2. Greysen SR, Stijacic Cenzer I, Boscardin WJ, Covinsky KE. Functional impairment: an unmeasured marker of Medicare costs for postacute care of older adults. J Am Geriatr Soc. 2017;65(9):1996-2002. https://doi.org/10.1111/jgs.14955.
3. Wong EL, Yam CH, Cheung AW, et al. Barriers to effective discharge planning: a qualitative study investigating the perspectives of frontline healthcare professionals. BMC Health Serv Res. 2011;11(1):242. https://doi.org/10.1186/1472-6963-11-242.
4. Greysen HM, Greysen SR. Mobility assessment in the hospital: what are the “next steps”? J Hosp Med. 2017;12(6):477-478. https://doi.org/10.12788/jhm.2759.
5. Lord RK, Mayhew CR, Korupolu R, et al. ICU early physical rehabilitation programs: financial modeling of cost savings. Crit Care Med. 2013;41(3):717-724. https://doi.org/10.1097/CCM.0b013e3182711de2.
6. McDonagh MS, Smith DH, Goddard M. Measuring appropriate use of acute beds: a systematic review of methods and results. Health Policy. 2000;53(3):157-184. https://doi.org/10.1016/S0168-8510(00)00092-0.
7. Hoyer EH, Young DL, Friedman LA, et al. Routine inpatient mobility assessment and hospital discharge planning. JAMA Intern Med. 2019;179(1):118-120. https://doi.org/10.1001/jamainternmed.2018.5145.
8. Brown CJ, Redden DT, Flood KL, Allman RM. The underrecognized epidemic of low mobility during hospitalization of older adults. J Am Geriatr Soc. 2009;57(9):1660-1665. https://doi.org/10.1111/j.1532-5415.2009.02393.x.
9. Hoyer EH, Young DL, Klein LM, et al. Toward a common language for measuring patient mobility in the hospital: reliability and construct validity of interprofessional mobility measures. Phys Ther. 2018;98(2):133-142. https://doi.org/10.1093/ptj/pzx110.
10. Breiman L, Friedman J, Olshen R, Stone C. Classification and Regression Trees. Belmont, CA: Wadsworth; 1984.
11. Therneau T, Atkinson B. rpart: Recursive Partitioning and Regression Trees. R package version 4.1-13; 2018. https://CRAN.R-project.org/package=rpart.
12. Friedman J, Hastie T, Tibshirani R. The Elements of Statistical Learning. New York, NY: Springer; 2001.
13. Stein J, Bettger JP, Sicklick A, Hedeman R, Magdon-Ismail Z, Schwamm LH. Use of a standardized assessment to predict rehabilitation care after acute stroke. Arch Phys Med Rehabil. 2015;96(2):210-217. https://doi.org/10.1016/j.apmr.2014.07.403.
14. Gholson JJ, Pugely AJ, Bedard NA, Duchman KR, Anthony CA, Callaghan JJ. Can we predict discharge status after total joint arthroplasty? A calculator to predict home discharge. J Arthroplasty. 2016;31(12):2705-2709. https://doi.org/10.1016/j.arth.2016.08.010.
15. Zimmermann BM, Koné I, Rost M, Leu A, Wangmo T, Elger BS. Factors associated with post-acute discharge location after hospital stay: a cross-sectional study from a Swiss hospital. BMC Health Serv Res. 2019;19(1):289. https://doi.org/10.1186/s12913-019-4101-6.

Author and Disclosure Information

1Department of Physical Medicine and Rehabilitation, Johns Hopkins University, Baltimore, Maryland; 2Department of Physical Therapy, University of Nevada Las Vegas, Las Vegas, Nevada; 3Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland; 4Division of Pulmonary and Critical Care Medicine, School of Medicine, Johns Hopkins University, Baltimore, Maryland; 5Division of General Internal Medicine, Johns Hopkins University, Baltimore, Maryland.

Disclosures

We certify that no party having a direct interest in the results of the research supporting this article has or will confer a benefit on us or on any organization with which we are associated.

Issue
Journal of Hospital Medicine 15(9):540-543. Published Online First December 18, 2019

The loss of mobility during hospitalization is common and is an important reason why more than 40% of hospitalized Medicare patients require placement in a postacute facility.1,2 Discharge planning may be delayed when the medical team focuses on managing acute medical issues without recognizing a patient’s rehabilitation needs until near the time of discharge.3 For patients who require rehabilitation in a postacute facility, delays in discharge can exacerbate hospital-acquired mobility loss and prolong functional recovery.2,4 In addition, even small increases in length of stay have substantial financial impact.5 Increased efficiency in the discharge process has the potential to reduce healthcare costs, facilitate patient recovery, and reduce delays for new admissions awaiting beds.6 For effective discharge planning, a proactive, patient-centered, interdisciplinary approach that considers patient mobility status is needed.3

Systematic measurement of patient mobility that extends beyond evaluations by physical therapists is not common practice, but has the potential to facilitate early discharge planning.7,8 At our hospital, mobility assessment is performed routinely using a reliable and valid interdisciplinary assessment of mobility throughout the patient’s entire hospitalization.9 We recently showed that nurse-recorded mobility status within the first 24 hours of hospitalization was associated with discharge disposition,7 but a prediction tool to help aid clinicians in the discharge planning process would be more useful. In this study, we evaluated the predictive ability of a patient’s mobility score, obtained within 48 hours of hospital admission, to identify the need for postacute care in a diverse patient population.

METHODS

After receiving approval from the Johns Hopkins Institutional Review Board, we conducted analyses on a retrospective cohort of 821 admissions (777 unique patients admitted between January 1, 2017 and August 25, 2017) who were hospitalized for ≥72 hours on two inpatient units (medical and neurological/neurosurgical) at The Johns Hopkins Hospital (JHH). These units were chosen to reduce the potential for both selection and measurement bias. First, these units manage a diverse patient population that is likely to generalize to a general hospital population. Second, the nursing staff on these units has the most accurate and consistent documentation compliance for our predictor variable.

Mobility Measure

The Activity Measure for Post Acute Care Inpatient Mobility Short Form (AM-PAC IMSF) is a measure of functional capacity. This short form is widely used and is nicknamed “6 clicks.” It has questions for six mobility tasks, and each question is scored on a four-point Likert scale.9 Patients do not have to attempt the tasks to be scored. Clinicians can score items using clinical judgement based on observation or discussion with the patient, family, or other clinicians. The interrater reliability is very good (Intraclass Correlation Coefficient = .85-.99)9 and construct validity has been demonstrated for the inpatient hospital population (AM-PAC IMSF correlations with: functional independence measure [FIM] = .65; Katz activities of daily living [ADL] = .80; 2-minute walk = .73; 5-times sit-to-stand = −.69).9 At JHH, the AM-PAC IMSF is scored at admission by nursing staff (>90% documentation compliance on the units in this study); these admission scores were used.

 

 

Outcome and Predictors

Discharge location (postacute care facility vs home) was the primary outcome in this study, as recorded in a discrete field in the electronic medical record (EMR). To ensure the validity of this measure, we performed manual chart audits on a sample of patients (n = 300). It was confirmed that the measure entered in the discrete field in the EMR correctly identified the disposition (home vs postacute care facility) in all cases. The primary predictor was the lowest AM-PAC IMSF score obtained within 48 hours after hospital admission, reflecting the patient’s capability to mobilize after hospital admission. Raw scores were converted to scale scores (0-100) for analysis.9 Additional predictors considered included: age, sex, race, and primary diagnosis, all of which were readily available from the EMR at the time of hospital admission. We then grouped the primary diagnosis into the following categories using ICD-10 codes upon admission: Oncologic, Progressive Neurological, Sudden Onset Neurological, and Medical/Other.

Statistical Analysis

We constructed a classification tree, a machine learning approach,10 to predict discharge placement (postacute facility vs home) based on the patients’ hospital admission characteristics and AM-PAC IMSF score. The prediction model was developed using the classification tree approach, as opposed to a logistic regression model. This approach allows for the inclusion of higher-order interactions (ie, interactions of more than two predictors) which would need to be explicitly specified otherwise and a priori we did not have strong evidence from prior studies to guide the model construction. The classification tree was constructed and evaluated by dividing our sample into a 70% training set and a 30% validation set using random sampling within key strata defined by age (<65 vs ≥65 years), gender, and quartile of the AM-PAC IMSF score. The classification tree was developed using the training set. Next, measures of predictive accuracy (ie, the proportion of correctly classified patients with placement in a postacute facility [sensitivity]) and the proportion of correctly classified patients not discharged to postacute care (ie, to home, specificity), were estimated by applying the validation set to the classification tree. The R statistical package rpart11 with procedure rpart was used to construct the classification tree using standard criteria for growing (Gini index10) and pruning (misclassification error estimated by leave-1-out cross-validation12) the tree.

RESULTS

young04751218e_f1.jpg
Among the 821 admissions, 16 of 777 patients (2%) died. Given the small number of deaths, we excluded these patients from the analysis. The table describes the characteristics of the 761 unique patients during each of their 805 admissions included in the analysis. Of these, 312 (39%) were discharged to a postacute facility. Compared with patients discharged to home, patients discharged to a postacute facility were older (median, 64 vs 56 years), more likely to be admitted for a condition with sudden onset (eg, stroke, 36% vs 30%), had lower AM-PAC IMSF scores at hospital admission (median, 32 vs 41), and longer lengths of stay (median, 8 vs 6 days). The figure displays the classification tree derived from the training set and the hospital-admission characteristics described above, including the AM-PAC IMSF scores. The classification tree identified four distinct subsets of patients with the corresponding predicted discharge locations: (1) patients with AM-PAC IMSF scores ≥39: discharged home, (2) patients with AM-PAC IMSF scores ≥31 and <39 and who are <65 years of age: discharged home, (3) patients with AM-PAC IMSF scores ≥31 and <39 and who are ≥65 years of age: discharged to a postacute facility, and (4) patients with AM-PAC IMSF scores <31: discharged to a postacute facility. After applying this tree to the validation set, the specificity was 84% (95% CI: 78%-90%) and sensitivity was 58% (95% CI: 49%-68%) for predicting discharge to a postacute facility, with an overall correct classification of 73% (95% CI: 67%-78%) of the discharge locations.
young04751218e_t1.jpg

 

 

DISCUSSION

Improving the efficiency of hospital discharge planning is of great interest to hospital-based clinicians and administrators. Identifying patients early who are likely to need placement in a postacute facility is an important first step. Using a reliable and valid nursing assessment tool of patient mobility to help with discharge planning is an attractive and feasible approach. The literature on predicting disposition is very limited and has focused primarily on patients with stroke or joint replacement.13,14 Previously, we used the same measure of mobility within 24 hours of admission to show an association with discharge disposition.7 Here, we expanded upon that prior research to include mobility assessment within a 48-hour window from admission in a diverse patient population. Using a machine learning approach, we were able to predict 73% of hospital discharges correctly using only the patient’s mobility score and age. Having tools such as this simple decision tree to identify discharge locations early in a patient’s hospitalization has the potential to increase efficiency in the discharge planning process.

Despite being able to classify the discharge disposition correctly for most patients, our sensitivity for predicting postacute care need was low. There are likely other patient and system factors that could be collected near the time of hospital admission, such as the patient’s prior level of function, the difference between function at baseline and admission, their prior living situation (eg, long term care, home environment), social support, and hospital relationships with postacute care facilities that may help to improve the prediction of postacute care placement.15 We recommend that future research consider these and other potentially important predictors. However, the specificity was high enough that all patients who score positive merit evaluation for possible postacute care. While our patient sample was diverse, it did not focus on some patients who may be more likely to be discharged to a postacute facility, such as the geriatric population. This may be a potential limitation to our study and will require this tool to be tested in more patient groups. A final limitation is the grouping of all potential types of postacute care into one category since important differences exist between the care provided at skilled nursing facilities with or without rehabilitation and inpatient acute rehabilitation. Despite these limitations, this study emphasizes the value of a systematic mobility assessment and provides a simple decision tree to help providers begin early discharge planning by anticipating patient rehabilitation needs.

Acknowledgments

The authors thank Christina Lin, MD and Sophia Andrews, PT, DPT for their assistance with data validation.

The loss of mobility during hospitalization is common and is an important reason why more than 40% of hospitalized Medicare patients require placement in a postacute facility.1,2 Discharge planning may be delayed when the medical team focuses on managing acute medical issues without recognizing a patient’s rehabilitation needs until near the time of discharge.3 For patients who require rehabilitation in a postacute facility, delays in discharge can exacerbate hospital-acquired mobility loss and prolong functional recovery.2,4 In addition, even small increases in length of stay have substantial financial impact.5 Increased efficiency in the discharge process has the potential to reduce healthcare costs, facilitate patient recovery, and reduce delays for new admissions awaiting beds.6 For effective discharge planning, a proactive, patient-centered, interdisciplinary approach that considers patient mobility status is needed.3

Systematic measurement of patient mobility that extends beyond evaluations by physical therapists is not common practice, but has the potential to facilitate early discharge planning.7,8 At our hospital, mobility assessment is performed routinely using a reliable and valid interdisciplinary assessment of mobility throughout the patient’s entire hospitalization.9 We recently showed that nurse-recorded mobility status within the first 24 hours of hospitalization was associated with discharge disposition,7 but a prediction tool to help aid clinicians in the discharge planning process would be more useful. In this study, we evaluated the predictive ability of a patient’s mobility score, obtained within 48 hours of hospital admission, to identify the need for postacute care in a diverse patient population.

METHODS

After receiving approval from the Johns Hopkins Institutional Review Board, we conducted analyses on a retrospective cohort of 821 admissions (777 unique patients admitted between January 1, 2017 and August 25, 2017) who were hospitalized for ≥72 hours on two inpatient units (medical and neurological/neurosurgical) at The Johns Hopkins Hospital (JHH). These units were chosen to reduce the potential for both selection and measurement bias. First, these units manage a diverse patient population that is likely to generalize to a general hospital population. Second, the nursing staff on these units has the most accurate and consistent documentation compliance for our predictor variable.

Mobility Measure

The Activity Measure for Post Acute Care Inpatient Mobility Short Form (AM-PAC IMSF) is a measure of functional capacity. This short form is widely used and is nicknamed “6 clicks.” It has questions for six mobility tasks, and each question is scored on a four-point Likert scale.9 Patients do not have to attempt the tasks to be scored. Clinicians can score items using clinical judgement based on observation or discussion with the patient, family, or other clinicians. The interrater reliability is very good (Intraclass Correlation Coefficient = .85-.99)9 and construct validity has been demonstrated for the inpatient hospital population (AM-PAC IMSF correlations with: functional independence measure [FIM] = .65; Katz activities of daily living [ADL] = .80; 2-minute walk = .73; 5-times sit-to-stand = −.69).9 At JHH, the AM-PAC IMSF is scored at admission by nursing staff (>90% documentation compliance on the units in this study); these admission scores were used.

 

 

Outcome and Predictors

Discharge location (postacute care facility vs home) was the primary outcome in this study, as recorded in a discrete field in the electronic medical record (EMR). To ensure the validity of this measure, we performed manual chart audits on a sample of patients (n = 300). It was confirmed that the measure entered in the discrete field in the EMR correctly identified the disposition (home vs postacute care facility) in all cases. The primary predictor was the lowest AM-PAC IMSF score obtained within 48 hours after hospital admission, reflecting the patient’s capability to mobilize after hospital admission. Raw scores were converted to scale scores (0-100) for analysis.9 Additional predictors considered included: age, sex, race, and primary diagnosis, all of which were readily available from the EMR at the time of hospital admission. We then grouped the primary diagnosis into the following categories using ICD-10 codes upon admission: Oncologic, Progressive Neurological, Sudden Onset Neurological, and Medical/Other.

Statistical Analysis

We constructed a classification tree, a machine learning approach,10 to predict discharge placement (postacute facility vs home) based on the patients’ hospital admission characteristics and AM-PAC IMSF score. The prediction model was developed using the classification tree approach, as opposed to a logistic regression model. This approach allows for the inclusion of higher-order interactions (ie, interactions of more than two predictors) which would need to be explicitly specified otherwise and a priori we did not have strong evidence from prior studies to guide the model construction. The classification tree was constructed and evaluated by dividing our sample into a 70% training set and a 30% validation set using random sampling within key strata defined by age (<65 vs ≥65 years), gender, and quartile of the AM-PAC IMSF score. The classification tree was developed using the training set. Next, measures of predictive accuracy (ie, the proportion of correctly classified patients with placement in a postacute facility [sensitivity]) and the proportion of correctly classified patients not discharged to postacute care (ie, to home, specificity), were estimated by applying the validation set to the classification tree. The R statistical package rpart11 with procedure rpart was used to construct the classification tree using standard criteria for growing (Gini index10) and pruning (misclassification error estimated by leave-1-out cross-validation12) the tree.

RESULTS

young04751218e_f1.jpg
Among the 821 admissions, 16 of 777 patients (2%) died. Given the small number of deaths, we excluded these patients from the analysis. The table describes the characteristics of the 761 unique patients during each of their 805 admissions included in the analysis. Of these, 312 (39%) were discharged to a postacute facility. Compared with patients discharged to home, patients discharged to a postacute facility were older (median, 64 vs 56 years), more likely to be admitted for a condition with sudden onset (eg, stroke, 36% vs 30%), had lower AM-PAC IMSF scores at hospital admission (median, 32 vs 41), and longer lengths of stay (median, 8 vs 6 days). The figure displays the classification tree derived from the training set and the hospital-admission characteristics described above, including the AM-PAC IMSF scores. The classification tree identified four distinct subsets of patients with the corresponding predicted discharge locations: (1) patients with AM-PAC IMSF scores ≥39: discharged home, (2) patients with AM-PAC IMSF scores ≥31 and <39 and who are <65 years of age: discharged home, (3) patients with AM-PAC IMSF scores ≥31 and <39 and who are ≥65 years of age: discharged to a postacute facility, and (4) patients with AM-PAC IMSF scores <31: discharged to a postacute facility. After applying this tree to the validation set, the specificity was 84% (95% CI: 78%-90%) and sensitivity was 58% (95% CI: 49%-68%) for predicting discharge to a postacute facility, with an overall correct classification of 73% (95% CI: 67%-78%) of the discharge locations.
young04751218e_t1.jpg

 

 

DISCUSSION

Improving the efficiency of hospital discharge planning is of great interest to hospital-based clinicians and administrators. Identifying patients early who are likely to need placement in a postacute facility is an important first step. Using a reliable and valid nursing assessment tool of patient mobility to help with discharge planning is an attractive and feasible approach. The literature on predicting disposition is very limited and has focused primarily on patients with stroke or joint replacement.13,14 Previously, we used the same measure of mobility within 24 hours of admission to show an association with discharge disposition.7 Here, we expanded upon that prior research to include mobility assessment within a 48-hour window from admission in a diverse patient population. Using a machine learning approach, we were able to predict 73% of hospital discharges correctly using only the patient’s mobility score and age. Having tools such as this simple decision tree to identify discharge locations early in a patient’s hospitalization has the potential to increase efficiency in the discharge planning process.

Despite being able to classify the discharge disposition correctly for most patients, our sensitivity for predicting postacute care need was low. There are likely other patient and system factors that could be collected near the time of hospital admission, such as the patient’s prior level of function, the difference between function at baseline and admission, their prior living situation (eg, long term care, home environment), social support, and hospital relationships with postacute care facilities that may help to improve the prediction of postacute care placement.15 We recommend that future research consider these and other potentially important predictors. However, the specificity was high enough that all patients who score positive merit evaluation for possible postacute care. While our patient sample was diverse, it did not focus on some patients who may be more likely to be discharged to a postacute facility, such as the geriatric population. This may be a potential limitation to our study and will require this tool to be tested in more patient groups. A final limitation is the grouping of all potential types of postacute care into one category since important differences exist between the care provided at skilled nursing facilities with or without rehabilitation and inpatient acute rehabilitation. Despite these limitations, this study emphasizes the value of a systematic mobility assessment and provides a simple decision tree to help providers begin early discharge planning by anticipating patient rehabilitation needs.

Acknowledgments

The authors thank Christina Lin, MD and Sophia Andrews, PT, DPT for their assistance with data validation.

References

1. Greysen SR, Patel MS. Annals for hospitalists inpatient notes-bedrest is toxic—why mobility matters in the hospital. Ann Intern Med. 2018;169(2):HO2-HO3. https://doi.org/10.7326/M18-1427.
2. Greysen SR, Stijacic Cenzer I, Boscardin WJ, Covinsky KE. Functional impairment: an unmeasured marker of Medicare costs for postacute care of older adults. J Am Geriatr Soc. 2017;65(9):1996-2002. https://doi.org/10.1111/jgs.14955.
3. Wong EL, Yam CH, Cheung AW, et al. Barriers to effective discharge planning: a qualitative study investigating the perspectives of frontline healthcare professionals. BMC Health Serv Res. 2011;11(1):242. https://doi.org/10.1186/1472-6963-11-242.
4. Greysen HM, Greysen SR. Mobility assessment in the hospital: what are the “next steps”? J Hosp Med. 2017;12(6):477-478. https://doi.org/10.12788/jhm.2759.
5. Lord RK, Mayhew CR, Korupolu R, et al. ICU early physical rehabilitation programs: financial modeling of cost savings. Crit Care Med. 2013;41(3):717-724. https://doi.org/10.1097/CCM.0b013e3182711de2.
6. McDonagh MS, Smith DH, Goddard M. Measuring appropriate use of acute beds: a systematic review of methods and results. Health Policy. 2000;53(3):157-184. https://doi.org/10.1016/S0168-8510(00)00092-0.
7. Hoyer EH, Young DL, Friedman LA, et al. Routine inpatient mobility assessment and hospital discharge planning. JAMA Intern Med. 2019;179(1):118-120. https://doi.org/10.1001/jamainternmed.2018.5145.
8. Brown CJ, Redden DT, Flood KL, Allman RM. The underrecognized epidemic of low mobility during hospitalization of older adults. J Am Geriatr Soc. 2009;57(9):1660-1665. https://doi.org/10.1111/j.1532-5415.2009.02393.x.
9. Hoyer EH, Young DL, Klein LM, et al. Toward a common language for measuring patient mobility in the hospital: reliability and construct validity of interprofessional mobility measures. Phys Ther. 2018;98(2):133-142. https://doi.org/10.1093/ptj/pzx110.
10. Breiman L, Friedman J, Olshen R, Stone C. Classification and Regression Trees. Belmont, CA: Wadsworth; 1984.
11. Therneau T, Atkinson B. rpart: recursive partitioning and regression trees. R package version 4.1-13; 2018. https://CRAN.R-project.org/package=rpart.
12. Friedman J, Hastie T, Tibshirani R. The Elements of Statistical Learning. New York, NY: Springer; 2001.
13. Stein J, Bettger JP, Sicklick A, Hedeman R, Magdon-Ismail Z, Schwamm LH. Use of a standardized assessment to predict rehabilitation care after acute stroke. Arch Phys Med Rehabil. 2015;96(2):210-217. https://doi.org/10.1016/j.apmr.2014.07.403.
14. Gholson JJ, Pugely AJ, Bedard NA, Duchman KR, Anthony CA, Callaghan JJ. Can we predict discharge status after total joint arthroplasty? A calculator to predict home discharge. J Arthroplasty. 2016;31(12):2705-2709. https://doi.org/10.1016/j.arth.2016.08.010.
15. Zimmermann BM, Koné I, Rost M, Leu A, Wangmo T, Elger BS. Factors associated with post-acute discharge location after hospital stay: a cross-sectional study from a Swiss hospital. BMC Health Serv Res. 2019;19(1):289. https://doi.org/10.1186/s12913-019-4101-6.

Journal of Hospital Medicine 15(9):540-543. Published Online First December 18, 2019. © 2019 Society of Hospital Medicine

Correspondence: Erik H Hoyer, MD; E-mail: ehoyer1@jhmi.edu; Telephone: 410-502-2441; Twitter: @HopkinsAMP

Contemporary Rates of Preoperative Cardiac Testing Prior to Inpatient Hip Fracture Surgery

Article Type
Changed
Thu, 05/16/2019 - 22:21

Hip fracture is a common reason for unexpected, urgent inpatient surgery in older patients. In 2005, the incidence of hip fracture was 369.0 and 793.5 per 100,000 in men and women, respectively.1 These rates had declined over the preceding decade, potentially as a result of bisphosphonate use. Age- and risk-adjusted 30-day mortality rates for men and women in 2005 were approximately 10% and 5%, respectively.

Evidence suggests that timely surgical repair of hip fractures improves outcomes, although the optimal timing is controversial. Guidelines from the American College of Surgeons Committee on Trauma from 2015 recommend surgical intervention within 48 hours for geriatric hip fractures.2 A 2008 systematic review found that operative delay beyond 48 hours was associated with a 41% increase in 30-day all-cause mortality and a 32% increase in one-year all-cause mortality.3 Recent evidence suggests that the rate of complications begins to increase with delays beyond 24 hours.4

There has been a focus over the past decade on overuse of preoperative testing for low- and intermediate-risk surgeries.5-7 Beginning in 2012, the American Board of Internal Medicine (ABIM) Foundation initiated the Choosing Wisely® campaign, in which numerous specialty societies issued recommendations on reducing the use of various diagnostic tests, several of which focused on preoperative testing. Two groups, the American Society of Anesthesiologists (ASA) and the American Society of Echocardiography (ASE), issued specific recommendations on preoperative cardiac testing.8 In February 2013, the ASE recommended avoiding preoperative echocardiograms in patients without a history or symptoms of heart disease. In October 2013, the ASA recommended against transthoracic echocardiogram (TTE), transesophageal echocardiogram (TEE), or stress testing for low- or intermediate-risk noncardiac surgery in patients with stable cardiac disease.

Finally, in 2014, the American College of Cardiology (ACC)/American Heart Association (AHA) issued updated perioperative guidelines for patients undergoing noncardiac surgery.9 They recommended preoperative stress testing only in a small subset of cases: patients with an elevated perioperative risk of a major adverse cardiac event and poor or unknown functional capacity, and only when the results would change perioperative care.

Given the high cost of preoperative cardiac testing, the potential for delays in care that can adversely impact outcomes, and the recent recommendations, we sought to characterize the rates of inpatient preoperative cardiac testing prior to hip fracture surgery in recent years and to see whether recent recommendations to curb use of these tests were temporally associated with changing rates.

METHODS

Overview

We utilized two datasets—the Healthcare Cost and Utilization Project (HCUP) State Inpatient Databases (SID) and the American Hospital Association (AHA) Annual Survey—to characterize preoperative cardiac testing. SID data from Maryland, New Jersey, and Washington State from 2011 through September 2015 were used (the ICD coding system changed from ICD-9 to ICD-10 on October 1, 2015). This was combined with AHA data for these years. We included all hospitalizations with a primary ICD-9 procedure code for hip fracture repair (78.55, 78.65, 79.05, 79.15, 79.25, 79.35, 79.45, 79.55, 79.65, 79.75, 79.85, and 79.95). We excluded all observations that involved an interhospital transfer. This study was exempt from institutional review board approval.
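
As an illustration of the cohort construction, the inclusion and exclusion steps amount to a filter over discharge records on the primary procedure code and the transfer flag. The column names in this Python sketch (primary_procedure, interhospital_transfer) are placeholders rather than the actual SID variable names.

import pandas as pd

# Primary ICD-9 procedure codes for hip fracture repair listed above.
HIP_FRACTURE_REPAIR_CODES = {
    "78.55", "78.65", "79.05", "79.15", "79.25", "79.35",
    "79.45", "79.55", "79.65", "79.75", "79.85", "79.95",
}

def build_cohort(discharges: pd.DataFrame) -> pd.DataFrame:
    """Keep hospitalizations whose primary procedure is hip fracture repair
    and drop records involving an interhospital transfer. Column names here
    are placeholders, not the real SID variable names."""
    is_repair = discharges["primary_procedure"].isin(HIP_FRACTURE_REPAIR_CODES)
    not_transfer = ~discharges["interhospital_transfer"].astype(bool)
    return discharges.loc[is_repair & not_transfer].copy()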

 

 

Measurement and Outcomes

We summarized demographic data for the hospitalizations that met the inclusion criteria as well as the associated hospitals. The primary outcome was the percentage of patients undergoing TTE, stress test, and cardiac catheterization during a hospitalization with a primary procedure code of hip fracture repair. Random effects logistic regression models for each type of diagnostic test were developed to determine the factors that might impact test utilization. In addition to running each test as a separate model, we also performed an analysis in which the outcome was performance of any of these three cardiac tests. Random effects were used to account for clustering of testing within hospitals. Variables included time (3-month intervals), state, age (continuous variable), gender, length of stay, payer (Medicare/Medicaid/private insurance/self-pay/other), hospital teaching status (major teaching/minor teaching/nonteaching), hospital size according to number of beds (continuous variable), and mortality score. Major teaching hospitals are defined as members of the Council of Teaching Hospitals. Minor teaching hospitals are defined as (1) those with one or more postgraduate training programs recognized by the Accreditation Council for Graduate Medical Education, (2) those with a medical school affiliation reported to the American Medical Association, or (3) those with an internship or residency approved by the American Osteopathic Association.
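
To illustrate the modeling setup, the sketch below fits a logistic model for TTE use with cluster-robust standard errors by hospital. This is a simplified stand-in for the random-effects specification described here, not the authors' model: it uses synthetic data, omits several covariates from the full model, and all variable names are illustrative.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data for illustration only; the real analysis uses the
# SID/AHA covariates listed above (payer, teaching status, bed size, length
# of stay, quarter, and so on) in addition to those shown here.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(73, 10, n),
    "female": rng.integers(0, 2, n),
    "state": rng.choice(["MD", "NJ", "WA"], n),
    "mortality_score": rng.normal(0, 1, n),
    "hospital_id": rng.integers(0, 180, n),
})
df["tte"] = rng.binomial(1, 0.13, n)  # ~13% of hospitalizations include a TTE

# Cluster-robust variance by hospital as a simpler stand-in for the
# random-intercept logistic model described in the Methods.
model = smf.glm("tte ~ age + C(female) + C(state) + mortality_score",
                data=df, family=sm.families.Binomial())
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["hospital_id"]})
print(result.summary())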

The SID has a specific binary indicator variable for each of the three diagnostic tests we evaluated. Use of a diagnostic test is identified through both UB-92 revenue codes and ICD-9 procedure codes, with the presence of either setting the indicator to positive.10 Finally, as a sensitivity analysis, we used interrupted time series analysis to evaluate the significance of changes in utilization trends. A significance level of 0.05 was used. Analyses were done in Stata 15 (StataCorp, College Station, Texas).
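
The interrupted time series sensitivity analysis can be viewed as a segmented regression of the quarterly testing rate on a baseline trend, a level change, and a slope change after a chosen interruption point. The sketch below uses made-up quarterly rates and an arbitrary interruption quarter purely for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative quarterly TTE rates (fractions), Q1 2011 through Q3 2015;
# the real analysis uses rates computed from the SID data.
rates = [0.12, 0.12, 0.13, 0.13, 0.13, 0.14, 0.13, 0.14, 0.14, 0.14,
         0.15, 0.14, 0.15, 0.14, 0.14, 0.13, 0.13, 0.13, 0.12]
q = pd.DataFrame({"rate": rates})
q["quarter"] = np.arange(len(q))                                   # 0 = Q1 2011
interruption = 11                                                   # placeholder quarter
q["post"] = (q["quarter"] >= interruption).astype(int)              # level change
q["quarters_after"] = (q["quarter"] - interruption).clip(lower=0)   # slope change

its = smf.ols("rate ~ quarter + post + quarters_after", data=q).fit()
print(its.params)  # baseline trend, immediate level change, change in trend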

RESULTS

The dataset included 75,144 hospitalizations with a primary procedure code of hip fracture over the study period (Table). The number of hospitalizations per year was fairly consistent over the study period in each state, although there were fewer hospitalizations for 2015 as this included only January through September. The mean age was 72.8 years, and 67% were female. The primary payer was Medicare for 71.7% of hospitalizations. Hospitalizations occurred at 181 hospitals, the plurality of which (42.9%) were minor teaching hospitals. The proportions of hospitalizations that included a TTE, stress test, and cardiac catheterization were 12.6%, 1.1%, and 0.5%, respectively. Overall, 13.5% of patients underwent any cardiac testing.

[Table. Characteristics of the study hospitalizations and hospitals]

There was a statistically significantly lower rate of stress tests (odds ratio [OR], 0.32; 95% CI, 0.19-0.54) and cardiac catheterizations (OR, 0.46; 95% CI, 0.27-0.79) in Washington than in Maryland and New Jersey. Female gender was associated with significantly lower adjusted ORs for stress tests (OR, 0.74; 95% CI, 0.63-0.86) and cardiac catheterizations (OR, 0.73; 95% CI, 0.59-0.91), and increasing age was associated with higher adjusted ORs for each test (TTE: OR, 1.033; 95% CI, 1.031-1.035; stress tests: OR, 1.007; 95% CI, 1.001-1.013; cardiac catheterizations: OR, 1.011; 95% CI, 1.003-1.019). Private insurance was associated with a lower likelihood of stress tests (OR, 0.65; 95% CI, 0.50-0.85) and cardiac catheterizations (OR, 0.67; 95% CI, 0.46-0.98), and self-pay was associated with a lower likelihood of TTE (OR, 0.76; 95% CI, 0.61-0.95) and stress test (OR, 0.43; 95% CI, 0.21-0.90), all compared with Medicare.

Larger hospitals were associated with a greater likelihood of cardiac catheterizations (OR, 1.18; 95% CI, 1.03-1.36) and a lower likelihood of TTE (OR, 0.89; 95% CI, 0.82-0.96). An unweighted average of these tests between 2011 and September 2015 showed a modest increase in TTEs and a modest decrease in stress tests and cardiac catheterizations (Figure). A multivariable random effects regression for use of TTEs revealed a significantly increasing trend from 2011 to 2014 (OR, 1.04; P < .0001), but the decreasing trend for 2015 was not statistically significant when analyzed by quarter or by month (for which data from only New Jersey and Washington are available).

[Figure. Rates of TTE, stress testing, and cardiac catheterization over the study period]


In the combined model with any cardiac testing as the outcome, the likelihood of testing was lower in Washington (OR, 0.56; 95% CI, 0.31-0.995). Primary payer status of self-pay was associated with a lower likelihood of cardiac testing (OR, 0.73; 95% CI, 0.58-0.90). Female gender was associated with a lower likelihood of testing (OR, 0.93; 95% CI, 0.88-0.98), and high mortality score was associated with a higher likelihood of testing (OR, 1.030; 95% CI, 1.027-1.033). TTEs were the major driver of this model as these were the most heavily utilized test.

 

 

DISCUSSION

There has been limited research into how often preoperative cardiac testing occurs in the inpatient setting. Our aim was to study its prevalence prior to hip fracture surgery during a time period when multiple recommendations had been issued to limit its use. We found rates of ischemic testing (stress tests and cardiac catheterizations) to be appropriately, and perhaps surprisingly, low. Our results on ischemic testing rates are consistent with previous studies, which have focused on the outpatient setting where much of the preoperative workup for nonurgent surgeries occurs. The rate of TTEs was higher than in previous studies of the outpatient preoperative setting, although it is unclear what an optimal rate of TTEs is.

A recent study examining outpatient preoperative stress tests within the 30 days before cataract surgeries, knee arthroscopies, or shoulder arthroscopies found a rate of 2.1% for Medicare fee-for-service patients in 2009, with little regional variation.11 Another evaluation using 2009 Medicare claims data found rates of preoperative TTEs and stress tests to be 0.8% and 0.7%, respectively.12 That analysis included TTEs and stress tests performed within 30 days of a low- or intermediate-risk surgery. A study analyzing the rate of preoperative TTEs between 2009 and 2014 found rates in 2009 of 2.0% and 3.4% for commercially insured patients aged 50-64 years and Medicare Advantage patients, respectively.13 These rates decreased by 7.0% and 12.6% from 2009 to 2014. These studies, like ours, suggest that preoperative cardiac testing has not been a major source of wasteful spending. One explanation for the higher rate of TTEs we observed in the inpatient setting might be that primary care physicians in the outpatient setting are more likely to have access to prior cardiac testing results than are physicians in the hospital.

We found that the rate of stress testing and cardiac catheterization in Washington was significantly lower than that in Maryland and New Jersey. This is consistent with a number of measures of healthcare utilization (total Medicare reimbursement in the last six months of life, mean number of hospital days in the last six months of life, and healthcare intensity index), for all of which Washington was below the national mean and Maryland and New Jersey were above it.14

Finally, we found evidence of a lower rate of preoperative stress tests and cardiac catheterizations for women despite controlling for age and mortality score. Of course, we did not control directly for cardiovascular comorbidities; as a result, there could be residual confounding. However, these results are consistent with previous findings of gender bias in both pharmacologic management of coronary artery disease (CAD)15 and diagnostic testing for suspected CAD.16

We focused on hospitalizations with a primary procedure code to surgically treat hip fracture. We are unable to tell if the cardiac testing of these patients had occurred before or after the procedure. However, we suspect that the vast majority were completed for preoperative evaluation. It is likely that a small subset were done to diagnose and manage cardiac complications that either accompanied the hip fracture or occurred postoperatively. Another limitation is that we cannot determine if a patient had one of these tests recently in the emergency department or as an outpatient.

We also chose to include only patients who actually had hip fracture surgery. It is possible that the testing rate is higher for all patients admitted for hip fracture and that some of these patients did not have surgery because of abnormal cardiac testing. However, we suspect that this is a very small fraction given the high degree of morbidity and mortality associated with untreated hip fracture.

 

 

CONCLUSION

We found a low rate of preoperative cardiac testing in patients hospitalized for hip fracture surgery, both before and after the issuance of recommendations intended to curb its use. Although it is reassuring that the volume of this low-value testing was lower than we expected, the finding underscores the importance of directing utilization improvement efforts toward low-value tests and procedures that are more heavily used; further curbing tests that are already used infrequently will have only a modest impact on overall healthcare expenditure. Professional organizations should therefore ensure that their recommendations target true areas of inappropriate utilization, where improvements can meaningfully affect healthcare spending. Further research should aim to quantify unwarranted cardiac testing for less urgent inpatient surgeries, as the urgency of hip fracture repair may be driving the relatively low utilization of inpatient cardiac testing observed here.

Disclosures

The authors have nothing to disclose.

Funding

This project was supported by the Johns Hopkins Hospitalist Scholars Fund and the Johns Hopkins School of Medicine Biostatistics, Epidemiology and Data Management (BEAD) Core.

 

References

1. Brauer CA, Coca-Perraillon M, Cutler DM, Rosen A. Incidence and mortality of hip fractures in the United States. JAMA. 2009;302(14):1573-1579.
2. ACS TQIP - Best Practices in the Management of Orthopaedic Trauma. https://www.facs.org/~/media/files/quality programs/trauma/tqip/tqip bpgs in the management of orthopaedic traumafinal.ashx. Published 2015. Accessed July 13, 2018.
3. Shiga T, Wajima Z, Ohe Y. Is operative delay associated with increased mortality of hip fracture patients? Systematic review, meta-analysis, and meta-regression. Can J Anesth. 2008;55(3):146-154.
4. Pincus D, Ravi B, Wasserstein D, et al. Association between wait time and 30-day mortality in adults undergoing hip fracture surgery. JAMA. 2017;318(20):1994.
5. Clair CM, Shah M, Diver EJ, et al. Adherence to evidence-based guidelines for preoperative testing in women undergoing gynecologic surgery. Obstet Gynecol. 2010;116(3):694-700.
6. Chen CL, Lin GA, Bardach NS, et al. Preoperative medical testing in Medicare patients undergoing cataract surgery. N Engl J Med. 2015;372(16):1530-1538.
7. Benarroch-Gampel J, Sheffield KM, Duncan CB, et al. Preoperative laboratory testing in patients undergoing elective, low-risk ambulatory surgery. Ann Surg. 2012;256(3):518-528.
8. Choosing Wisely - An Initiative of the ABIM Foundation. http://www.choosingwisely.org/clinician-lists. Accessed July 16, 2018.
9. Fleisher LA, Fleischmann KE, Auerbach AD, et al. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery. J Am Coll Cardiol. 2014;64(22):e278-e333.
10. HCUP Methods Series - Development of Utilization Flags for Use with UB-92 Administrative Data; Report #2006-04. https://www.hcup-us.ahrq.gov/reports/methods/2006_4.pdf.
11. Kerr EA, Chen J, Sussman JB, Klamerus ML, Nallamothu BK. Stress testing before low-risk surgery - so many recommendations, so little overuse. JAMA Intern Med. 2015;175(4):645-647.
12. Schwartz AL, Landon BE, Elshaug AG, Chernew ME, McWilliams JM. Measuring low-value care in Medicare. JAMA Intern Med. 2014;174(7):1067-1076.
13. Carter EA, Morin PE, Lind KD. Costs and trends in utilization of low-value services among older adults with commercial insurance or Medicare Advantage. Med Care. 2017;55(11):931-939.
14. The Dartmouth Atlas of Health Care. http://www.dartmouthatlas.org. Accessed December 7, 2017.
15. Williams D, Bennett K, Feely J. Evidence for an age and gender bias in the secondary prevention of ischaemic heart disease in primary care. Br J Clin Pharmacol. 2003;55(6):604-608.
16. Chang AM, Mumma B, Sease KL, Robey JL, Shofer FS, Hollander JE. Gender bias in cardiovascular testing persists after adjustment for presenting characteristics and cardiac risk. Acad Emerg Med. 2007;14(7):599-605.

Issue
Journal of Hospital Medicine 14(4)
Page Number
224-228. Published online first February 20, 2019.
Article Source
© 2019 Society of Hospital Medicine
Correspondence Location
Michael I Ellenbogen, MD; E-mail: mellenb6@jhmi.edu; Telephone: 443-287-4362

Inpatient Mobility Technicians: One Step Forward?

Article Type
Changed
Sun, 05/26/2019 - 00:23

Prolonged bedrest with minimal mobility is associated with worse outcomes for hospitalized patients, particularly older adults.1,2 Immobility accelerates loss of independent function and leads to complications such as deep vein thrombosis, pressure ulcers, and even death.3,4 Increasing activity and mobility early in hospitalization has proven safe, even among critically ill patients.5 Patients with intravascular devices or urinary catheters, and even those requiring mechanical ventilation or extracorporeal membrane oxygenation, can safely perform exercise and out-of-bed activities.5

Although the remedy for immobility and bedrest seems obvious, implementing workflows and strategies to increase inpatient mobility has proven challenging. Physical therapists—often the first solution considered for mobilizing patients—are a limited resource, and their time is frequently devoted to care-planning activities such as facilitating discharge, arranging equipment, and educating patients and families, rather than assisting with routine mobility needs.6 Nurses share responsibility for patient activity, but they also have broad patient-care responsibilities competing for their time.7 Additionally, some nurses may feel they lack the training needed to mobilize patients safely.8,9

In this context, the work by Rothberg et al. is a welcome addition to the literature. In this single-blind randomized pilot trial, 102 inpatients aged 60 years and older were randomly assigned to one of two groups: intervention (an ambulation protocol) or usual care. In the intervention arm, dedicated mobility technicians—patient-care nursing assistants redeployed and trained in safe patient-handling practices—were tasked with helping patients walk three times daily. Patients in the intervention group took significantly more steps on average than those receiving usual care (994 versus 668). Additionally, patients with greater exposure to the mobility technicians (>2 days) had significantly higher step counts and were more likely to achieve >900 steps per day, the threshold below which patients are likely to experience functional decline.10 This study highlights the feasibility of using trained mobility technicians rather than more expensive providers (eg, physical therapists, occupational therapists, or nurses) to enhance inpatient ambulation.

The authors confirmed previous findings that inpatient mobility, assessed in this study with accelerometers, predicts post-hospital disposition. Although consumer-grade accelerometers (eg, Fitbit) have limitations and may not count steps accurately for hospitalized patients who walk slowly or have gait abnormalities,11 Rothberg et al. still found that a higher step count was associated with discharge home rather than to a facility. Discharge planning is often delayed because clinicians fail to recognize impaired mobility until after acute medical or surgical issues have resolved.12 Using routinely collected mobility measurements, such as step count, to inform care coordination and discharge planning may ultimately prove helpful for hospital throughput.
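To make the idea concrete, the following is a minimal sketch—assuming hypothetical patients, step counts, and function names rather than data from the trial—of how routinely collected step counts might be screened to flag patients whose mobility warrants earlier attention in discharge planning. The 900-steps-per-day threshold comes from the functional-decline literature cited above.

```python
from statistics import mean

# Hypothetical daily step counts captured by inpatient accelerometers.
daily_steps = {
    "patient_A": [1250, 1410, 980],   # consistently active
    "patient_B": [420, 510, 600],     # low mobility
    "patient_C": [880, 930, 1020],
}

# Steps/day below which functional decline becomes likely (Agmon et al.).
FUNCTIONAL_DECLINE_THRESHOLD = 900

def flag_for_mobility_review(steps_by_patient, threshold=FUNCTIONAL_DECLINE_THRESHOLD):
    """Return patients whose average daily step count falls below the threshold,
    as candidates for earlier mobility intervention and discharge-planning review."""
    return [
        patient
        for patient, steps in steps_by_patient.items()
        if mean(steps) < threshold
    ]

print(flag_for_mobility_review(daily_steps))  # ['patient_B']
```

In practice, any such screen would need to account for the device-accuracy caveats noted above before informing disposition decisions.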

Despite the increased mobility observed in the intervention group, discharge disposition and hospital length of stay (LOS) did not differ between groups, whether analyzed per protocol or by intention to treat. Although LOS and discharge disposition are associated with patient functional status, they are also influenced by other factors, such as social support, health insurance, medical status, and patient or family preferences.13-16 Furthermore, illness severity may confound the association between step count and outcomes: sicker patients walk less, stay longer, and are more likely to need postacute rehabilitation. Thus, the effect size of a mobility intervention may be smaller than expected based on observational data, leaving the trial underpowered. Another possibility is that the intervention did not affect these clinical outcomes because patients in the intervention group received the intervention for only about one-third of their hospitalization, on average, and the goal of three walks per day was not consistently achieved. Mobility technician involvement was often delayed because the study required a physical therapy evaluation to determine patient appropriateness before the mobility intervention was initiated; this design choice reflects a commonplace cultural practice of deferring inpatient mobilization until a physical therapist has evaluated the patient. Moreover, limiting mobility interventions to a single provider, such as a mobility technician, means that patients are less likely to be mobilized when that resource is unavailable. Establishing an interdisciplinary culture of mobility is more likely to be successful.17 One possible strategy is to start with nurse-performed systematic assessments of functional ability to set daily mobility goals that any appropriate provider, including a mobility technician, could help implement.18,19
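The underpowering concern can be illustrated with a rough two-sample power calculation. The sketch below uses a normal approximation and entirely hypothetical length-of-stay effect sizes and standard deviation—chosen only to show how quickly power erodes when the true effect is smaller than anticipated—with roughly 50 patients per arm, as in a trial of about 100 participants.

```python
from math import sqrt
from statistics import NormalDist

def approx_power_two_sample(delta, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sided comparison of two means, assuming equal
    group sizes, a common standard deviation, and a normal (not t) approximation;
    the negligible contribution of the opposite tail is ignored."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = delta / (sd * sqrt(2 / n_per_group))
    return 1 - NormalDist().cdf(z_crit - noncentrality)

# Hypothetical LOS example with ~50 patients per arm:
# a hoped-for 2.0-day reduction (SD 5 days) yields only ~52% power ...
print(round(approx_power_two_sample(delta=2.0, sd=5.0, n_per_group=50), 2))
# ... and a more modest 0.5-day true effect drops power to ~7%.
print(round(approx_power_two_sample(delta=0.5, sd=5.0, n_per_group=50), 2))
```

Under these assumptions, even the optimistic effect gives only coin-flip power, consistent with the concern that observational data can overstate what a mobility intervention will achieve in a trial of this size.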

Although studies designed to increase hospital mobility have yielded mixed results,20 and larger high-quality clinical trials are needed to demonstrate clear and consistent benefits on patient-centered and operational outcomes, we applaud research and quality improvement efforts (including the current study) that promote inpatient mobility through strategies and measurements that do not require intensive physical therapist involvement. Mobility technicians may represent one step forward in enhancing a culture of mobility.

Disclosures

The authors certify that no party having a direct interest in the results of the research supporting this article has or will confer a benefit on us or on any organization with which we are associated.


References

1. Brown CJ, Redden DT, Flood KL, Allman RM. The underrecognized epidemic of low mobility during hospitalization of older adults. J Am Geriatr Soc. 2009;57(9):1660-1665. doi:10.1111/j.1532-5415.2009.02393.x PubMed
2. Greysen SR. Activating hospitalized older patients to confront the epidemic of low mobility. JAMA Intern Med. 2016;176(7):928. doi:10.1001/jamainternmed.2016.1874 PubMed
3. Covinsky KE, Pierluissi E, Johnston CB. Hospitalization-associated disability: “she was probably able to ambulate, but I’m not sure”. JAMA. 2011;306(16):1782-1793. doi:10.1001/jama.2011.1556 PubMed
4. Wu X, Li Z, Cao J, et al. The association between major complications of immobility during hospitalization and quality of life among bedridden patients: a 3 month prospective multi-center study. PLOS ONE. 2018;13(10):e0205729. doi:10.1371/journal.pone.0205729 PubMed
5. Nydahl P, Sricharoenchai T, Chandra S, et al. Safety of patient mobilization and rehabilitation in the intensive care unit: systematic review with meta-analysis. Ann Am Thorac Soc. 2017;14(5):766-777. doi:10.1513/AnnalsATS.201611-843SR PubMed
6. Masley PM, Havrilko C-L, Mahnensmith MR, Aubert M, Jette DU, Coffin-Zadai C. Physical therapist practice in the acute care setting: a qualitative study. Phys Ther. 2011;91(6):906-922. doi:10.2522/ptj.20100296 PubMed
7. Young DL, Seltzer J, Glover M, et al. Identifying barriers to nurse-facilitated patient mobility in the intensive care unit. Am J Crit Care. 2018;27(3):186-193. doi:10.4037/ajcc2018368 PubMed
8. Brown CJ, Williams BR, Woodby LL, Davis LL, Allman RM. Barriers to mobility during hospitalization from the perspectives of older patients and their nurses and physicians. J Hosp Med. 2007;2(5):305-313. doi:10.1002/jhm.209 PubMed
9. Hoyer EH, Brotman DJ, Chan KS, Needham DM. Barriers to early mobility of hospitalized general medicine patients: survey development and results. Am J Phys Med Rehabil. 2015;94(4):304-312. doi:10.1097/PHM.0000000000000185 PubMed
10. Agmon M, Zisberg A, Gil E, Rand D, Gur-Yaish N, Azriel M. Association between 900 steps a day and functional decline in older hospitalized patients. JAMA Intern Med. 2017;177(2):272. doi:10.1001/jamainternmed.2016.7266 PubMed
11. Anderson JL, Green AJ, Yoward LS, Hall HK. Validity and reliability of accelerometry in identification of lying, sitting, standing or purposeful activity in adult hospital inpatients recovering from acute or critical illness: a systematic review. Clin Rehabil. 2018;32(2):233-242. doi:10.1177/0269215517724850 PubMed
12. Roberts DE, Holloway RG, George BP. Post-acute care discharge delays for neurology inpatients: opportunity to improve patient flow. Neurol Clin Pract. 2018;8(4):302-310. doi:10.1212/CPJ.0000000000000492 PubMed
13. Hoyer EH, Friedman M, Lavezza A, et al. Promoting mobility and reducing length of stay in hospitalized general medicine patients: a quality-improvement project. J Hosp Med. 2016;11(5):341-347. doi:10.1002/jhm.2546 PubMed
14. Surkan MJ, Gibson W. Interventions to mobilize elderly patients and reduce length of hospital stay. Can J Cardiol. 2018;34(7):881-888. doi:10.1016/j.cjca.2018.04.033 PubMed
15. Ota H, Kawai H, Sato M, Ito K, Fujishima S, Suzuki H. Effect of early mobilization on discharge disposition of mechanically ventilated patients. J Phys Ther Sci. 2015;27(3):859-864. doi:10.1589/jpts.27.859 PubMed
16. Hoyer EH, Young DL, Friedman LA, et al. Routine inpatient mobility assessment and hospital discharge planning. JAMA Intern Med. 2018. doi:10.1001/jamainternmed.2018.5145 PubMed
17. Czaplijski T, Marshburn D, Hobbs T, Bankard S, Bennett W. Creating a culture of mobility: an interdisciplinary approach for hospitalized patients. Hosp Top. 2014;92(3):74-79. doi:10.1080/00185868.2014.937971 PubMed
18. Hoyer EH, Young DL, Klein LM, et al. Toward a common language for measuring patient mobility in the hospital: reliability and construct validity of interprofessional mobility measures. Phys Ther. 2018;98(2):133-142. doi:10.1093/ptj/pzx110 PubMed
19. Klein LM, Young D, Feng D, et al. Increasing patient mobility through an individualized goal-centered hospital mobility program: a quasi-experimental quality improvement project. Nurs Outlook. 2018;66(3):254-262. doi:10.1016/j.outlook.2018.02.006 PubMed
20. Kanach FA, Pastva AM, Hall KS, Pavon JM, Morey MC. Effects of structured exercise interventions for older adults hospitalized with acute medical illness: a systematic review. J Aging Phys Act. 2018;26(2):284-303. doi:10.1123/japa.2016-0372 PubMed

Issue
Journal of Hospital Medicine 14(5)
Page Number
321-322. Published online first February 20, 2019.
Article Source
© 2019 Society of Hospital Medicine
Correspondence Location
Erik H Hoyer, MD; E-mail: ehoyer1@jhmi.edu; Telephone: 410-502-2441; Twitter: @HopkinsAMP