CV disease and mortality risk higher with younger age of type 2 diabetes diagnosis


Individuals who are younger when diagnosed with type 2 diabetes are at greater risk of cardiovascular disease and death, compared with those diagnosed at an older age, according to a retrospective study involving almost 2 million people.


People diagnosed with type 2 diabetes at age 40 or younger were at greatest risk of most outcomes, reported lead author Naveed Sattar, MD, PhD, professor of metabolic medicine at the University of Glasgow, Scotland, and his colleagues. “Treatment target recommendations in regards to the risk factor control may need to be more aggressive in people developing diabetes at younger ages,” they wrote in Circulation.

In contrast, developing type 2 diabetes over the age of 80 years had little impact on risks.

“[R]eassessment of treatment goals in elderly might be useful,” the investigators wrote. “Diabetes screening needs for the elderly (above 80) should also be reevaluated.”

The study involved 318,083 patients with type 2 diabetes registered in the Swedish National Diabetes Registry between 1998 and 2012. Each patient was matched with 5 individuals from the general population based on sex, age, and county of residence, providing a control population of 1,575,108. Outcomes assessed included non-cardiovascular mortality, cardiovascular mortality, all-cause mortality, hospitalization for heart failure, coronary heart disease, stroke, atrial fibrillation, and acute myocardial infarction. Patients were followed for cardiovascular outcomes from 1998 to December 2013, while mortality surveillance continued through 2014.
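
The matching step is the core of this design. As a rough, hypothetical sketch (invented column names, not the investigators’ actual procedure), 1:5 exact matching on sex, birth year, and county could be implemented as:

```python
import pandas as pd

def match_controls(cases: pd.DataFrame, population: pd.DataFrame, k: int = 5) -> pd.DataFrame:
    """Draw up to k general-population controls per case, matched exactly
    on sex, birth year, and county (all column names are illustrative)."""
    matched = []
    for _, case in cases.iterrows():
        pool = population[
            (population["sex"] == case["sex"])
            & (population["birth_year"] == case["birth_year"])
            & (population["county"] == case["county"])
        ]
        matched.append(pool.sample(n=min(k, len(pool)), random_state=0))
    return pd.concat(matched)
```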

In comparison with controls, patients diagnosed at age 40 years or younger had the highest excess risk for most outcomes. Excess risk of heart failure was elevated almost 5-fold (hazard ratio [HR], 4.77), and risk of coronary heart disease wasn’t far behind (HR, 4.33). Risks of acute MI (HR, 3.41), stroke (HR, 3.58), and atrial fibrillation (HR, 1.95) were also elevated. Cardiovascular-related mortality was increased almost 3-fold (HR, 2.72), while total mortality (HR, 2.05) and non-cardiovascular mortality (HR, 1.95) were raised to a lesser degree.

“Thereafter, incremental risks generally declined with each higher decade age at diagnosis” of type 2 diabetes, the investigators wrote.

After 80 years of age, hazard ratios for all mortality outcomes dropped below 1, indicating lower risk than in controls. Although hazard ratios for nonfatal outcomes remained greater than 1 in this age group, these risks were “substantially attenuated compared with relative incremental risks in those diagnosed with T2DM at younger ages,” the investigators wrote.

The study was funded by the Swedish Association of Local Authorities and Regions, the Swedish Heart and Lung Foundation, and the Swedish Research Council.

The investigators disclosed financial relationships with Amgen, AstraZeneca, Eli Lilly, and other pharmaceutical companies.

SOURCE: Sattar et al. Circulation. 2019 Apr 8. doi:10.1161/CIRCULATIONAHA.118.037885.

Vitals

Key clinical point: Patients who are younger when diagnosed with type 2 diabetes mellitus (T2DM) are at greater risk of cardiovascular disease and death than patients diagnosed at an older age.

Major finding: Patients diagnosed with T2DM at age 40 or younger had twice the risk of death from any cause, compared with age-matched controls (hazard ratio, 2.05).

Study details: A retrospective analysis of type 2 diabetes and associations with cardiovascular and mortality risks, using data from 318,083 patients in the Swedish National Diabetes Registry.

Disclosures: The study was funded by the Swedish Association of Local Authorities and Regions, the Swedish Heart and Lung Foundation, and the Swedish Research Council. The investigators disclosed financial relationships with Amgen, AstraZeneca, Eli Lilly, and others.

Source: Sattar et al. Circulation. 2019 Apr 8. doi:10.1161/CIRCULATIONAHA.118.037885. 


Managing Eating Disorders on a General Pediatrics Unit: A Centralized Video Monitoring Pilot


Hospitalizations for nutritional rehabilitation of patients with restrictive eating disorders are increasing.1 Among primary mental health admissions at free-standing children’s hospitals, eating disorders represent 5.5% of hospitalizations and are associated with the longest length of stay (LOS; mean 14.3 days) and costliest care (mean $46,130).2 Admission is necessary to ensure initial weight restoration and monitoring for symptoms of refeeding syndrome, including electrolyte shifts and vital sign abnormalities.3-5

Supervision is generally considered an essential element of caring for hospitalized patients with eating disorders, who may experience difficulty adhering to nutritional treatment, perform excessive movement or exercise, or demonstrate purging or self-harming behaviors. Supervision is presumed to prevent counterproductive behaviors, facilitating weight gain and earlier discharge to psychiatric treatment. Best practices for patient supervision to address these challenges have not been established but often include mealtime or continuous one-to-one supervision by nursing assistants (NAs) or other staff.6,7 While meal supervision has been shown to decrease medical LOS, it is costly, reduces staff availability for other patient care, and can be a barrier to caring for patients with eating disorders in many institutions.8

Although not previously used in patients with eating disorders, centralized video monitoring (CVM) may provide an additional mode of supervision. CVM is an emerging technology consisting of real-time video streaming, without video recording, enabling tracking of patient movement, redirection of behaviors, and communication with unit nurses when necessary. CVM has been used in multiple patient safety initiatives to reduce falls, address staffing shortages, reduce costs,9,10 supervise patients at risk for self-harm or elopement, and prevent controlled medication diversion.10,11

We sought to pilot a novel use of CVM to replace our institution’s standard practice of continuous one-to-one NA supervision of patients admitted for medical stabilization of an eating disorder. Our objective was to evaluate the supervision cost and feasibility of CVM, using LOS and days to weight gain as balancing measures.

METHODS

Setting and Participants

This retrospective cohort study included patients 12-18 years old admitted to the pediatric hospital medicine service on a general unit of an academic quaternary care children’s hospital for medical stabilization of an eating disorder between September 2013 and March 2017. Patients were identified using administrative data based on a primary or secondary diagnosis of anorexia nervosa, eating disorder not otherwise specified, or another specified eating disorder (ICD-9 307.1, 307.59; ICD-10 F50.00, F50.01, F50.89, F50.9).12,13 This research study was considered exempt by the University of Wisconsin School of Medicine and Public Health’s Institutional Review Board.
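
For illustration, cohort identification from administrative data might look like the sketch below; the column names and data layout are assumptions, not the study’s actual query:

```python
import pandas as pd

# Qualifying diagnosis codes from the inclusion criteria above.
ELIGIBLE_CODES = {"307.1", "307.59",                      # ICD-9
                  "F50.00", "F50.01", "F50.89", "F50.9"}  # ICD-10

def eligible_admissions(admissions: pd.DataFrame) -> pd.DataFrame:
    """Keep admissions of 12- to 18-year-olds carrying a qualifying primary
    or secondary eating disorder diagnosis (hypothetical column names)."""
    has_dx = (admissions["primary_dx"].isin(ELIGIBLE_CODES)
              | admissions["secondary_dx"].isin(ELIGIBLE_CODES))
    in_age_range = admissions["age_years"].between(12, 18)
    return admissions[has_dx & in_age_range]
```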

Supervision Interventions

A standard medical stabilization protocol was used for patients admitted with an eating disorder throughout the study period (Appendix). All patients received continuous one-to-one NA supervision until they reached the target calorie intake and demonstrated the ability to follow the nutritional meal protocol. Beginning July 2015, patients received continuous CVM supervision unless they expressed suicidal ideation (SI), which triggered one-to-one NA supervision until they no longer endorsed suicidality.


Centralized Video Monitoring Implementation

Institutional CVM technology was AvaSys TeleSitter Solution (AvaSure, Inc). Our institution purchased CVM devices for use in adult settings, and one was assigned for pediatric CVM. Mobile CVM video carts were deployed to patient rooms and generated live video streams, without recorded capture, which were supervised by CVM technicians. These technicians were NAs hired and trained specifically for this role; worked four-, eight-, and 12-hour shifts; and observed up to eight camera feeds on a single monitor in a centralized room. Patients and family members could refuse CVM, which would trigger one-to-one NA supervision. Patients were not observed by CVM while in the restroom; staff were notified by either the patient or technician, and one-to-one supervision was provided. CVM had two-way audio communication, which allowed technicians to redirect patients verbally. Technicians could contact nursing staff directly by phone when additional intervention was needed.

Supervision Costs

NA supervision costs were estimated at $19/hour, based upon institutional human resources average NA salaries at that time. No additional mealtime supervision was included, as in-person supervision was already occurring.

CVM supervision costs were defined as the sum of the device cost, CVM technician costs, and two hours of one-to-one NA mealtime supervision per day. The CVM device cost was estimated at $2.10/hour, assuming a 10-year machine life expectancy (single-unit cost $82,893 in 2015; 3,944 hours of use in fiscal year 2018). CVM technician costs were $19/hour, based upon institutional human resources average CVM technician salaries at that time. Because technicians monitored an average of six patients simultaneously during this study, one-sixth of a CVM technician’s salary (ie, $3.17/hour) was used for each hour of CVM monitoring. Patients with mixed (NA and CVM) supervision were analyzed with those having CVM supervision. These patients’ costs were the sum of their NA supervision costs plus their CVM supervision costs.
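
The hourly figures above are easy to verify and combine. The following sketch (not the authors’ accounting code) reproduces the stated rates and applies them to a hypothetical admission:

```python
# Cost assumptions, all taken from the text above.
NA_RATE = 19.00                      # $/hour, one-to-one NA supervision
DEVICE_RATE = 82_893 / (10 * 3_944)  # ~= $2.10/hour: unit cost over a 10-year life at FY2018 utilization
TECH_RATE = 19.00 / 6                # ~= $3.17/hour: one technician watching ~6 patients

def na_supervision_cost(hours: float) -> float:
    return NA_RATE * hours

def cvm_supervision_cost(hours: float, days: float) -> float:
    # Device plus pro-rated technician time, plus 2 hours/day of NA mealtime supervision.
    return (DEVICE_RATE + TECH_RATE) * hours + 2 * days * NA_RATE

# Hypothetical example: round-the-clock supervision for a 10-day admission.
print(f"NA:  ${na_supervision_cost(240):,.0f}")       # ~= $4,560
print(f"CVM: ${cvm_supervision_cost(240, 10):,.0f}")  # ~= $1,644
```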

Data Collection

Descriptive variables including age, gender, race/ethnicity, insurance, and LOS were collected from administrative data. The duration and type of supervision for all patients were collected from daily staffing logs. The eating disorder protocol standardized the process of obtaining daily weights (Appendix). Days to weight gain following admission were defined as the total number of days from admission to the first day of weight gain that was followed by another day of weight gain or weight maintenance. CVM acceptability and feasibility were assessed by family refusal of CVM, conversion from CVM to NA supervision, technological failure, complaints, and unplanned discontinuation, all of which were prospectively documented by the unit nurse manager.
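
The days-to-weight-gain definition is precise enough to express directly in code; here is a minimal sketch, assuming a list of daily weights indexed from admission (day 0):

```python
from typing import Optional, Sequence

def days_to_weight_gain(weights: Sequence[float]) -> Optional[int]:
    """Return the first day with a weight gain that is followed by another
    day of weight gain or weight maintenance, per the definition above."""
    for day in range(1, len(weights) - 1):
        gained_today = weights[day] > weights[day - 1]
        held_or_gained_next = weights[day + 1] >= weights[day]
        if gained_today and held_or_gained_next:
            return day
    return None  # criterion never met during the admission

# Example: an initial dip, then a sustained gain starting on hospital day 2.
assert days_to_weight_gain([50.0, 49.8, 50.1, 50.1, 50.4]) == 2
```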

Data Analysis

Patient and hospitalization characteristics were summarized. A sample size of at least 14 in each group was estimated as necessary to detect a 50% reduction in supervision cost between the groups using alpha = 0.05, a power of 80%, a mean cost of $4,400 in the NA group, and a standard deviation of $1,600. Wilcoxon rank-sum tests were used to assess differences in median supervision cost between NA and CVM use. Differences in mean LOS and days to weight gain between NA and CVM use were assessed with t-tests because these data were normally distributed.
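
The reported comparisons map onto standard statistical routines; a sketch with simulated stand-in data follows (the study itself analyzed real patient data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
na_costs = rng.normal(4_400, 1_600, size=23)  # simulated per-admission costs, NA group
cvm_costs = rng.normal(1_200, 600, size=14)   # simulated per-admission costs, CVM group

# Wilcoxon rank-sum (Mann-Whitney U) test for the skewed cost outcome.
_, p_cost = stats.mannwhitneyu(na_costs, cvm_costs, alternative="two-sided")

# Two-sample t-test for approximately normal outcomes such as LOS.
na_los = rng.normal(11.7, 4.0, size=23)
cvm_los = rng.normal(9.8, 4.0, size=14)
_, p_los = stats.ttest_ind(na_los, cvm_los)

print(f"cost P = {p_cost:.3g}, LOS P = {p_los:.3g}")
```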


RESULTS

Patient Characteristics and Supervision Costs

The study included 37 consecutive admissions (NA = 23 and CVM = 14) among 35 unique patients. Patients were female, primarily non-Hispanic White, and privately insured (Table 1). Median supervision cost was significantly higher with NA supervision, at $4,104/admission, than with CVM, at $1,166/admission (P < .001, Table 2).

Balancing Measures, Acceptability, and Feasibility

Mean LOS was 11.7 days for NA and 9.8 days for CVM (P = .27; Table 2). The mean number of days to weight gain was 3.1 and 3.6 days, respectively (P = .28). No patients converted from CVM to NA supervision. One patient with SI converted to CVM after SI resolved and two patients required ongoing NA supervision due to continued SI. There were no reported refusals, technology failures, or unplanned discontinuations of CVM. One patient/family reported excessive CVM redirection of behavior.

DISCUSSION

This is the first description of CVM use in adolescent patients or patients with eating disorders. Our results suggest that CVM appears feasible and less costly in this population than one-to-one NA supervision, without statistically significant differences in LOS or time to weight gain. Patients who received CVM plus any NA supervision (beyond mealtime supervision alone) were analyzed in the CVM group; therefore, this study may underestimate the cost savings from CVM supervision. This innovative use of CVM may represent an opportunity for hospitals to repurpose monitoring technology for more efficient supervision of patients with eating disorders.

This pediatric pilot study adds to the growing body of literature in adult patients suggesting CVM supervision may be a feasible inpatient cost-reduction strategy.9,10 One single-center study demonstrated that the use of CVM with adult inpatients led to fewer unsafe behaviors, eg, patient removal of intravenous catheters and oxygen therapy. Personnel savings exceeded the original investment cost of the monitor within one fiscal quarter.9 Results of another study suggest that CVM use with hospitalized adults who required supervision to prevent falls was associated with improved patient and family satisfaction.14 In the absence of a gold standard for supervision of patients hospitalized with eating disorders, CVM technology is a tool that may balance cost, care quality, and patient experience. Given the upfront investment in CVM units, this technology may be most appropriate for institutions already using CVM for other inpatient indications.

Although our institutional cost of CVM use was similar to that reported by other institutions,11,15 the single-center design of this pilot study limits the generalizability of our findings. Unadjusted results of this observational study may be confounded by indication bias. As this was a pilot study, it was powered to detect a clinically significant difference in cost between NA and CVM supervision. While statistically significant differences were not seen in LOS or weight gain, this pilot study was not powered to detect potential differences or to adjust for all potential confounders (eg, other mental health conditions or comorbidities, eating disorder type, previous hospitalizations). Future studies should include these considerations in estimating sample sizes. The ability to conduct a robust cost-effectiveness analysis was also limited by cost data availability and reliance on staffing assumptions to calculate supervision costs. However, these findings will be important for valid effect size estimates for future interventional studies that rigorously evaluate CVM effectiveness and safety. Patients and families were not formally surveyed about their experiences with CVM, and the patient and family experience is another important outcome to consider in future studies.


CONCLUSION

The results of this pilot study suggest that supervision costs for patients admitted for medical stabilization of eating disorders were statistically significantly lower with CVM when compared with one-to-one NA supervision, without a change in hospitalization LOS or time to weight gain. These findings are particularly important as hospitals seek opportunities to reduce costs while providing safe and effective care. Future efforts should focus on evaluating clinical outcomes and patient experiences with this technology and strategies to maximize efficiency to offset the initial device cost.

Disclosures

The authors have no financial relationships relevant to this article to disclose. The authors have no conflicts of interest relevant to this article to disclose.

References

1. Zhao Y, Encinosa W. An update on hospitalizations for eating disorders, 1999 to 2009: statistical brief #120. In: Healthcare Cost and Utilization Project (HCUP) Statistical Briefs. Rockville, MD: Agency for Healthcare Research and Quality (US); 2006.
2. Bardach NS, Coker TR, Zima BT, et al. Common and costly hospitalizations for pediatric mental health disorders. Pediatrics. 2014;133(4):602-609. doi: 10.1542/peds.2013-3165.
3. Society for Adolescent Health and Medicine, Golden NH, et al. Position Paper of the Society for Adolescent Health and Medicine: medical management of restrictive eating disorders in adolescents and young adults. J Adolesc Health. 2015;56(1):121-125. doi: 10.1016/j.jadohealth.2014.10.259.
4. Katzman DK. Medical complications in adolescents with anorexia nervosa: a review of the literature. Int J Eat Disord. 2005;37(S1):S52-S59; discussion S87-S89. doi: 10.1002/eat.20118.
5. Strandjord SE, Sieke EH, Richmond M, Khadilkar A, Rome ES. Medical stabilization of adolescents with nutritional insufficiency: a clinical care path. Eat Weight Disord. 2016;21(3):403-410. doi: 10.1007/s40519-015-0245-5.
6. Kells M, Davidson K, Hitchko L, O’Neil K, Schubert-Bob P, McCabe M. Examining supervised meals in patients with restrictive eating disorders. Appl Nurs Res. 2013;26(2):76-79. doi: 10.1016/j.apnr.2012.06.003.
7. Leclerc A, Turrini T, Sherwood K, Katzman DK. Evaluation of a nutrition rehabilitation protocol in hospitalized adolescents with restrictive eating disorders. J Adolesc Health. 2013;53(5):585-589. doi: 10.1016/j.jadohealth.2013.06.001.
8. Kells M, Schubert-Bob P, Nagle K, et al. Meal supervision during medical hospitalization for eating disorders. Clin Nurs Res. 2017;26(4):525-537. doi: 10.1177/1054773816637598.
9. Jeffers S, Searcey P, Boyle K, et al. Centralized video monitoring for patient safety: a Denver Health Lean journey. Nurs Econ. 2013;31(6):298-306.
10. Sand-Jecklin K, Johnson JR, Tylka S. Protecting patient safety: can video monitoring prevent falls in high-risk patient populations? J Nurs Care Qual. 2016;31(2):131-138. doi: 10.1097/NCQ.0000000000000163.
11. Burtson PL, Vento L. Sitter reduction through mobile video monitoring: a nurse-driven sitter protocol and administrative oversight. J Nurs Adm. 2015;45(7-8):363-369. doi: 10.1097/NNA.0000000000000216.
12. Centers for Disease Control and Prevention. ICD-9-CM Guidelines, 9th ed. https://www.cdc.gov/nchs/data/icd/icd9cm_guidelines_2011.pdf. Accessed April 11, 2018.
13. Centers for Disease Control and Prevention. ICD-9-CM Code Conversion Table. https://www.cdc.gov/nchs/data/icd/icd-9-cm_fy14_cnvtbl_final.pdf. Accessed April 11, 2018.
14. Cournan M, Fusco-Gessick B, Wright L. Improving patient safety through video monitoring. Rehabil Nurs. 2016. doi: 10.1002/rnj.308.
15. Rochefort CM, Ward L, Ritchie JA, Girard N, Tamblyn RM. Patient and nurse staffing characteristics associated with high sitter use costs. J Adv Nurs. 2012;68(8):1758-1767. doi: 10.1111/j.1365-2648.2011.05864.x.

Journal of Hospital Medicine. 2019;14(6):357-360. Published online first April 8, 2019.

© 2019 Society of Hospital Medicine

Correspondence: Kristin A Shadman, MD; E-mail: kshadman@pediatrics.wisc.edu; Telephone: 608-265-8561.

Interhospital Transfer: Transfer Processes and Patient Outcomes


The transfer of patients between acute care hospitals (interhospital transfer [IHT]) occurs regularly among patients with a variety of diagnoses, in theory, to gain access to unique specialty services and/or a higher level of care, among other reasons.1,2

However, the practice of IHT is variable and nonstandardized,3,4 and existing data largely suggest that transferred patients experience worse outcomes, including longer length of stay, higher hospitalization costs, longer ICU time, and greater mortality, even with rigorous adjustment for confounding by indication.5,6 Though there are many possible reasons for these findings, the existing literature suggests that aspects of the transfer process itself may contribute to these outcomes.2,6,7

Understanding which aspects of the transfer process contribute to poor patient outcomes is a key first step toward the development of targeted quality improvement initiatives to improve this process of care. In this study, we aim to examine the association between select characteristics of the transfer process, including the timing of transfer and workload of the admitting physician team, and clinical outcomes among patients undergoing IHT.

METHODS

Data and Study Population

We performed a retrospective analysis of patients ≥18 years of age who were transferred to Brigham and Women’s Hospital (BWH), a 777-bed tertiary care hospital, from another acute care hospital between January 2005 and September 2013. Dates of inclusion were purposefully chosen prior to BWH implementation of a new electronic health records system to avoid potential information bias. As at most academic medical centers, night coverage at BWH differs by service and includes a combination of long-call admitting teams and night float coverage. On weekends, many services are less well staffed, and some procedures may only be available if needed emergently. Some services have caps on the daily number of admissions or total patient census, but none have caps on the number of discharges per day. Patients were excluded from analysis if they left BWH against medical advice, were transferred from closely affiliated hospitals with shared personnel and electronic health records (Brigham and Women’s Faulkner Hospital, Dana Farber Cancer Institute), were transferred from inpatient psychiatric or inpatient hospice facilities, or were transferred to obstetrics or nursery services. Data were obtained from administrative sources and the Research Patient Data Repository (RPDR), a centralized clinical data repository that gathers data from various hospital legacy systems and stores them in one data warehouse.8 Our study was approved by the Partners Institutional Review Board (IRB) with a waiver of patient consent.

Transfer Process Characteristics

Predictors included select characteristics of the transfer process:

(1) Day of week of transfer, dichotomized into Friday through Sunday (“weekend”) versus Monday through Thursday (“weekday”);9 Friday was included with “weekend” given the suggestion of increased volume of transfers in advance of the weekend.

(2) Time of arrival of the transferred patient, categorized into “daytime” (7 am-5 pm), “evening” (5 pm-10 pm), and “nighttime” (10 pm-7 am), with daytime as the reference group.

(3) Admitting team “busyness” on the day of patient transfer, defined as the total number of additional patient admissions and patient discharges performed by the admitting team on the calendar day of patient arrival, as has been used in prior research,10 and categorized into quartiles with the lowest quartile as the reference group; service-specific quartiles were calculated and used for stratified analyses (described below).

(4) “Time delay” between patient acceptance for transfer and patient arrival at BWH, categorized into 0-12 hours, 12-24 hours, 24-48 hours, and >48 hours, with 12-24 hours as the reference group (anticipating that a time delay of 0-12 hours would be reflective of “sicker” patients in need of expedited transfer).
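
To make these definitions concrete, the sketch below derives all four predictors from raw timestamps and team counts. This is an illustration only (the study’s analysis was performed in SAS); the column names and the two example rows are invented for the example.

```python
# Hypothetical derivation of the four transfer process predictors.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "accept_time":  pd.to_datetime(["2010-03-05 09:15", "2010-03-06 23:40"]),
    "arrival_time": pd.to_datetime(["2010-03-05 20:30", "2010-03-08 02:10"]),
    "team_admissions": [3, 1],  # other admissions by the admitting team that day
    "team_discharges": [2, 0],  # discharges by the admitting team that day
})

# (1) Weekend = Friday through Sunday (dt.weekday: Monday = 0 ... Sunday = 6)
df["weekend"] = df["arrival_time"].dt.weekday >= 4

# (2) Arrival period: daytime 7 am-5 pm, evening 5 pm-10 pm, else nighttime
hour = df["arrival_time"].dt.hour
df["arrival_period"] = np.select(
    [hour.between(7, 16), hour.between(17, 21)], ["day", "evening"],
    default="night",
)

# (3) Team busyness: other admissions + discharges that calendar day, cut into
# quartiles (the study computed service-specific quartiles, e.g. via a groupby)
df["busyness"] = df["team_admissions"] + df["team_discharges"]
df["busyness_q"] = pd.qcut(df["busyness"].rank(method="first"), 4, labels=False)

# (4) Delay from transfer acceptance to arrival, binned as in the study
delay_h = (df["arrival_time"] - df["accept_time"]).dt.total_seconds() / 3600
df["delay_cat"] = pd.cut(delay_h, bins=[0, 12, 24, 48, np.inf],
                         labels=["0-12h", "12-24h", "24-48h", ">48h"])
```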

Outcomes

Outcomes included transfer to the intensive care unit (ICU) within 48 hours of arrival and 30-day mortality from date of index admission.5,6

Patient Characteristics

Covariates for adjustment included patient age, sex, race, Elixhauser comorbidity score,11 Diagnosis-Related Group (DRG) weight, insurance status, year of admission, number of preadmission medications, and service of admission.

Statistical Analyses

We used descriptive statistics to display baseline characteristics and performed a series of univariable and multivariable logistic regression models to obtain the adjusted odds of each outcome for each transfer process characteristic, adjusting for all covariates (PROC LOGISTIC; SAS Institute, Cary, North Carolina). For analyses of ICU transfer within 48 hours of arrival, all patients initially admitted to the ICU at the time of transfer were excluded.
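
For readers who want to mirror this modeling structure outside of SAS, a statsmodels analogue is sketched below on synthetic data with a deliberately reduced covariate set; the variable names follow the hypothetical predictor sketch above, and the reference groups match the Methods. Exponentiating the fitted coefficients and their confidence limits yields adjusted odds ratios with 95% CIs, the quantities reported in the Results.

```python
# Illustrative statsmodels analogue of the multivariable logistic models
# (the study used SAS PROC LOGISTIC); data and covariates here are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "icu_48h": rng.integers(0, 2, n),  # outcome: ICU transfer within 48 hours
    "arrival_period": rng.choice(["day", "evening", "night"], n),
    "weekend": rng.integers(0, 2, n),
    "busyness_q": rng.integers(0, 4, n),
    "delay_cat": rng.choice(["0-12h", "12-24h", "24-48h", ">48h"], n),
    "age": rng.normal(65, 15, n),
    "elixhauser": rng.integers(0, 10, n),  # stand-in for the full covariate set
})

model = smf.logit(
    "icu_48h ~ C(arrival_period, Treatment('day')) + weekend"
    " + C(busyness_q, Treatment(0)) + C(delay_cat, Treatment('12-24h'))"
    " + age + elixhauser",
    data=df,
).fit(disp=False)

print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% CIs on the odds ratio scale
```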

In secondary analyses, we used a combined day-of-week and time-of-day variable (ie, Monday day, Monday evening, Monday night, Tuesday day, and so on, with Monday day as the reference group) to obtain a more detailed evaluation of the timing of transfer on patient outcomes. We also performed analyses stratified by service of admission (ie, at the time of transfer to BWH) to evaluate the association of each transfer process characteristic with the adjusted odds of 30-day mortality, adjusting for all covariates. For all analyses, two-sided P values < .05 were considered significant.
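
Continuing the synthetic sketch above (reusing df, rng, n, np, pd, and smf), the combined exposure is simply the concatenation of day-of-week and arrival period, with “Monday day” as the reference, and the stratified analysis refits the model within each admitting service; the outcome, day, and service columns here are again invented.

```python
# Combined day-of-week/time-of-day exposure (21 levels) and stratified refits.
days = pd.Series(rng.choice(
    ["Monday", "Tuesday", "Wednesday", "Thursday",
     "Friday", "Saturday", "Sunday"], n))
df["day_time"] = days + " " + df["arrival_period"]
df["mortality_30d"] = rng.integers(0, 2, n)

combined = smf.logit(
    "mortality_30d ~ C(day_time, Treatment('Monday day')) + age + elixhauser",
    data=df,
).fit(disp=False)

# Service-stratified models: one refit per admitting service
df["service"] = rng.choice(["cardiology", "CT surgery", "GI surgery"], n)
for service, grp in df.groupby("service"):
    m = smf.logit(
        "mortality_30d ~ C(arrival_period, Treatment('day')) + age + elixhauser",
        data=grp,
    ).fit(disp=False)
    print(service, np.exp(m.params).round(2).to_dict())
```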

RESULTS

Overall, 24,352 patients met our inclusion criteria and underwent IHT, of whom 2,174 (8.9%) died within 30 days. Of the 22,910 transferred patients originally admitted to a non-ICU service, 5,464 (23.8%) underwent ICU transfer within 48 hours of arrival. Cohort characteristics are shown in Table 1.

Multivariable regression analyses demonstrated no significant association between weekend (versus weekday) transfer or increased time delay between patient acceptance and arrival (>48 hours) and adjusted odds of ICU transfer within 48 hours or 30-day mortality. However, they did demonstrate that nighttime (versus daytime) transfer was associated with greater adjusted odds of both ICU transfer and 30-day mortality. Increased admitting team busyness was associated with lower adjusted odds of ICU transfer but was not significantly associated with adjusted odds of 30-day mortality (Table 2). As expected, decreased time delay between patient acceptance and arrival (0-12 hours) was associated with increased adjusted odds of both ICU transfer (adjusted OR 2.68; 95% CI 2.29, 3.15) and 30-day mortality (adjusted OR 1.25; 95% CI 1.03, 1.53) compared with 12-24 hours (results not shown). Time delay >48 hours was not associated with either outcome.

Regression analyses with the combined day/time variable demonstrated that compared with Monday daytime transfer, Sunday night transfer was significantly associated with increased adjusted odds of 30-day mortality, and Friday night transfer was associated with a trend toward increased 30-day mortality (adjusted OR [aOR] 1.88; 95% CI 1.25, 2.82, and aOR 1.43; 95% CI 0.99, 2.06, respectively). We also found that all nighttime transfers (ie, Monday through Sunday night) were associated with increased adjusted odds of ICU transfer within 48 hours (as compared with Monday daytime transfer). Other days/time analyses were not significant.

Univariable and multivariable analyses stratified by service were performed (Appendix). Multivariable stratified analyses demonstrated that weekend transfer, nighttime transfer, and increased admitting team busyness were associated with increased adjusted odds of 30-day mortality among cardiothoracic (CT) and gastrointestinal (GI) surgical service patients. Increased admitting team busyness was also associated with increased mortality among ICU service patients but was associated with decreased mortality among cardiology service patients. An increased time delay between patient acceptance and arrival was associated with decreased mortality among CT and GI surgical service patients (Figure; Appendix). Other adjusted stratified outcomes were not significant.

DISCUSSION

In this study of 24,352 patients undergoing IHT, we found no significant association between weekend transfer or increased time delay between transfer acceptance and arrival and patient outcomes in the cohort as a whole; but we found that nighttime transfer is associated with increased adjusted odds of both ICU transfer within 48 hours and 30-day mortality. Our analyses combining day-of-week and time-of-day demonstrate that Sunday night transfer is particularly associated with increased adjusted odds of 30-day mortality (as compared with Monday daytime transfer), and show a trend toward increased mortality with Friday night transfers. These detailed analyses otherwise reinforce that nighttime transfer across all nights of the week is associated with increased adjusted odds of ICU transfer within 48 hours. We also found that increased admitting team busyness on the day of patient transfer is associated with decreased odds of ICU transfer, though this may solely be reflective of higher turnover services (ie, cardiology) caring for lower acuity patients, as suggested by secondary analyses stratified by service. In addition, secondary analyses demonstrated differential associations between weekend transfers, nighttime transfers, and increased team busyness on the odds of 30-day mortality based on service of transfer. These analyses showed that patients transferred to higher acuity services requiring procedural care, including CT surgery, GI surgery, and Medical ICU, do worse under all three circumstances as compared with patients transferred to other services. Secondary analyses also demonstrated that increased time delay between patient acceptance and arrival is inversely associated with 30-day mortality among CT and GI surgery service patients, likely reflecting lower acuity patients (ie, less sick patients are less rapidly transferred).

There are several possible explanations for these findings. Patients transferred to surgical services at night may reflect a more urgent need for surgery and include a sicker cohort of patients, possibly explaining these findings. Alternatively, or in addition, both weekend and nighttime hospital admission expose patients to similar potential risks, ie, limited resources available during off-peak hours. Our findings could, therefore, reflect the possibility that patients transferred to higher acuity services in need of procedural care are most vulnerable to off-peak timing of transfer. Similar data on patients admitted through the emergency room (ER) find the strongest effect of off-peak admissions on patients in need of procedures, including those with GI hemorrhage,12 atrial fibrillation,13 and acute myocardial infarction (AMI),14 arguably because of the limited availability of necessary interventions. Patients undergoing IHT are a sicker cohort than those admitted through the ER and, therefore, may be even more vulnerable to these issues.3,5 This is supported by our findings that Sunday night transfers (and the trend toward Friday night transfers) are associated with greater mortality compared with Monday daytime transfers, when at-the-ready resources and/or specialty personnel may be less available (Sunday night) and delays until receipt of necessary procedures may be longer (Friday night). Though we did not observe similar results among cardiology service transfers, as might be expected based on existing literature,13,14 this subset of patients includes more heterogeneous diagnoses (ie, not solely those requiring acute intervention) and exhibited a low level of acuity (low Elixhauser score and DRG weight, data not shown).

We also found that increased admitting team busyness on the day of patient transfer is associated with increased odds of 30-day mortality among CT surgery, GI surgery, and ICU service transfers. As above, there are several possible explanations for this finding. It is possible that among these services, only the sickest/neediest patients are accepted for transfer when teams are busiest. Though this explanation is possible, the measure of team “busyness” includes patient discharges, thereby increasing, not decreasing, availability for incoming patients, making it less likely. Alternatively, this finding may reflect reverse causation, ie, teams have less ability to discharge/admit new patients when caring for particularly sick/unstable patient transfers, though this assumes that transferred patients arrive earlier in the day (eg, in time to influence discharge decisions), which infrequently occurs (Table 1). Lastly, it is possible that this subset of patients is more vulnerable to the workload of the team caring for them at the time of their arrival. With high patient turnover (admissions/discharges), the time allocated to each patient’s care may be diminished (ie, “work compression,” trying to do the same amount of work in less time), leaving less time to care for the transferred patient. This has been shown to influence patient outcomes at the time of patient discharge.10

In trying to understand why we observed an inverse relationship between admitting team busyness and odds of ICU transfer within 48 hours, we believe this finding is largely driven by cardiology service transfers, which comprise the highest volume of transferred patients in our cohort (Table 1), and are low acuity patients. Within this population of patients, admitting team busyness is likely a surrogate variable for high turnover/low acuity. This idea is supported by our findings that admitting team busyness is associated with decreased adjusted odds of 30-day mortality in this group (and only in this group).

Similarly, our observed inverse relationship between increased time delay and 30-day mortality among CT and GI surgical service patients is also likely reflective of lower acuity patients. We anticipated that decreased time delay (0-12 hours) would be reflective of greater patient acuity (supported by our findings that decreased time delay is associated with increased odds of ICU transfer and 30-day mortality). However, our findings also suggest that increased time delay (>48 hours) is similarly representative of lower patient acuity and therefore an imperfect measure of discontinuity and/or harmful delays in care during IHT (see limitations below).

Our study is subject to several limitations. First, this is a single-site study; given known variation in transfer practices between hospitals,3 it is possible that our findings are not generalizable. However, given similar existing data on patients admitted through the ER, our findings are likely reflective of IHT to similar tertiary referral hospitals. Second, although we adjusted for patient characteristics, there remains the possibility of unmeasured confounding and other bias that account for our results, as discussed. Third, although the definition of “busyness” used in this study was chosen based on prior data demonstrating an effect on patient outcomes,10 we did not include other measures of busyness that may influence outcomes of transferred patients, such as overall team census or hospital busyness. However, the workload associated with a high volume of patient admissions and discharges is arguably a greater reflection of “work compression” for the admitting team than overall team census, which may reflect a more static workload with less impact on the care of a newly transferred patient. Also, although hospital census may influence the ability to transfer (ie, a lower volume of transferred patients during times of high hospital census), this likely has less of an impact on the direct care of transferred patients than the admitting team’s workload. It is more likely that it would serve as a confounder (eg, sicker patients are accepted for transfer despite high hospital census, while lower risk patients are not).

Nevertheless, future studies should further evaluate the association between other measures of busyness/workload and outcomes of transferred patients. Lastly, though we anticipated that time delay between transfer acceptance and arrival would be correlated with patient acuity, we hypothesized that longer delay might affect patient continuity and communication and thereby impact patient outcomes. However, our results demonstrate that our measurement of this variable was unable to disentangle patient acuity from our intended evaluation of these vulnerable aspects of IHT. A more detailed evaluation is likely required to explore more fully the potential challenges that may occur with greater time delays (eg, suboptimal communication regarding changes in clinical status during this time period, delays in treatment). Similarly, though our study evaluates the association of nighttime and weekend transfer (and their interaction) with patient outcomes, we did not evaluate other intermediate outcomes that may be more affected by the timing of transfer, such as diagnostic errors or delays in procedural care, which warrant further investigation. We do not directly examine the underlying reasons for our observed associations, and thus more research is needed to identify them as well as to design and evaluate solutions.

Collectively, our findings suggest that high acuity patients in need of procedural care experience worse outcomes during off-peak times of transfer, and during times of high care-team workload. Though further research is needed to identify underlying reasons to explain our findings, both the timing of patient transfer (when modifiable) and workload of the team caring for the patient on arrival may serve as potential targets for interventions to improve the quality and safety of IHT for patients at greatest risk.

Disclosures

Dr. Mueller and Ms. Fiskio have nothing to disclose. Dr. Schnipper is the recipient of grant funding from Mallinckrodt Pharmaceuticals to conduct an investigator-initiated study of predictors and impact of opioid-related adverse drug events.

References

1. Iwashyna TJ. The incomplete infrastructure for interhospital patient transfer. Crit Care Med. 2012;40(8):2470-2478. https://doi.org/10.1097/CCM.0b013e318254516f.
2. Mueller SK, Shannon E, Dalal A, Schnipper JL, Dykes P. Patient and physician experience with interhospital transfer: a qualitative study. J Patient Saf. 2018. https://doi.org/10.1097/PTS.0000000000000501.
3. Mueller SK, Zheng J, Orav EJ, Schnipper JL. Rates, predictors and variability of interhospital transfers: a national evaluation. J Hosp Med. 2017;12(6):435-442. https://doi.org/10.12788/jhm.2747.
4. Bosk EA, Veinot T, Iwashyna TJ. Which patients and where: a qualitative study of patient transfers from community hospitals. Med Care. 2011;49(6):592-598. https://doi.org/10.1097/MLR.0b013e31820fb71b.
5. Sokol-Hessner L, White AA, Davis KF, Herzig SJ, Hohmann SF. Interhospital transfer patients discharged by academic hospitalists and general internists: characteristics and outcomes. J Hosp Med. 2016;11(4):245-250. https://doi.org/10.1002/jhm.2515.
6. Mueller S, Zheng J, Orav EJ, Schnipper JL. Inter-hospital transfer and patient outcomes: a retrospective cohort study. BMJ Qual Saf. 2018. https://doi.org/10.1136/bmjqs-2018-008087.
7. Mueller SK, Schnipper JL. Physician perspectives on interhospital transfers. J Patient Saf. 2016. https://doi.org/10.1097/PTS.0000000000000312.
8. Research Patient Data Registry (RPDR). http://rc.partners.org/rpdr. Accessed April 20, 2018.
9. Bell CM, Redelmeier DA. Mortality among patients admitted to hospitals on weekends as compared with weekdays. N Engl J Med. 2001;345(9):663-668. https://doi.org/10.1056/NEJMsa003376.
10. Mueller SK, Donze J, Schnipper JL. Intern workload and discontinuity of care on 30-day readmission. Am J Med. 2013;126(1):81-88. https://doi.org/10.1016/j.amjmed.2012.09.003.
11. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.
12. Ananthakrishnan AN, McGinley EL, Saeian K. Outcomes of weekend admissions for upper gastrointestinal hemorrhage: a nationwide analysis. Clin Gastroenterol Hepatol. 2009;7(3):296-302.e1. https://doi.org/10.1016/j.cgh.2008.08.013.
13. Deshmukh A, Pant S, Kumar G, Bursac Z, Paydak H, Mehta JL. Comparison of outcomes of weekend versus weekday admissions for atrial fibrillation. Am J Cardiol. 2012;110(2):208-211. https://doi.org/10.1016/j.amjcard.2012.03.011.
14. Clarke MS, Wills RA, Bowman RV, et al. Exploratory study of the ‘weekend effect’ for acute medical admissions to public hospitals in Queensland, Australia. Intern Med J. 2010;40(11):777-783. https://doi.org/10.1111/j.1445-5994.2009.02067.x.

Article PDF
Issue
Journal of Hospital Medicine 14(8)
Publications
Topics
Page Number
486-491
Sections
Files
Files
Article PDF
Article PDF
Related Articles

The transfer of patients between acute care hospitals (interhospital transfer [IHT]) occurs regularly among patients with a variety of diagnoses, in theory, to gain access to unique specialty services and/or a higher level of care, among other reasons.1,2

However, the practice of IHT is variable and nonstandardized,3,4 and existing data largely suggests that transferred patients experience worse outcomes, including longer length of stay, higher hospitalization costs, longer ICU time, and greater mortality, even with rigorous adjustment for confounding by indication.5,6 Though there are many possible reasons for these findings, existing literature suggests that there may be aspects of the transfer process itself which contribute to these outcomes.2,6,7

Understanding which aspects of the transfer process contribute to poor patient outcomes is a key first step toward the development of targeted quality improvement initiatives to improve this process of care. In this study, we aim to examine the association between select characteristics of the transfer process, including the timing of transfer and workload of the admitting physician team, and clinical outcomes among patients undergoing IHT.

METHODS

Data and Study Population

We performed a retrospective analysis of patients ≥age 18 years who transferred to Brigham and Women’s Hospital (BWH), a 777-bed tertiary care hospital, from another acute care hospital between January 2005, and September 2013. Dates of inclusion were purposefully chosen prior to BWH implementation of a new electronic health records system to avoid potential information bias. As at most academic medical centers, night coverage at BWH differs by service and includes a combination of long-call admitting teams and night float coverage. On weekends, many services are less well staffed, and some procedures may only be available if needed emergently. Some services have caps on the daily number of admissions or total patient census, but none have caps on the number of discharges per day. Patients were excluded from analysis if they left BWH against medical advice, were transferred from closely affiliated hospitals with shared personnel and electronic health records (Brigham and Women’s Faulkner Hospital, Dana Farber Cancer Institute), transferred from inpatient psychiatric or inpatient hospice facilities, or transferred to obstetrics or nursery services. Data were obtained from administrative sources and the research patient data repository (RPDR), a centralized clinical data repository that gathers data from various hospital legacy systems and stores them in one data warehouse.8 Our study was approved by the Partners Institutional Review Board (IRB) with a waiver of patient consent.

Transfer Process Characteristics

Predictors included select characteristics of the transfer process, including (1) Day of week of transfer, dichotomized into Friday through Sunday (“weekend”), versus Monday through Thursday (“weekday”);9 Friday was included with “weekend” given the suggestion of increased volume of transfers in advance of the weekend; (2) Time of arrival of the transferred patient, categorized into “daytime” (7 am-5 pm), “evening” (5 pm -10 pm), and “nighttime” (10 pm -7 am), with daytime as the reference group; (3) Admitting team “busyness” on day of patient transfer, defined as the total number of additional patient admissions and patient discharges performed by the admitting team on the calendar day of patient arrival, as has been used in prior research,10 and categorized into quartiles with lowest quartile as the reference group. Service-specific quartiles were calculated and used for stratified analyses (described below); and (4) “Time delay” between patient acceptance for transfer and patient arrival at BWH, categorized into 0-12 hours, 12-24 hours, 24-48 hours, and >48 hours, with 12-24 hours as the reference group (anticipating that time delay of 0-12 hours would be reflective of “sicker” patients in need of expedited transfer).

 

 

Outcomes

Outcomes included transfer to the intensive care unit (ICU) within 48 hours of arrival and 30-day mortality from date of index admission.5,6

Patient Characteristics

Covariates for adjustment included: patient age, sex, race, Elixhauser comorbidity score,11 Diagnosis-Related Group (DRG)-weight, insurance status, year of admission, number of preadmission medications, and service of admission.

Statistical Analyses

We used descriptive statistics to display baseline characteristics and performed a series of univariable and multivariable logistic regression models to obtain the adjusted odds of each transfer process characteristic on each outcome, adjusting for all covariates (proc logistic, SAS Statistical Software, Cary, North Carolina). For analyses of ICU transfer within 48 hours of arrival, all patients initially admitted to the ICU at time of transfer were excluded.

In the secondary analyses, we used a combined day-of-week and time-of-day variable (ie, Monday day, Monday evening, Monday night, Tuesday day, and so on, with Monday day as the reference group) to obtain a more detailed evaluation of timing of transfer on patient outcomes. We also performed stratified analyses to evaluate each transfer process characteristic on adjusted odds of 30-day mortality stratified by service of admission (ie, at the time of transfer to BWH), adjusting for all covariates. For all analyses, two-sided P values < .05 were considered significant.

RESULTS

Overall, 24,352 patients met our inclusion criteria and underwent IHT, of whom 2,174 (8.9%) died within 30 days. Of the 22,910 transferred patients originally admitted to a non-ICU service, 5,464 (23.8%) underwent ICU transfer within 48 hours of arrival. Cohort characteristics are shown in Table 1.

Multivariable regression analyses demonstrated no significant association between weekend (versus weekday) transfer or increased time delay between patient acceptance and arrival (>48 hours) and adjusted odds of ICU transfer within 48 hours or 30-day mortality. However, they did demonstrate that nighttime (versus daytime) transfer was associated with greater adjusted odds of both ICU transfer and 30-day mortality. Increased admitting team busyness was associated with lower adjusted odds of ICU transfer but was not significantly associated with adjusted odds of 30-day mortality (Table 2). As expected, decreased time delay between patient acceptance and arrival (0-12 hours) was associated with increased adjusted odds of both ICU transfer (adjusted OR 2.68; 95% CI 2.29, 3.15) and 30-day mortality (adjusted OR 1.25; 95% CI 1.03, 1.53) compared with 12-24 hours (results not shown). Time delay >48 hours was not associated with either outcome.

Regression analyses with the combined day/time variable demonstrated that compared with Monday daytime transfer, Sunday night transfer was significantly associated with increased adjusted odds of 30-day mortality, and Friday night transfer was associated with a trend toward increased 30-day mortality (adjusted OR [aOR] 1.88; 95% CI 1.25, 2.82, and aOR 1.43; 95% CI 0.99, 2.06, respectively). We also found that all nighttime transfers (ie, Monday through Sunday night) were associated with increased adjusted odds of ICU transfer within 48 hours (as compared with Monday daytime transfer). Other days/time analyses were not significant.

Univariable and multivariable analyses stratified by service were performed (Appendix). Multivariable stratified analyses demonstrated that weekend transfer, nighttime transfer, and increased admitting team busyness were associated with increased adjusted odds of 30-day mortality among cardiothoracic (CT) and gastrointestinal (GI) surgical service patients. Increased admitting team busyness was also associated with increased mortality among ICU service patients but was associated with decreased mortality among cardiology service patients. An increased time delay between patient acceptance and arrival was associated with decreased mortality among CT and GI surgical service patients (Figure; Appendix). Other adjusted stratified outcomes were not significant.

 

 

DISCUSSION

In this study of 24,352 patients undergoing IHT, we found no significant association between weekend transfer or increased time delay between transfer acceptance and arrival and patient outcomes in the cohort as a whole; but we found that nighttime transfer is associated with increased adjusted odds of both ICU transfer within 48 hours and 30-day mortality. Our analyses combining day-of-week and time-of-day demonstrate that Sunday night transfer is particularly associated with increased adjusted odds of 30-day mortality (as compared with Monday daytime transfer), and show a trend toward increased mortality with Friday night transfers. These detailed analyses otherwise reinforce that nighttime transfer across all nights of the week is associated with increased adjusted odds of ICU transfer within 48 hours. We also found that increased admitting team busyness on the day of patient transfer is associated with decreased odds of ICU transfer, though this may solely be reflective of higher turnover services (ie, cardiology) caring for lower acuity patients, as suggested by secondary analyses stratified by service. In addition, secondary analyses demonstrated differential associations between weekend transfers, nighttime transfers, and increased team busyness on the odds of 30-day mortality based on service of transfer. These analyses showed that patients transferred to higher acuity services requiring procedural care, including CT surgery, GI surgery, and Medical ICU, do worse under all three circumstances as compared with patients transferred to other services. Secondary analyses also demonstrated that increased time delay between patient acceptance and arrival is inversely associated with 30-day mortality among CT and GI surgery service patients, likely reflecting lower acuity patients (ie, less sick patients are less rapidly transferred).

There are several possible explanations for these findings. Patients transferred to surgical services at night may reflect a more urgent need for surgery and include a sicker cohort of patients, possibly explaining these findings. Alternatively, or in addition, both weekend and nighttime hospital admission expose patients to similar potential risks, ie, limited resources available during off-peak hours. Our findings could, therefore, reflect the possibility that patients transferred to higher acuity services in need of procedural care are most vulnerable to off-peak timing of transfer. Similar data looking at patients admitted through the emergency room (ER) find the strongest effect of off-peak admissions on patients in need of procedures, including GI hemorrhage,12 atrial fibrillation13 and acute myocardial infarction (AMI),14 arguably because of the limited availability of necessary interventions. Patients undergoing IHT are a sicker cohort of patients than those admitted through the ER, and, therefore, may be even more vulnerable to these issues.3,5 This is supported by our findings that Sunday night transfers (and trend toward Friday night transfers) are associated with greater mortality compared with Monday daytime transfers, when at-the-ready resources and/or specialty personnel may be less available (Sunday night), and delays until receipt of necessary procedures may be longer (Friday night). Though we did not observe similar results among cardiology service transfers, as may be expected based on existing literature,13,14 this subset of patients includes more heterogeneous diagnoses, (ie, not solely those that require acute intervention) and exhibited a low level of acuity (low Elixhauser score and DRG-weight, data not shown).



We also found that increased admitting team busyness on the day of patient transfer is associated with increased odds of 30-day mortality among CT surgery, GI surgery, and ICU service transfers. As above, there are several possible explanations for this finding. It is possible that among these services, only the sickest/neediest patients are accepted for transfer when teams are busiest, explaining our findings. Though this explanation is possible, the measure of team “busyness” includes patient discharge, thereby increasing, not decreasing, availability for incoming patients, making this explanation less likely. Alternatively, it is possible that this finding is reflective of reverse causation, ie, that teams have less ability to discharge/admit new patients when caring for particularly sick/unstable patient transfers, though this assumes that transferred patients arrive earlier in the day, (eg, in time to influence discharge decisions), which infrequently occurs (Table 1). Lastly, it is possible that this subset of patients will be more vulnerable to the workload of the team that is caring for them at the time of their arrival. With high patient turnover (admissions/discharges), the time allocated to each patient’s care may be diminished (ie, “work compression,” trying to do the same amount of work in less time), and may result in decreased time to care for the transferred patient. This has been shown to influence patient outcomes at the time of patient discharge.10

In trying to understand why we observed an inverse relationship between admitting team busyness and odds of ICU transfer within 48 hours, we believe this finding is largely driven by cardiology service transfers, which comprise the highest volume of transferred patients in our cohort (Table 1), and are low acuity patients. Within this population of patients, admitting team busyness is likely a surrogate variable for high turnover/low acuity. This idea is supported by our findings that admitting team busyness is associated with decreased adjusted odds of 30-day mortality in this group (and only in this group).

Similarly, our observed inverse relationship between increased time delay and 30-day mortality among CT and GI surgical service patients is also likely reflective of lower acuity patients. We anticipated that decreased time delay (0-12 hours) would be reflective of greater patient acuity (supported by our findings that decreased time delay is associated with increased odds of ICU transfer and 30-day mortality). However, our findings also suggest that increased time delay (>48 hours) is similarly representative of lower patient acuity and therefore an imperfect measure of discontinuity and/or harmful delays in care during IHT (see limitations below).

Our study is subject to several limitations. This is a single site study; given known variation in transfer practices between hospitals,3 it is possible that our findings are not generalizable. However, given similar existing data on patients admitted through the ER, it is likely our findings may be reflective of IHT to similar tertiary referral hospitals. Second, although we adjusted for patient characteristics, there remains the possibility of unmeasured confounding and other bias that account for our results, as discussed. Third, although the definition of “busyness” used in this study was chosen based on prior data demonstrating an effect on patient outcomes,10 we did not include other measures of busyness that may influence outcomes of transferred patients such as overall team census or hospital busyness. However, the workload associated with a high volume of patient admissions and discharges is arguably a greater reflection of “work compression” for the admitting team compared with overall team census, which may reflect a more static workload with less impact on the care of a newly transferred patient. Also, although hospital census may influence the ability to transfer (ie, lower volume of transferred patients during times of high hospital census), this likely has less of an impact on the direct care of transferred patients than the admitting team’s workload. It is more likely that it would serve as a confounder (eg, sicker patients are accepted for transfer despite high hospital census, while lower risk patients are not).

Nevertheless, future studies should further evaluate the association with other measures of busyness/workload and outcomes of transferred patients. Lastly, though we anticipated time delay between transfer acceptance and arrival would be correlated with patient acuity, we hypothesized that longer delay might affect patient continuity and communication and impact patient outcomes. However, our results demonstrate that our measurement of this variable was unsuccessful in unraveling patient acuity from our intended evaluation of these vulnerable aspects of IHT. It is likely that a more detailed evaluation is required to explore potential challenges more fully that may occur with greater time delays (eg, suboptimal communication regarding changes in clinical status during this time period, delays in treatment). Similarly, though our study evaluates the association between nighttime and weekend transfer (and the interaction between these) with patient outcomes, we did not evaluate other intermediate outcomes that may be more affected by the timing of transfer, such as diagnostic errors or delays in procedural care, which warrant further investigation. We do not directly examine the underlying reasons that explain our observed associations, and thus more research is needed to identify these as well as design and evaluate solutions.

Collectively, our findings suggest that high acuity patients in need of procedural care experience worse outcomes during off-peak times of transfer, and during times of high care-team workload. Though further research is needed to identify underlying reasons to explain our findings, both the timing of patient transfer (when modifiable) and workload of the team caring for the patient on arrival may serve as potential targets for interventions to improve the quality and safety of IHT for patients at greatest risk.

 

 

Disclosures

Dr. Mueller and Dr. Schnipper have nothing to disclose. Ms. Fiskio has nothing to disclose. Dr. Schnipper is the recipient of grant funding from Mallinckrodt Pharmaceuticals to conduct an investigator-initiated study of predictors and impact of opioid-related adverse drug events.

The transfer of patients between acute care hospitals (interhospital transfer [IHT]) occurs regularly among patients with a variety of diagnoses, in theory, to gain access to unique specialty services and/or a higher level of care, among other reasons.1,2

However, the practice of IHT is variable and nonstandardized,3,4 and existing data largely suggests that transferred patients experience worse outcomes, including longer length of stay, higher hospitalization costs, longer ICU time, and greater mortality, even with rigorous adjustment for confounding by indication.5,6 Though there are many possible reasons for these findings, existing literature suggests that there may be aspects of the transfer process itself which contribute to these outcomes.2,6,7

Understanding which aspects of the transfer process contribute to poor patient outcomes is a key first step toward the development of targeted quality improvement initiatives to improve this process of care. In this study, we aim to examine the association between select characteristics of the transfer process, including the timing of transfer and workload of the admitting physician team, and clinical outcomes among patients undergoing IHT.

METHODS

Data and Study Population

We performed a retrospective analysis of patients ≥age 18 years who transferred to Brigham and Women’s Hospital (BWH), a 777-bed tertiary care hospital, from another acute care hospital between January 2005, and September 2013. Dates of inclusion were purposefully chosen prior to BWH implementation of a new electronic health records system to avoid potential information bias. As at most academic medical centers, night coverage at BWH differs by service and includes a combination of long-call admitting teams and night float coverage. On weekends, many services are less well staffed, and some procedures may only be available if needed emergently. Some services have caps on the daily number of admissions or total patient census, but none have caps on the number of discharges per day. Patients were excluded from analysis if they left BWH against medical advice, were transferred from closely affiliated hospitals with shared personnel and electronic health records (Brigham and Women’s Faulkner Hospital, Dana Farber Cancer Institute), transferred from inpatient psychiatric or inpatient hospice facilities, or transferred to obstetrics or nursery services. Data were obtained from administrative sources and the research patient data repository (RPDR), a centralized clinical data repository that gathers data from various hospital legacy systems and stores them in one data warehouse.8 Our study was approved by the Partners Institutional Review Board (IRB) with a waiver of patient consent.

Transfer Process Characteristics

Predictors included select characteristics of the transfer process, including (1) Day of week of transfer, dichotomized into Friday through Sunday (“weekend”), versus Monday through Thursday (“weekday”);9 Friday was included with “weekend” given the suggestion of increased volume of transfers in advance of the weekend; (2) Time of arrival of the transferred patient, categorized into “daytime” (7 am-5 pm), “evening” (5 pm -10 pm), and “nighttime” (10 pm -7 am), with daytime as the reference group; (3) Admitting team “busyness” on day of patient transfer, defined as the total number of additional patient admissions and patient discharges performed by the admitting team on the calendar day of patient arrival, as has been used in prior research,10 and categorized into quartiles with lowest quartile as the reference group. Service-specific quartiles were calculated and used for stratified analyses (described below); and (4) “Time delay” between patient acceptance for transfer and patient arrival at BWH, categorized into 0-12 hours, 12-24 hours, 24-48 hours, and >48 hours, with 12-24 hours as the reference group (anticipating that time delay of 0-12 hours would be reflective of “sicker” patients in need of expedited transfer).

 

 

Outcomes

Outcomes included transfer to the intensive care unit (ICU) within 48 hours of arrival and 30-day mortality from date of index admission.5,6

Patient Characteristics

Covariates for adjustment included: patient age, sex, race, Elixhauser comorbidity score,11 Diagnosis-Related Group (DRG)-weight, insurance status, year of admission, number of preadmission medications, and service of admission.

Statistical Analyses

We used descriptive statistics to display baseline characteristics and performed a series of univariable and multivariable logistic regression models to obtain the adjusted odds of each transfer process characteristic on each outcome, adjusting for all covariates (proc logistic, SAS Statistical Software, Cary, North Carolina). For analyses of ICU transfer within 48 hours of arrival, all patients initially admitted to the ICU at time of transfer were excluded.

In the secondary analyses, we used a combined day-of-week and time-of-day variable (ie, Monday day, Monday evening, Monday night, Tuesday day, and so on, with Monday day as the reference group) to obtain a more detailed evaluation of timing of transfer on patient outcomes. We also performed stratified analyses to evaluate each transfer process characteristic on adjusted odds of 30-day mortality stratified by service of admission (ie, at the time of transfer to BWH), adjusting for all covariates. For all analyses, two-sided P values < .05 were considered significant.

RESULTS

Overall, 24,352 patients met our inclusion criteria and underwent IHT, of whom 2,174 (8.9%) died within 30 days. Of the 22,910 transferred patients originally admitted to a non-ICU service, 5,464 (23.8%) underwent ICU transfer within 48 hours of arrival. Cohort characteristics are shown in Table 1.

Multivariable regression analyses demonstrated no significant association between weekend (versus weekday) transfer or increased time delay between patient acceptance and arrival (>48 hours) and adjusted odds of ICU transfer within 48 hours or 30-day mortality. However, they did demonstrate that nighttime (versus daytime) transfer was associated with greater adjusted odds of both ICU transfer and 30-day mortality. Increased admitting team busyness was associated with lower adjusted odds of ICU transfer but was not significantly associated with adjusted odds of 30-day mortality (Table 2). As expected, decreased time delay between patient acceptance and arrival (0-12 hours) was associated with increased adjusted odds of both ICU transfer (adjusted OR 2.68; 95% CI 2.29, 3.15) and 30-day mortality (adjusted OR 1.25; 95% CI 1.03, 1.53) compared with 12-24 hours (results not shown). Time delay >48 hours was not associated with either outcome.

Regression analyses with the combined day/time variable demonstrated that compared with Monday daytime transfer, Sunday night transfer was significantly associated with increased adjusted odds of 30-day mortality, and Friday night transfer was associated with a trend toward increased 30-day mortality (adjusted OR [aOR] 1.88; 95% CI 1.25, 2.82, and aOR 1.43; 95% CI 0.99, 2.06, respectively). We also found that all nighttime transfers (ie, Monday through Sunday night) were associated with increased adjusted odds of ICU transfer within 48 hours (as compared with Monday daytime transfer). Other days/time analyses were not significant.

Univariable and multivariable analyses stratified by service were performed (Appendix). Multivariable stratified analyses demonstrated that weekend transfer, nighttime transfer, and increased admitting team busyness were associated with increased adjusted odds of 30-day mortality among cardiothoracic (CT) and gastrointestinal (GI) surgical service patients. Increased admitting team busyness was also associated with increased mortality among ICU service patients but was associated with decreased mortality among cardiology service patients. An increased time delay between patient acceptance and arrival was associated with decreased mortality among CT and GI surgical service patients (Figure; Appendix). Other adjusted stratified outcomes were not significant.

 

 

DISCUSSION

In this study of 24,352 patients undergoing IHT, we found no significant association between weekend transfer or increased time delay between transfer acceptance and arrival and patient outcomes in the cohort as a whole; but we found that nighttime transfer is associated with increased adjusted odds of both ICU transfer within 48 hours and 30-day mortality. Our analyses combining day-of-week and time-of-day demonstrate that Sunday night transfer is particularly associated with increased adjusted odds of 30-day mortality (as compared with Monday daytime transfer), and show a trend toward increased mortality with Friday night transfers. These detailed analyses otherwise reinforce that nighttime transfer across all nights of the week is associated with increased adjusted odds of ICU transfer within 48 hours. We also found that increased admitting team busyness on the day of patient transfer is associated with decreased odds of ICU transfer, though this may solely be reflective of higher turnover services (ie, cardiology) caring for lower acuity patients, as suggested by secondary analyses stratified by service. In addition, secondary analyses demonstrated differential associations between weekend transfers, nighttime transfers, and increased team busyness on the odds of 30-day mortality based on service of transfer. These analyses showed that patients transferred to higher acuity services requiring procedural care, including CT surgery, GI surgery, and Medical ICU, do worse under all three circumstances as compared with patients transferred to other services. Secondary analyses also demonstrated that increased time delay between patient acceptance and arrival is inversely associated with 30-day mortality among CT and GI surgery service patients, likely reflecting lower acuity patients (ie, less sick patients are less rapidly transferred).

There are several possible explanations for these findings. Patients transferred to surgical services at night may reflect a more urgent need for surgery and include a sicker cohort of patients, possibly explaining these findings. Alternatively, or in addition, both weekend and nighttime hospital admission expose patients to similar potential risks, ie, limited resources available during off-peak hours. Our findings could, therefore, reflect the possibility that patients transferred to higher acuity services in need of procedural care are most vulnerable to off-peak timing of transfer. Similar data looking at patients admitted through the emergency room (ER) find the strongest effect of off-peak admissions on patients in need of procedures, including GI hemorrhage,12 atrial fibrillation13 and acute myocardial infarction (AMI),14 arguably because of the limited availability of necessary interventions. Patients undergoing IHT are a sicker cohort of patients than those admitted through the ER, and, therefore, may be even more vulnerable to these issues.3,5 This is supported by our findings that Sunday night transfers (and trend toward Friday night transfers) are associated with greater mortality compared with Monday daytime transfers, when at-the-ready resources and/or specialty personnel may be less available (Sunday night), and delays until receipt of necessary procedures may be longer (Friday night). Though we did not observe similar results among cardiology service transfers, as may be expected based on existing literature,13,14 this subset of patients includes more heterogeneous diagnoses, (ie, not solely those that require acute intervention) and exhibited a low level of acuity (low Elixhauser score and DRG-weight, data not shown).



We also found that increased admitting team busyness on the day of patient transfer is associated with increased odds of 30-day mortality among CT surgery, GI surgery, and ICU service transfers. As above, there are several possible explanations for this finding. It is possible that among these services, only the sickest/neediest patients are accepted for transfer when teams are busiest, explaining our findings. Though this explanation is possible, the measure of team “busyness” includes patient discharge, thereby increasing, not decreasing, availability for incoming patients, making this explanation less likely. Alternatively, it is possible that this finding is reflective of reverse causation, ie, that teams have less ability to discharge/admit new patients when caring for particularly sick/unstable patient transfers, though this assumes that transferred patients arrive earlier in the day, (eg, in time to influence discharge decisions), which infrequently occurs (Table 1). Lastly, it is possible that this subset of patients will be more vulnerable to the workload of the team that is caring for them at the time of their arrival. With high patient turnover (admissions/discharges), the time allocated to each patient’s care may be diminished (ie, “work compression,” trying to do the same amount of work in less time), and may result in decreased time to care for the transferred patient. This has been shown to influence patient outcomes at the time of patient discharge.10

In trying to understand why we observed an inverse relationship between admitting team busyness and odds of ICU transfer within 48 hours, we believe this finding is largely driven by cardiology service transfers, which comprise the highest volume of transferred patients in our cohort (Table 1), and are low acuity patients. Within this population of patients, admitting team busyness is likely a surrogate variable for high turnover/low acuity. This idea is supported by our findings that admitting team busyness is associated with decreased adjusted odds of 30-day mortality in this group (and only in this group).

Similarly, our observed inverse relationship between increased time delay and 30-day mortality among CT and GI surgical service patients is also likely reflective of lower acuity patients. We anticipated that decreased time delay (0-12 hours) would be reflective of greater patient acuity (supported by our findings that decreased time delay is associated with increased odds of ICU transfer and 30-day mortality). However, our findings also suggest that increased time delay (>48 hours) is similarly representative of lower patient acuity and therefore an imperfect measure of discontinuity and/or harmful delays in care during IHT (see limitations below).

Our study is subject to several limitations. This is a single site study; given known variation in transfer practices between hospitals,3 it is possible that our findings are not generalizable. However, given similar existing data on patients admitted through the ER, it is likely our findings may be reflective of IHT to similar tertiary referral hospitals. Second, although we adjusted for patient characteristics, there remains the possibility of unmeasured confounding and other bias that account for our results, as discussed. Third, although the definition of “busyness” used in this study was chosen based on prior data demonstrating an effect on patient outcomes,10 we did not include other measures of busyness that may influence outcomes of transferred patients such as overall team census or hospital busyness. However, the workload associated with a high volume of patient admissions and discharges is arguably a greater reflection of “work compression” for the admitting team compared with overall team census, which may reflect a more static workload with less impact on the care of a newly transferred patient. Also, although hospital census may influence the ability to transfer (ie, lower volume of transferred patients during times of high hospital census), this likely has less of an impact on the direct care of transferred patients than the admitting team’s workload. It is more likely that it would serve as a confounder (eg, sicker patients are accepted for transfer despite high hospital census, while lower risk patients are not).

Nevertheless, future studies should further evaluate the association with other measures of busyness/workload and outcomes of transferred patients. Lastly, though we anticipated time delay between transfer acceptance and arrival would be correlated with patient acuity, we hypothesized that longer delay might affect patient continuity and communication and impact patient outcomes. However, our results demonstrate that our measurement of this variable was unsuccessful in unraveling patient acuity from our intended evaluation of these vulnerable aspects of IHT. It is likely that a more detailed evaluation is required to explore potential challenges more fully that may occur with greater time delays (eg, suboptimal communication regarding changes in clinical status during this time period, delays in treatment). Similarly, though our study evaluates the association between nighttime and weekend transfer (and the interaction between these) with patient outcomes, we did not evaluate other intermediate outcomes that may be more affected by the timing of transfer, such as diagnostic errors or delays in procedural care, which warrant further investigation. We do not directly examine the underlying reasons that explain our observed associations, and thus more research is needed to identify these as well as design and evaluate solutions.

Collectively, our findings suggest that high acuity patients in need of procedural care experience worse outcomes during off-peak times of transfer, and during times of high care-team workload. Though further research is needed to identify underlying reasons to explain our findings, both the timing of patient transfer (when modifiable) and workload of the team caring for the patient on arrival may serve as potential targets for interventions to improve the quality and safety of IHT for patients at greatest risk.

 

 

Disclosures

Dr. Mueller and Dr. Schnipper have nothing to disclose. Ms. Fiskio has nothing to disclose. Dr. Schnipper is the recipient of grant funding from Mallinckrodt Pharmaceuticals to conduct an investigator-initiated study of predictors and impact of opioid-related adverse drug events.

References

1. Iwashyna TJ. The incomplete infrastructure for interhospital patient transfer. Crit Care Med. 2012;40(8):2470-2478. https://doi.org/10.1097/CCM.0b013e318254516f.
2. Mueller SK, Shannon E, Dalal A, Schnipper JL, Dykes P. Patient and physician experience with interhospital transfer: a qualitative study. J Patient Saf. 2018. https://doi.org/10.1097/PTS.0000000000000501.
3. Mueller SK, Zheng J, Orav EJ, Schnipper JL. Rates, predictors and variability of interhospital transfers: a national evaluation. J Hosp Med. 2017;12(6):435-442. https://doi.org/10.12788/jhm.2747.
4. Bosk EA, Veinot T, Iwashyna TJ. Which patients and where: a qualitative study of patient transfers from community hospitals. Med Care. 2011;49(6):592-598. https://doi.org/10.1097/MLR.0b013e31820fb71b.
5. Sokol-Hessner L, White AA, Davis KF, Herzig SJ, Hohmann SF. Interhospital transfer patients discharged by academic hospitalists and general internists: characteristics and outcomes. J Hosp Med. 2016;11(4):245-250. https://doi.org/10.1002/jhm.2515.
6. Mueller S, Zheng J, Orav EJ, Schnipper JL. Inter-hospital transfer and patient outcomes: a retrospective cohort study. BMJ Qual Saf. 2018. https://doi.org/10.1136/bmjqs-2018-008087.
7. Mueller SK, Schnipper JL. Physician perspectives on interhospital transfers. J Patient Saf. 2016. https://doi.org/10.1097/PTS.0000000000000312.
8. Research Patient Data Registry (RPDR). http://rc.partners.org/rpdr. Accessed April 20, 2018.
9. Bell CM, Redelmeier DA. Mortality among patients admitted to hospitals on weekends as compared with weekdays. N Engl J Med. 2001;345(9):663-668. https://doi.org/10.1056/NEJMsa003376.
10. Mueller SK, Donze J, Schnipper JL. Intern workload and discontinuity of care on 30-day readmission. Am J Med. 2013;126(1):81-88. https://doi.org/10.1016/j.amjmed.2012.09.003.
11. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.
12. Ananthakrishnan AN, McGinley EL, Saeian K. Outcomes of weekend admissions for upper gastrointestinal hemorrhage: a nationwide analysis. Clin Gastroenterol Hepatol. 2009;7(3):296-302.e1. https://doi.org/10.1016/j.cgh.2008.08.013.
13. Deshmukh A, Pant S, Kumar G, Bursac Z, Paydak H, Mehta JL. Comparison of outcomes of weekend versus weekday admissions for atrial fibrillation. Am J Cardiol. 2012;110(2):208-211. https://doi.org/10.1016/j.amjcard.2012.03.011.
14. Clarke MS, Wills RA, Bowman RV, et al. Exploratory study of the ‘weekend effect’ for acute medical admissions to public hospitals in Queensland, Australia. Intern Med J. 2010;40(11):777-783. https://doi.org/10.1111/j.1445-5994.2009.02067.x.

Journal of Hospital Medicine. 2019;14(8):486-491. © 2019 Society of Hospital Medicine

Correspondence: Stephanie Mueller, MD, MPH; E-mail: smueller1@bwh.harvard.edu; Telephone: 617-278-0628

Critical Errors in Inhaler Technique among Children Hospitalized with Asthma

Many studies have shown that improved control can be achieved for most children with asthma if inhaled medications are taken correctly and adequately.1-3 Drug delivery studies have shown that the bioavailability of medication delivered by pressurized metered-dose inhaler (MDI) improves from 34% to 83% with the addition of a spacer device. This difference is largely due to decreased oropharyngeal deposition,1,4,5 and the use of a spacer with proper technique has therefore been recommended for all pediatric patients.1,6

Poor inhaler technique is common among children.1,7 Previous studies of children with asthma have evaluated inhaler technique, primarily in the outpatient and community settings, and reported variable rates of error (from 45% to >90%).8,9 No studies have evaluated inhaler technique among children hospitalized with asthma. As these children represent a particularly high-risk group for morbidity and mortality,10,11 the objectives of this study were to assess errors in inhaler technique in hospitalized asthmatic children and to identify risk factors for improper use.

METHODS

As part of a larger interventional study, we conducted a prospective cross-sectional study at a tertiary urban children’s hospital. We enrolled a convenience sample of children aged 2-16 years admitted to the inpatient ward with an asthma exacerbation Monday-Friday from 8 AM to 6 PM. Participants were required to have a diagnosis of asthma (either an established diagnosis from their primary care provider or meeting National Heart, Lung, and Blood Institute [NHLBI] criteria1), have a consenting adult available, and speak English. Patients were excluded if they had a codiagnosis of an additional respiratory disease (eg, pneumonia), cardiac disease, or sickle cell anemia. The Institutional Review Board approved this study.
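The stated inclusion and exclusion criteria amount to a simple screening rule. As an illustration only (this is not the study team’s tooling, and every field name below is hypothetical), the rule can be expressed as:

```python
# Hypothetical screening check mirroring the stated criteria; not the
# authors' code, and all field names are invented for illustration.

def eligible(pt: dict) -> bool:
    included = (
        2 <= pt["age_years"] <= 16
        and pt["asthma_diagnosis"]       # PCP diagnosis or NHLBI criteria
        and pt["consenting_adult"]       # consenting adult available
        and pt["speaks_english"]
    )
    excluded = (
        pt["other_respiratory_disease"]  # eg, pneumonia
        or pt["cardiac_disease"]
        or pt["sickle_cell_anemia"]
    )
    return included and not excluded

print(eligible({"age_years": 7, "asthma_diagnosis": True,
                "consenting_adult": True, "speaks_english": True,
                "other_respiratory_disease": False,
                "cardiac_disease": False,
                "sickle_cell_anemia": False}))  # True
```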

We asked caregivers (or children >10 years old, if they independently used their inhaler) to demonstrate their typical home inhaler technique using a spacer with mask (SM), spacer with mouthpiece (SMP), or no spacer, per their usual home practice. Inhaler technique was scored using a previously validated asthma checklist (Table 1).12 Certain steps in the checklist were identified as critical: (Step 1) removing the cap, (Step 3) attaching the inhaler to a spacer, (Step 7) taking six breaths (SM), and (Step 9) holding the breath for five seconds (SMP). Only caregivers were asked to complete questionnaires assessing their health literacy (Brief Health Literacy Screen [BHLS]), confidence (Parent Asthma Management Self-Efficacy scale [PAMSE]), and barriers to managing their child’s asthma (Barriers to Asthma Care). Demographic and medical history information was extracted from the medical chart.



Inhaler technique was evaluated in two ways by comparing: (1) patients who missed at least one critical step with those who missed none and (2) patients with an asthma checklist score <7 versus ≥7. While there is considerable variability in how inhaler technique has been measured in past studies, these two markers (75% of steps and critical errors) were the most common.8
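To make the two outcome definitions concrete, here is a minimal sketch of how each participant’s record could be dichotomized (field names are hypothetical; this is not the study’s code):

```python
# Derive the two binary outcomes described above from one participant's
# checklist data; "checklist_score" and "critical_steps_missed" are
# hypothetical field names.

def derive_outcomes(participant: dict) -> dict:
    return {
        "missed_critical_step": participant["critical_steps_missed"] >= 1,
        "low_checklist_score": participant["checklist_score"] < 7,
    }

print(derive_outcomes({"checklist_score": 6, "critical_steps_missed": 0}))
# {'missed_critical_step': False, 'low_checklist_score': True}
```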

We assessed a number of variables to evaluate their association with improper inhaler technique. For categorical variables, the association with each outcome was evaluated using relative risks (RRs). Bivariate P-values were calculated using chi-square or Fisher’s exact tests, as appropriate. Continuous variables were assessed for associations with each outcome using two-sample t-tests. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated using logistic regression. Variables meeting a model entry criterion of P < .10 on univariate tests were entered into a multivariable logistic regression model for each outcome. Full models with all eligible covariates and reduced models selected via manual backward selection were evaluated. Two-sided P-values <.05 were considered statistically significant.
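The sketch below only mirrors the described logic (univariate screen at P < .10, then a multivariable logistic model reporting ORs with 95% CIs) in Python on synthetic data with invented variable names; it is not the study’s analysis code:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

def univariate_p(df: pd.DataFrame, var: str, outcome: str) -> float:
    """Chi-square P-value for a categorical predictor vs. a binary outcome."""
    return chi2_contingency(pd.crosstab(df[var], df[outcome]))[1]

def fit_multivariable(df, candidates, outcome, entry_p=0.10):
    """Enter predictors meeting the P < .10 univariate criterion into a
    logistic regression; return ORs with 95% CIs and P-values."""
    keep = [v for v in candidates if univariate_p(df, v, outcome) < entry_p]
    X = sm.add_constant(df[keep].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    ci = np.exp(fit.conf_int())
    return pd.DataFrame({"OR": np.exp(fit.params),
                         "CI 2.5%": ci[0], "CI 97.5%": ci[1],
                         "P": fit.pvalues})

# Synthetic data only; values carry no clinical meaning.
rng = np.random.default_rng(0)
df = pd.DataFrame({"used_mouthpiece": rng.integers(0, 2, 113),
                   "no_spacer": rng.integers(0, 2, 113),
                   "missed_critical_step": rng.integers(0, 2, 113)})
print(fit_multivariable(df, ["used_mouthpiece", "no_spacer"],
                        "missed_critical_step"))
```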

RESULTS

Participants

From October 2016 to June 2017, 380 patients were assessed for participation; 215 were excluded for not having a parent available (59%), not speaking English (27%), or not having an asthma diagnosis (eg, viral wheezing; 14%), and 52 (14%) declined to participate. Therefore, a total of 113 participants were enrolled (380 - 215 - 52 = 113), with demonstrations provided by 100 caregivers and 13 children. The mean age of the patients was 6.6 ± 3.4 years, and over half (55%) of the participants had uncontrolled asthma (NHLBI criteria1).

Errors in Inhaler Technique

The mean asthma checklist score was 6.7 (maximum score of 10 for SM and 12 for SMP). A third (35%) scored <7 on the asthma checklist, and 42% of participants missed at least one critical step. Overall, children who missed a critical step were significantly older (7.8 [6.7-8.9] vs 5.8 [5.1-6.5] years; P = .002). More participants missed a critical step with the SMP than with the SM (75% [51%-90%] vs 36% [27%-46%]; P = .003), and SMP use was the most prominent factor for missing a critical step in the adjusted regression analysis (OR 6.95 [1.71-28.23]; P = .007). The most commonly missed step for the SM was breathing normally for 30 seconds; for the SMP, the most commonly missed steps were breathing out fully and breathing away from the spacer (Table 1). Twenty participants (18%) did not use a spacer device; these patients were older than those who did (mean age 8.5 [6.7-10.4] vs 6.2 [5.6-6.9] years; P = .005), but no other significant differences were identified.

Demographic, Medical History, and Socioeconomic Characteristics

Overall, race, ethnicity, and insurance status did not vary significantly by asthma checklist score (≥7 vs <7) or by missing a critical step. Patients in the SM group who had received inpatient asthma education during a previous admission, had a history of pediatric intensive care unit (PICU) admission, or had been prescribed a daily controller were less likely to miss a critical step (Table 2). Parental education level varied, with 33% having a high school degree or less, but was not associated with asthma checklist score or with missing critical steps. Parental health literacy (BHLS) and parental confidence (PAMSE) were not significantly associated with inhaler proficiency. However, transportation-related barriers were more common among patients with checklist scores <7 and among those who missed critical steps (OR 1.62 [1.06-2.46]; P = .02).

DISCUSSION

Nearly half of the participants in this study missed at least one critical step in inhaler use. In addition, 18% did not use a spacer when demonstrating their inhaler technique. Despite robust studies demonstrating that asthma education can improve both asthma skills and clinical outcomes,13 our study demonstrates that a large gap remains in proper inhaler technique among asthmatic patients presenting for inpatient care. Specifically, in the mouthpiece group, steps related to breathing technique were the most commonly missed. Our results also show that inhaler technique errors were most prominent in the adolescent population, possibly coinciding with the transition to a mouthpiece and greater independence in medication administration. Adolescents may therefore be a high-impact population on which to focus inpatient asthma education. Additionally, we found that a previous PICU admission and previous inpatient asthma education were associated with missing fewer critical steps in inhaler technique. This finding is consistent with that of another study, conducted in the emergency department, which found that previous hospitalization for asthma was inversely related to improper inhaler use (RR 0.55, 95% CI 0.36-0.84).14 This supports the idea that, when provided, inpatient education can improve inhaler administration skills.

Previous studies conducted in the outpatient setting have demonstrated variable rates of inhaler skill, from 0% to approximately 89% of children performing all steps of inhalation correctly.8 This wide range may be related to variations in the number and definition of critical steps between studies. In our study, we highlighted removing the cap, attaching a spacer, and adequate breathing technique as critical steps because failure to complete them would significantly reduce lung deposition of medication. While past studies have evaluated MDI technique with these devices, our study is the first to report differences in technique problems between the SM and the SMP. As asthma educational interventions are developed and implemented, it is important to stress that different steps in inhaler technique are being missed by those using a mask versus a mouthpiece.

The limitations of this study include its conduct at a single center with a primarily urban and English-speaking population; however, the study population reflects the racial diversity of pediatric asthma patients. Further studies may explore the reproducibility of these findings at multiple centers and with non-English-speaking families. This study included younger patients than some previous publications investigating asthma; however, all patients met the criteria for asthma diagnosis, and this age range is reflective of patients presenting for inpatient asthma care. Furthermore, because of our daytime research hours, 59% of exclusions occurred because a primary caregiver was not available. It is possible that these families also have decreased access to inpatient asthma educators and may be another target group for future studies. Finally, a large proportion of parents in our sample had a college education or greater. However, there was no association in our analysis between parental education level and inhaler proficiency.

The findings from this study indicate that continued efforts are needed to ensure that inhaler technique is adequate for all families, regardless of educational status or socioeconomic background, especially for adolescents and in the setting of poor asthma control. Furthermore, our findings support that inhaler technique education may be beneficial in the inpatient setting and that acute care settings can provide a valuable “teachable moment.”14,15

CONCLUSION

Errors in inhaler technique are prevalent in pediatric inpatients with asthma, primarily those using a mouthpiece device. Educational efforts in both inpatient and outpatient settings have the potential to improve drug delivery and therefore asthma control. Inpatient hospitalization may serve as a platform for further studies to investigate innovative educational interventions.

Acknowledgments

The authors thank Tina Carter for her assistance in the recruitment and data collection and Ashley Hull and Susannah Butters for training the study staff on the use of the asthma checklist.

Disclosures

Dr. Gupta receives research grant support from the National Institutes of Health and the United Healthcare Group. Dr. Gupta serves as a consultant for DBV Technology, Aimmune Therapeutics, Kaleo, and BEFORE Brands. Dr. Gupta has received lecture fees/honoraria from the Allergy & Asthma Network and the American College of Allergy, Asthma & Immunology. Dr. Press reports research support from the Chicago Center for Diabetes Translation Research Pilot and Feasibility Grant, the Bucksbaum Institute for Clinical Excellence Pilot Grant Program, the Academy of Distinguished Medical Educators, the Development of Novel Hospital-initiated Care Bundle in Adults Hospitalized for Acute Asthma: the 41st Multicenter Airway Research Collaboration (MARC-41) Study, UCM’s Innovation Grant Program, the University of Chicago-Chapin Hall Joint Research Fund, the NIH/NHLBI Loan Repayment Program, 1 K23 HL118151 01, NIH NHLBI R03 (RFA-HL-18-025), the George and Carol Abramson Pilot Awards, the COPD Foundation Green Shoots Grant, the University of Chicago Women’s Board Grant, NIH NHLBI UG1 (RFA-HL-17-009), and the CTSA Pilot Award, all outside the submitted work. These disclosures have been reported to Dr. Press’s institutional IRB. Additionally, a management plan is on file that details how to address conflicts such as these, which are sources of research support but do not directly support the work at hand. The remaining authors have no conflicts of interest relevant to this article to disclose.

Funding

This study was funded by internal grants from Ann and Robert H. Lurie Children’s Hospital of Chicago. Dr. Press was funded by grant K23HL118151.

References

1. Expert Panel Report 3: guidelines for the diagnosis and management of asthma: full report. Washington, DC: US Department of Health and Human Services, National Institutes of Health, National Heart, Lung, and Blood Institute; 2007.
2. Hekking PP, Wener RR, Amelink M, Zwinderman AH, Bouvy ML, Bel EH. The prevalence of severe refractory asthma. J Allergy Clin Immunol. 2015;135(4):896-902. doi: 10.1016/j.jaci.2014.08.042.
3. Peters SP, Ferguson G, Deniz Y, Reisner C. Uncontrolled asthma: a review of the prevalence, disease burden and options for treatment. Respir Med. 2006;100(7):1139-1151. doi: 10.1016/j.rmed.2006.03.031.
4. Dickens GR, Wermeling DP, Matheny CJ, et al. Pharmacokinetics of flunisolide administered via metered dose inhaler with and without a spacer device and following oral administration. Ann Allergy Asthma Immunol. 2000;84(5):528-532. doi: 10.1016/S1081-1206(10)62517-3.
5. Nikander K, Nicholls C, Denyer J, Pritchard J. The evolution of spacers and valved holding chambers. J Aerosol Med Pulm Drug Deliv. 2014;27(1):S4-S23. doi: 10.1089/jamp.2013.1076.
6. Rubin BK, Fink JB. The delivery of inhaled medication to the young child. Pediatr Clin North Am. 2003;50(3):717-731. doi: 10.1016/S0031-3955(03)00049-X.
7. Roland NJ, Bhalla RK, Earis J. The local side effects of inhaled corticosteroids: current understanding and review of the literature. Chest. 2004;126(1):213-219. doi: 10.1378/chest.126.1.213.
8. Gillette C, Rockich-Winston N, Kuhn JA, Flesher S, Shepherd M. Inhaler technique in children with asthma: a systematic review. Acad Pediatr. 2016;16(7):605-615. doi: 10.1016/j.acap.2016.04.006.
9. Pappalardo AA, Karavolos K, Martin MA. What really happens in the home: the medication environment of urban, minority youth. J Allergy Clin Immunol Pract. 2017;5(3):764-770. doi: 10.1016/j.jaip.2016.09.046.
10. Crane J, Pearce N, Burgess C, Woodman K, Robson B, Beasley R. Markers of risk of asthma death or readmission in the 12 months following a hospital admission for asthma. Int J Epidemiol. 1992;21(4):737-744. doi: 10.1093/ije/21.4.737.
11. Turner MO, Noertjojo K, Vedal S, Bai T, Crump S, Fitzgerald JM. Risk factors for near-fatal asthma. A case-control study in hospitalized patients with asthma. Am J Respir Crit Care Med. 1998;157(6 Pt 1):1804-1809. doi: 10.1164/ajrccm.157.6.9708092.
12. Press VG, Arora VM, Shah LM, et al. Misuse of respiratory inhalers in hospitalized patients with asthma or COPD. J Gen Intern Med. 2011;26(6):635-642. doi: 10.1007/s11606-010-1624-2.
13. Guevara JP, Wolf FM, Grum CM, Clark NM. Effects of educational interventions for self management of asthma in children and adolescents: systematic review and meta-analysis. BMJ. 2003;326(7402):1308-1309. doi: 10.1136/bmj.326.7402.1308.
14. Scarfone RJ, Capraro GA, Zorc JJ, Zhao H. Demonstrated use of metered-dose inhalers and peak flow meters by children and adolescents with acute asthma exacerbations. Arch Pediatr Adolesc Med. 2002;156(4):378-383. doi: 10.1001/archpedi.156.4.378.
15. Sockrider MM, Abramson S, Brooks E, et al. Delivering tailored asthma family education in a pediatric emergency department setting: a pilot study. Pediatrics. 2006;117(4 Pt 2):S135-S144. doi: 10.1542/peds.2005-2000K.

Journal of Hospital Medicine. 2019;14(6):361-365. Published online first April 8, 2019. © 2019 Society of Hospital Medicine

Correspondence: Waheeda Samady, MD; E-mail: wsamady@luriechildrens.org; Telephone: 312-227-4000

The Current State of Advanced Practice Provider Fellowships in Hospital Medicine: A Survey of Program Directors

Postgraduate training for physician assistants (PAs) and nurse practitioners (NPs) is a rapidly evolving field. It has been estimated that the number of these advanced practice providers (APPs) almost doubled between 2000 and 2016 (from 15.3 to 28.2 per 100 physicians) and is expected to double again by 2030.1 As APPs continue to become a progressively larger part of the healthcare workforce, medical organizations are seeking more comprehensive strategies to train and mentor them.2 This has led to the development of formal postgraduate programs, often called APP fellowships.

Historically, postgraduate APP fellowships have functioned to help bridge the gap in clinical practice experience between physicians and APPs.3 This gap is evident in hours of clinical training. Whereas NPs are generally expected to complete 500-1,500 hours of clinical practice before graduating,4 and PAs are expected to complete 2,000 hours,5 most physicians will complete over 15,000 hours of clinical training by the end of residency.6 As increasing patient complexity continues to challenge the healthcare workforce,7 both the NP and the PA leadership have recommended increased training of graduates and outcome studies of formal postgraduate fellowships.8,9 In 2007, there were over 60 of these programs in the United States,10 most of them offering training in surgical specialties.

First described in 2010 by the Mayo Clinic,11 APP fellowships in hospital medicine are also being developed. These programs are built to improve the training of nonphysician hospitalists, who often work independently12 and manage medically complex patients.13 However, little is known about the number or structure of these fellowships. The limited understanding of the current APP fellowship environment is partly due to the lack of an administrative body overseeing these programs.14 The Accreditation Review Commission on Education for the Physician Assistant (ARC-PA) pioneered a model in 2007 for postgraduate PA programs, but it has been held in abeyance since 2014.15 Both the American Nurses Credentialing Center and the National Nurse Practitioner Residency and Fellowship Training Consortium have fellowship accreditation review processes, but they are not specific to hospital medicine.16 The Society of Hospital Medicine (SHM) has several resources for the training of APPs;17 however, it neither reviews nor accredits fellowship programs. Without standards, guidelines, or active accrediting bodies, APP fellowships in hospital medicine are poorly understood and are of unknown efficacy. The purpose of this study was to identify and describe the active APP fellowships in hospital medicine.

METHODS

This was a cross-sectional study of all APP adult and pediatric fellowships in hospital medicine, in the United States, that were identifiable through May 2018. Multiple methods were used to identify all active fellowships. First, all training programs offering a Hospital Medicine Fellowship in the ARC-PA and Association of Postgraduate PA Programs databases were noted. Second, questionnaires were given out at the NP/PA forum at the national SHM conference in 2018 to gather information on existing APP fellowships. Third, similar online requests to identify known programs were posted to the SHM web forum Hospital Medicine Exchange (HMX). Fourth, Internet searches were used to discover additional programs. Once those fellowships were identified, surveys were sent to their program directors (PDs). These surveys not only asked the PDs about their fellowship but also asked them to identify additional APP fellowships beyond those that we had captured. Once additional programs were identified, a second round of surveys was sent to their PDs. This was performed in an iterative fashion until no additional fellowships were discovered.
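This iterative identification is essentially a snowball search that stops when a wave of surveys yields no new programs. Below is a toy sketch of the stopping logic only; the program names and referral lookup are invented and nothing here reflects the actual survey tooling:

```python
# Toy sketch of the iterative ("snowball") identification process.

def snowball(seeds: set, get_referrals) -> set:
    """Survey PDs in waves until no additional fellowships are named."""
    known, frontier = set(seeds), set(seeds)
    while frontier:
        referred = set()
        for program in frontier:
            referred |= set(get_referrals(program))  # PDs name other programs
        frontier = referred - known  # keep only newly discovered programs
        known |= frontier
    return known

# Invented referral network: A names B; B names C; C names no one new.
referrals = {"A": ["B"], "B": ["C", "A"], "C": []}
print(sorted(snowball({"A"}, lambda p: referrals[p])))  # ['A', 'B', 'C']
```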

The survey tool was developed and validated internally in the AAMC Survey Development style18 and was informed by prior validated surveys of postgraduate medical fellowships.10,19-21 Each question was developed by a team with expertise in survey design (Wright and Tackett), and two members of the survey design team were themselves PDs of APP fellowships in hospital medicine (Kisuule and Franco). The survey was revised iteratively by the team on the basis of meetings and pilot testing with PDs of other programs. All qualitative or descriptive questions had a free-response option available to allow PDs to answer the survey accurately and exhaustively. The final version of the survey was approved by consensus of all authors. It consisted of 25 multiple-choice questions created to gather information about the following key areas of APP hospital medicine fellowships: fellowship and learner characteristics, program rationales, curricula, and methods of fellow assessment.

A web-based survey platform (Qualtrics) was used to distribute the questionnaire to the PDs by e-mail. Follow-up e-mail reminders were sent to all nonresponders to encourage full participation. Survey completion was voluntary; no financial incentives or gifts were offered. IRB approval was obtained at Johns Hopkins Bayview (IRB number 00181629). Descriptive statistics (proportions, means, and ranges, as appropriate) were calculated for all variables. Stata 13 (StataCorp. 2013. Stata Statistical Software: Release 13. College Station, TX: StataCorp LP) was used for data analysis.
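
The descriptive analysis itself is straightforward. The study used Stata 13; the sketch below shows equivalent summaries in Python with pandas, using invented program data and column names purely for illustration.

```python
# Minimal sketch of descriptive statistics like those reported here
# (proportions, means, and ranges). The data and column names are invented;
# the actual analysis was performed in Stata 13.
import pandas as pd

responses = pd.DataFrame({
    "program":           ["A", "B", "C", "D", "E"],
    "duration_months":   [12, 12, 6, 18, 12],
    "hospital_beds":     [213, 450, 500, 900, 417],
    "accepts_np_and_pa": [True, True, False, True, True],
})

# Proportion for a categorical (yes/no) variable
prop = responses["accepts_np_and_pa"].mean()
# Mean and range for a continuous variable
beds_mean = responses["hospital_beds"].mean()
beds_range = (responses["hospital_beds"].min(), responses["hospital_beds"].max())

print(f"Accepts both NPs and PAs: {prop:.0%}")
print(f"Main hospital beds: mean {beds_mean:.0f}, range {beds_range[0]}-{beds_range[1]}")
```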

RESULTS

In total, 11 fellowships were identified using our multimethod approach. We found four programs (36%) through existing online databases, two (18%) through the SHM questionnaire and HMX forum, and three (27%) through internet searches; the remaining two (18%) were referred to us by other PDs who were surveyed. Of the programs surveyed, 10 were adult programs and one was a pediatric program. Surveys were sent to the PDs of the 11 fellowships, and all but one (10/11, 91%) responded. Respondent programs were given alphabetical designations A through J (Table).

Fellowship and Individual Characteristics

Most programs have been in existence for five years or fewer. Eighty percent of the programs are about one year in duration; the two outliers last six months and 18 months. The main hospitals where training occurs have a mean of 496 beds (range, 213 to 900). Ninety percent of the hospitals also have physician residency training programs. Sixty percent of programs enroll two to four fellows per year, while 40% enroll five or more. Salaries range from $55,000 to more than $70,000, and half the programs pay more than $65,000.

The majority of fellows accepted into APP fellowships in hospital medicine are women. Eighty percent of fellows are 26-30 years old, and 90% of fellows have been out of NP or PA school for one year or less. Both NP and PA applicants are accepted in 80% of fellowships.

Program Rationales

All programs reported that training and retaining applicants is the main driver for developing their fellowship, and 50% of them offer financial incentives for retention upon successful completion of the program. Forty percent of PDs stated that there is an implicit or explicit understanding that successful completion of the fellowship would result in further employment. Over the last five years, 89% (range: 71%-100%) of graduates were asked to remain for a full-time position after program completion.

In addition to training and retention, building an interprofessional team (50%), managing patient volume (30%), and reducing overhead (20%) were also reported as rationales for program development. The majority of programs (80%) have fellows bill for clinical services, and five of those eight programs begin billing only after their fellows become more clinically competent.

Curricula

Of the nine adult programs, 67% teach explicitly to SHM core competencies and 33% send their fellows to the SHM NP/PA Boot Camp. Thirty percent of fellowships partner formally with either a physician residency or a local PA program to develop educational content. Six of the nine programs with active physician residencies, including the pediatric fellowship, offer shared educational experiences for the residents and APPs.

There are notable differences in clinical rotations between the programs (Figure 1). No single rotation is universally required, although general hospital internal medicine is required in all adult fellowships. The majority of programs (80%) offer at least one elective. Six programs reported mandatory rotations outside the department of medicine, most commonly neurology or the stroke service (four programs). Only one program reported exclusively general medicine rotations, with no subspecialty electives.



There are also differences between programs with respect to educational experiences and learning formats (Figure 2). Each fellowship takes a unique approach to clinical instruction; teaching rounds and lecture attendance are the only experiences that are mandatory across the board. Grand rounds are available, but not required, in all programs. Ninety percent of programs offer or require fellow presentations, journal clubs, reading assignments, or scholarly projects. Fellow presentations (70%) and journal club attendance (60%) are required in more than half the programs; however, reading assignments (30%) and scholarly projects (20%) are rarely required.

Methods of Fellow Assessment

Each program surveyed has a unique method of fellow assessment. Ninety percent of the programs use more than one method to assess their fellows. Faculty reviews are most commonly used and are conducted in all rotations in 80% of fellowships. Both self-assessment exercises and written examinations are used in some rotations by the majority of programs. Capstone projects are required infrequently (30%).

DISCUSSION

We found several commonalities between the fellowships surveyed. Many of the program characteristics, such as years in operation, salary, duration, and lack of accreditation, are quite similar. Most fellowships also have a similar rationale for building their programs and use resources from the SHM to inform their curricula. Fellows, on average, share several demographic characteristics, such as age, gender, and time out of schooling. Conversely, we found wide variability in clinical rotations, the general teaching structure, and methods of fellow evaluation.

There have been several publications detailing successful individual APP fellowships in medical subspecialties,22 psychiatry,23 and surgical specialties,24 all of which describe the benefits to the institution. One study found that physician hospitalists have a poor understanding of the training PAs undergo and would favor a standardized curriculum for PA hospitalists.25 Another study compared all PA postgraduate training programs in emergency medicine;19 it also described a small number of relatively young programs with variable curricula and a need for standardization. Yet another paper10 surveyed postgraduate PA programs across all specialties; however, that study only captured two hospital medicine programs, and it was not focused on several key areas studied in this paper—such as the program rationale, curricular elements, and assessment.

It is noteworthy that every program surveyed was created with training and retention in mind, rather than other factors like decreasing overhead or managing patient volume. Training one’s own APPs so that they can learn on the job, come to understand expectations within a group, and witness the culture is extremely valuable. From a patient safety standpoint, it has been documented that physician hospitalists straight out of residency have a higher patient mortality compared with more experienced providers.26 Given the findings that, on a national level, the majority of hospitalist NPs and PAs practice autonomously or somewhat autonomously,12 it is reasonable to assume that similar trends of more experienced providers delivering safer care would be expected for APPs, but this remains speculative. From a retention standpoint, it has been well described that high APP turnover is often due to decreased feelings of competence and confidence during the transition from trainee to medical provider.27 APPs who have completed fellowships feel more confident and able to succeed in their field.28 To this point, in one survey of hospitalist PAs, almost all reported that they would have been interested in completing a fellowship, even if it meant a lower initial salary.29

Despite having the same general goals and using similar national resources, our study reveals that APP fellows are trained and assessed very differently between programs. This might represent an area of future growth in the field of hospitalist APP education. For physician learning, competency-based medical education (CBME) has emerged as a learner-centric, outcomes-based model of teaching and assessment that emphasizes mastery of skills and progression through milestones.30 Both the ACGME31 and the SHM32 have described core competencies that provide a framework within CBME for determining readiness for independent practice. While we were not surprised to find that each fellowship has its own unique method of determining readiness for practice, these findings suggest that graduates from different programs likely have very different skill sets and aptitude levels. In the future, an active accrediting body could offer guidance in defining hospitalist APP core competencies and help standardize education.

Several limitations to this study should be considered. While we used multiple strategies to locate as many fellowships as possible, it is unlikely that we successfully captured all existing programs, and new programs are being developed annually. We also relied on self-reported data from PDs. While we would expect PDs to provide accurate data, we could not externally validate their answers. Additionally, although our survey tool was reviewed extensively and validated internally, it was developed de novo for this study.

CONCLUSION

APP fellowships in hospital medicine have experienced marked growth since the first program was described in 2010. The majority of programs are 12 months long, operate in existing teaching centers, and are intended to further enhance the training and retention of newly graduated PAs and NPs. Despite their similarities, fellowships have striking variability in their methods of teaching and assessing their learners. Best practices have yet to be identified, and further study is required to determine how to standardize curricula across the board.

Acknowledgments

The authors thank all program directors who responded to the survey.

Disclosures

The authors report no conflicts of interest.

Funding

This project was supported by the Johns Hopkins School of Medicine Biostatistics, Epidemiology and Data Management (BEAD) Core. Dr. Wright is the Anne Gaines and G. Thomas Miller Professor of Medicine, which is supported through the Johns Hopkins Center for Innovative Medicine.

References

1. Auerbach DI, Staiger DO, Buerhaus PI. Growing ranks of advanced practice clinicians — implications for the physician workforce. N Engl J Med. 2018;378(25):2358-2360. doi: 10.1056/nejmp1801869.
2. Darves B. Midlevels make a rocky entrance into hospital medicine. Todays Hospitalist. 2007;5(1):28-32.
3. Polansky M. A historical perspective on postgraduate physician assistant education and the association of postgraduate physician assistant programs. J Physician Assist Educ. 2007;18(3):100-108. doi: 10.1097/01367895-200718030-00014.
4. FNP & AGNP Certification Candidate Handbook. The American Academy of Nurse Practitioners National Certification Board, Inc; 2018. https://www.aanpcert.org/resource/documents/AGNP FNP Candidate Handbook.pdf. Accessed December 20, 2018.
5. Become a PA: Getting Your Prerequisites and Certification. AAPA. https://www.aapa.org/career-central/become-a-pa/. Accessed December 20, 2018.
6. ACGME Common Program Requirements. ACGME; 2017. https://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/CPRs_2017-07-01.pdf. Accessed December 20, 2018.
7. Committee on the Learning Health Care System in America; Institute of Medicine; Smith MD, Smith M, Saunders R, Stuckhardt L, McGinnis JM. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academies Press; 2013.
8. The Future of Nursing: Leading Change, Advancing Health. The National Academies Press; 2014. https://www.nap.edu/read/12956/chapter/1. Accessed December 16, 2018.
9. Hussaini SS, Bushardt RL, Gonsalves WC, et al. Accreditation and implications of clinical postgraduate PA training programs. JAAPA. 2016;29(5):1-7. doi: 10.1097/01.jaa.0000482298.17821.fb.
10. Polansky M, Garver GJH, Hilton G. Postgraduate clinical education of physician assistants. J Physician Assist Educ. 2012;23(1):39-45. doi: 10.1097/01367895-201223010-00008.
11. Will KK, Budavari AI, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5(2):94-98. doi: 10.1002/jhm.619.
12. Kartha A, Restuccia JD, Burgess JF, et al. Nurse practitioner and physician assistant scope of practice in 118 acute care hospitals. J Hosp Med. 2014;9(10):615-620. doi: 10.1002/jhm.2231.
13. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist-physician assistant model vs a traditional resident-based model. J Hosp Med. 2011;6(3):122-130. doi: 10.1002/jhm.826.
14. Hussaini SS, Bushardt RL, Gonsalves WC, et al. Accreditation and implications of clinical postgraduate PA training programs. JAAPA. 2016;29(5):1-7. doi: 10.1097/01.jaa.0000482298.17821.fb.
15. Postgraduate Programs. ARC-PA. http://www.arc-pa.org/accreditation/postgraduate-programs. Accessed September 13, 2018.
16. National Nurse Practitioner Residency & Fellowship Training Consortium: Mission. https://www.nppostgradtraining.com/About-Us/Mission. Accessed September 27, 2018.
17. NP/PA Boot Camp. Society of Hospital Medicine. http://www.hospitalmedicine.org/events/nppa-boot-camp. Accessed September 13, 2018.
18. Gehlbach H, Artino AR Jr, Durning SJ. AM last page: survey development guidance for medical education researchers. Acad Med. 2010;85(5):925. doi: 10.1097/ACM.0b013e3181dd3e88.
19. Kraus C, Carlisle T, Carney D. Emergency medicine physician assistant (EMPA) post-graduate training programs: program characteristics and training curricula. West J Emerg Med. 2018;19(5):803-807. doi: 10.5811/westjem.2018.6.37892.
20. Shah NH, Rhim HJH, Maniscalco J, Wilson K, Rassbach C. The current state of pediatric hospital medicine fellowships: a survey of program directors. J Hosp Med. 2016;11(5):324-328. doi: 10.1002/jhm.2571.
21. Thompson BM, Searle NS, Gruppen LD, Hatem CJ, Nelson E. A national survey of medical education fellowships. Med Educ Online. 2011;16(1):5642. doi: 10.3402/meo.v16i0.5642.
22. Hooker R. A physician assistant rheumatology fellowship. JAAPA. 2013;26(6):49-52. doi: 10.1097/01.jaa.0000430346.04435.e4.
23. Keizer T, Trangle M. The benefits of a physician assistant and/or nurse practitioner psychiatric postgraduate training program. Acad Psychiatry. 2015;39(6):691-694. doi: 10.1007/s40596-015-0331-z.
24. Miller A, Weiss J, Hill V, Lindaman K, Emory C. Implementation of a postgraduate orthopaedic physician assistant fellowship for improved specialty training. JBJS Journal of Orthopaedics for Physician Assistants. 2017;1. doi: 10.2106/jbjs.jopa.17.00021.
25. Sharma P, Brooks M, Roomiany P, Verma L, Criscione-Schreiber L. Physician assistant student training for the inpatient setting. J Physician Assist Educ. 2017;28(4):189-195. doi: 10.1097/jpa.0000000000000174.
26. Goodwin JS, Salameh H, Zhou J, Singh S, Kuo Y-F, Nattinger AB. Association of hospitalist years of experience with mortality in the hospitalized Medicare population. JAMA Intern Med. 2018;178(2):196. doi: 10.1001/jamainternmed.2017.7049.
27. Barnes H. Exploring the factors that influence nurse practitioner role transition. J Nurse Pract. 2015;11(2):178-183. doi: 10.1016/j.nurpra.2014.11.004.
28. Will K, Williams J, Hilton G, Wilson L, Geyer H. Perceived efficacy and utility of postgraduate physician assistant training programs. JAAPA. 2016;29(3):46-48. doi: 10.1097/01.jaa.0000480569.39885.c8.
29. Torok H, Lackner C, Landis R, Wright S. Learning needs of physician assistants working in hospital medicine. J Hosp Med. 2011;7(3):190-194. doi: 10.1002/jhm.1001.
30. ten Cate O. Competency-based postgraduate medical education: past, present and future. GMS J Med Educ. 2017;34(5). doi: 10.3205/zma001146.
31. Exploring the ACGME Core Competencies (Part 1 of 7). NEJM Knowledge+. https://knowledgeplus.nejm.org/blog/exploring-acgme-core-competencies/. Accessed October 24, 2018.
32. Core Competencies. Society of Hospital Medicine. http://www.hospitalmedicine.org/professional-development/core-competencies/. Accessed October 24, 2018.


Issue
Journal of Hospital Medicine 14(7)
Issue
Journal of Hospital Medicine 14(7)
Page Number
401-406. Published online first April 8, 2019.
Page Number
401-406. Published online first April 8, 2019.
Publications
Publications
Topics
Article Type
Sections
Article Source

© 2019 Society of Hospital Medicine

Disallow All Ads
Correspondence Location
David Klimpl, MD; E-mail: David.klimpl@gmail.com; Telephone: 720-848-4289
Content Gating
Gated (full article locked unless allowed per User)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Gating Strategy
First Peek Free
Article PDF Media
Media Files

Modifiable Factors Associated with Quality of Bowel Preparation Among Hospitalized Patients Undergoing Colonoscopy


Inadequate bowel preparation (IBP) at the time of inpatient colonoscopy is common and associated with increased length of stay and cost of care.1 The factors that contribute to IBP can be categorized as modifiable or nonmodifiable. While many factors have been associated with IBP, prior studies have been limited by small sample sizes or have combined inpatient and outpatient populations, limiting generalizability.1-5 Moreover, most factors associated with IBP, such as socioeconomic status, male gender, increased age, and comorbidities, are nonmodifiable. No studies have explicitly focused on modifiable risk factors, such as medication use or colonoscopy timing, or assessed the potential impact of modifying these factors.

In a large, multihospital system, we examined the frequency of IBP among inpatients undergoing colonoscopy, along with factors associated with it, and attempted to identify modifiable risk factors.

METHODS

After obtaining Cleveland Clinic Institutional Review Board approval, we obtained records of all adult (≥18 years) inpatients undergoing colonoscopy between January 2011 and June 2017. Colonoscopies whose reports lacked a description of bowel preparation quality and those performed in the intensive care unit were excluded. For each patient, we considered only the first inpatient colonoscopy if more than one occurred during the study period.

Potential Predictors of IBP

Demographic data such as patient age, gender, ethnicity, body mass index (BMI), and insurance/payor status were obtained from the electronic health record (EHR). International Classification of Diseases, 9th and 10th Revision, Clinical Modification (ICD-9/10-CM) codes were used to identify patient comorbidities, including diabetes, coronary artery disease, heart failure, cirrhosis, gastroparesis, hypothyroidism, inflammatory bowel disease, constipation, stroke, dementia, dysphagia, and nausea/vomiting. Use of opioid medications within three days before colonoscopy was extracted from the medication administration record. These variables were chosen because they are biologically plausible modifiers of bowel preparation or had previously been assessed in the literature.1-6 The name and volume of bowel preparation (classified as 4 L [GoLytely®] or <4 L [MoviPrep®]), the time of day when the colonoscopy was performed, consumption of a solid diet the day prior to colonoscopy, the type of sedation used (conscious sedation or general anesthesia), and the total colonoscopy time (defined as the time from scope insertion to removal) were recorded. Hospitalization-related variables, including the number of hospitalizations in the year before the current hospitalization, the year in which the colonoscopy was performed, and the number of days from admission to colonoscopy, were also obtained from the EHR.
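
To illustrate how an exposure flag of this kind can be derived from a medication administration record, the sketch below computes opioid use within three days before colonoscopy. It is a minimal sketch: all table and column names are hypothetical, since the study's actual EHR schema and extraction code are not described.

```python
import pandas as pd

# Hypothetical medication-administration and procedure tables; column
# names are illustrative, not the study's actual EHR schema.
meds = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "drug_class": ["opioid", "statin", "opioid"],
    "admin_time": pd.to_datetime(["2017-01-02", "2017-01-03", "2017-01-01"]),
})
procedures = pd.DataFrame({
    "patient_id": [1, 2],
    "colonoscopy_time": pd.to_datetime(["2017-01-04", "2017-01-06"]),
})

# Join administrations to each patient's colonoscopy and keep doses given
# in the three days before the procedure.
merged = meds.merge(procedures, on="patient_id")
in_window = (
    (merged["admin_time"] >= merged["colonoscopy_time"] - pd.Timedelta(days=3))
    & (merged["admin_time"] < merged["colonoscopy_time"])
)
opioid_within_3d = (
    merged.loc[in_window & (merged["drug_class"] == "opioid"), "patient_id"]
    .drop_duplicates()
)
print(opioid_within_3d.tolist())  # [1]: patient 2's dose falls outside the window
```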


Outcome Measures

An internally validated natural language algorithm using Structured Query Language (SQL) was used to search colonoscopy reports and identify the adequacy of bowel preparation. ProVation® software allows the gastroenterologist to describe bowel preparation using a set of terms in a drop-down menu. In addition to the Aronchick scale (which rates bowel preparation on a five-point scale: “excellent,” “good,” “fair,” “poor,” and “inadequate”), it allows the provider to use terms such as “adequate” or “adequate to detect polyps >5 mm” as well as “unsatisfactory.”7 Mirroring prior literature, bowel preparation quality was dichotomized as “adequate” or “inadequate”: “good” and “excellent” on the Aronchick scale were categorized as adequate, as was the term “adequate” in any form; “fair,” “poor,” or “inadequate” on the Aronchick scale were classified as inadequate, as was the term “unsatisfactory.” We evaluated hospital length of stay (LOS) as a secondary outcome measure.
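
A minimal sketch of this dichotomization is shown below, assuming the preparation descriptor has already been extracted from the report text; the term lists mirror the mapping described above, not the authors' actual SQL.

```python
# Map a bowel-preparation descriptor to adequate/inadequate, mirroring the
# published mapping (Aronchick scale terms plus the free-text variants).
ADEQUATE = {"excellent", "good"}
INADEQUATE = {"fair", "poor", "inadequate", "unsatisfactory"}

def classify_prep(term: str) -> str:
    t = term.strip().lower()
    if t in ADEQUATE or t.startswith("adequate"):
        return "adequate"      # covers "adequate to detect polyps >5 mm"
    if t in INADEQUATE:
        return "inadequate"
    return "unclassified"      # reports lacking a descriptor were excluded

assert classify_prep("Adequate to detect polyps >5 mm") == "adequate"
assert classify_prep("Fair") == "inadequate"
```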

Statistical Analysis

After describing the frequency of IBP, the quality of bowel preparation (adequate vs inadequate) was compared across the predictors described above. Categorical variables were reported as frequencies with percentages, and continuous variables were reported as medians with 25th-75th percentile values. Two-sided chi-square tests were used to assess the significance of differences between categorical variables, and the Wilcoxon rank-sum test was used for continuous variables.
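
These unadjusted comparisons can be sketched as follows, using synthetic data in place of the study cohort; variable names are illustrative.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency, mannwhitneyu

# Synthetic stand-in for the cohort, for illustration only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ibp": rng.integers(0, 2, 500),              # 1 = inadequate preparation
    "opiate_within_3d": rng.integers(0, 2, 500),
    "age": rng.normal(64, 12, 500),
})

# Categorical predictor vs IBP: two-sided chi-square test on the 2x2 table.
chi2, p_cat, _, _ = chi2_contingency(pd.crosstab(df["opiate_within_3d"], df["ibp"]))

# Continuous predictor vs IBP: Wilcoxon rank-sum (Mann-Whitney U) test.
_, p_cont = mannwhitneyu(
    df.loc[df["ibp"] == 1, "age"],
    df.loc[df["ibp"] == 0, "age"],
    alternative="two-sided",
)
print(f"chi-square P = {p_cat:.3f}; rank-sum P = {p_cont:.3f}")
```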

Multivariate logistic regression was performed to assess the association between the aforementioned predictors and IBP, adjusting for all factors simultaneously and clustering by endoscopist. To evaluate the potential impact of modifiable factors on IBP, we performed a counterfactual analysis in which the observed population was compared with a hypothetical population in which all modifiable risk factors were set to their optimal values.
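
The counterfactual step can be sketched as: fit the logistic model, then re-score a copy of the cohort with every modifiable factor set to its optimal value and compare mean predicted probabilities. The data below are simulated (with coefficients loosely based on the odds ratios reported in the Results), and the model specification is illustrative rather than the authors' actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated cohort; effect sizes loosely mirror the reported odds ratios.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "opiate_within_3d": rng.integers(0, 2, n),
    "pm_procedure": rng.integers(0, 2, n),
    "solid_diet": rng.integers(0, 2, n),
})
logit = (-0.4 + np.log(1.31) * df["opiate_within_3d"]
         + np.log(1.25) * df["pm_procedure"]
         + np.log(1.37) * df["solid_diet"])
df["ibp"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("ibp ~ opiate_within_3d + pm_procedure + solid_diet",
                  data=df).fit(disp=0)

observed = model.predict(df).mean()
# Counterfactual population: all modifiable factors set to their optimum.
optimal = df.assign(opiate_within_3d=0, pm_procedure=0, solid_diet=0)
reduction = observed - model.predict(optimal).mean()
print(f"expected absolute reduction in IBP rate: {reduction:.1%}")
```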

RESULTS

Overall, 8,819 patients were included in the study population. They had a median age of 64 [53-76] years; 50.5% were female, and 51% had IBP. Patient characteristics and rates of IBP are presented in Table 1.

In unadjusted analyses of modifiable factors, opiate use within three days of colonoscopy was associated with a higher rate of IBP (55.4% vs 47.3%, P < .001), as was lower volume (<4 L) bowel preparation (55.3% vs 50.4%, P = .003). IBP was less frequent when colonoscopy was performed before noon rather than in the afternoon (50.3% vs 57.4%, P < .001) and when patients were documented to have received a clear liquid diet or nil per os, rather than a solid diet, the day prior to colonoscopy (50.3% vs 57.4%, P < .001). Overall bowel preparation quality improved over time (Figure 1). Median LOS was five [3-11] days. Patients who had IBP on their initial colonoscopy had a LOS one day longer than patients without IBP (six days vs five days, P < .001).

Multivariate Analysis

Table 2 shows the results of the multivariate analysis. The following modifiable factors were associated with IBP: opiate use within three days of the procedure (OR, 1.31; 95% CI, 1.18, 1.45), colonoscopy performed after 12:00 PM (OR, 1.25; 95% CI, 1.10, 1.41), and consumption of a solid diet the day prior to the colonoscopy (OR, 1.37; 95% CI, 1.18, 1.59). The volume of bowel preparation, however, was not associated with IBP. Nonmodifiable factors associated with IBP included age (per five-year increment; OR, 1.04; 95% CI, 1.02, 1.05), male gender (OR, 1.33; 95% CI, 1.23, 1.44), Medicare insurance (OR, 1.17; 95% CI, 1.07, 1.28), Medicaid insurance (OR, 1.34; 95% CI, 1.07, 1.28), gastroparesis (OR, 1.62; 95% CI, 1.16, 2.27), nausea/vomiting (OR, 1.21; 95% CI, 1.09, 1.34), and dysphagia (OR, 1.16; 95% CI, 1.01, 1.34).
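
As a worked illustration of what an odds ratio of this size means on the probability scale (using the unadjusted 47.3% IBP rate among patients without recent opiate use as a convenient baseline; this is an approximation, since the adjusted model's baseline differs):

\[
\text{odds}_0 = \frac{0.473}{1 - 0.473} \approx 0.90, \qquad
\text{odds}_1 = 1.31 \times 0.90 \approx 1.18, \qquad
p_1 = \frac{1.18}{1 + 1.18} \approx 0.54
\]

At this baseline, the adjusted OR of 1.31 for opiate use corresponds to an IBP probability of roughly 54% versus 47%, consistent with the unadjusted rates reported above.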


Potential Impact of Modifiable Variables

We conducted a counterfactual analysis based on the multivariate model to assess the impact of each modifiable risk factor on the IBP rate (Figure 1). In the study population, 44.9% received an opiate, 39.3% had a colonoscopy after 12:00 PM, and 9.1% received solid food the day prior to the procedure. Holding all other factors constant, if no patients were prescribed opiates within three days of the procedure, a 2.9% reduction in IBP would be expected. Similarly, if all patients underwent colonoscopy before noon, a 2.1% reduction in the IBP rate would be expected, and a 0.7% reduction would be expected if all patients were maintained on a liquid diet or nil per os. Combined, instituting all these changes (no opiates or solid diet before colonoscopy and performing all colonoscopies before noon) would be expected to produce a 5.6% reduction in the IBP rate, marginally less than the sum of the individual estimates because the effects combine on the odds rather than the additive scale.

DISCUSSION

In this large, multihospital cohort, IBP was documented in approximately half (51%) of the 8,819 inpatient colonoscopies performed. Nonmodifiable patient characteristics independently associated with IBP were age, male gender, white race, Medicare or Medicaid insurance, nausea/vomiting, dysphagia, and gastroparesis. Modifiable factors associated with adequate preparation included avoidance of opiates within three days of colonoscopy, avoidance of a solid diet the day prior to colonoscopy, and performance of the colonoscopy before noon. The volume of bowel preparation consumed was not associated with IBP. In a counterfactual analysis, we found that if all three modifiable factors were optimized, the predicted rate of IBP would drop to 45%.

Many studies, including our analysis, have shown significant differences in the frequency of IBP between inpatient and outpatient bowel preparations.8-11 It is therefore crucial to study IBP in these settings separately. Three single-institution studies, including a total of 898 patients, have identified risk factors for inpatient IBP. Individual studies ranged in size from 130 to 524 patients, with rates of IBP ranging from 22% to 57%.1-3 They found IBP to be associated with increasing age, lower income, ASA grade >3, diabetes, coronary artery disease (CAD), nausea or vomiting, BMI >25, and chronic constipation. Modifiable factors included opiates, afternoon procedures, and runway times (the interval between completion of bowel preparation and colonoscopy) >6 hours.

We also found IBP to be associated with increasing age and male gender. However, we found no association with diabetes, chronic constipation, CAD, or BMI. Because we were able to adjust for a wider variety of variables, it is possible that we accounted for residual confounding better than previous studies. For example, we found that nausea/vomiting, dysphagia, and gastroparesis were associated with IBP. Gastroparesis with associated nausea and vomiting may be the mechanism by which diabetes increases the risk for IBP. Further studies are needed to assess whether interventions or alternative bowel cleansing regimens in these patients can improve bowel preparation. Finally, in contrast to smaller cohort studies which found that lower volume bowel preparations improved IBP in the right colon,4,12 we found no association between IBP and the volume of bowel preparation consumed. Our impact analysis suggests that avoiding opiates for at least three days before colonoscopy, avoiding a solid diet on the day before colonoscopy, and performing all colonoscopies before noon would reduce the rate of IBP by 5.6%. While at first glance this may not appear to be a substantial change, it is meaningful from a public health perspective, given the thousands of inpatient colonoscopies performed every year. We found that IBP was associated with an increased inpatient LOS of approximately one day. Assuming an average cost of one hospital day of $2,000,13 this 5.6% improvement among our almost 9,000 patients would translate into eliminating 494 unnecessary hospital days, or approximately $1 million in savings. More importantly, these savings come without risk to patients and would represent an improvement in quality.
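
As a back-of-the-envelope check on this estimate, using the figures above:

\[
0.056 \times 8{,}819 \approx 494 \text{ avoided hospital days}, \qquad
494 \times \$2{,}000 \approx \$988{,}000 \approx \$1 \text{ million}
\]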

The factors mentioned above may not always be amenable to modification. For example, for patients with active gastrointestinal bleeding, postponing colonoscopy by one day to maintain the patient on a clear liquid diet may not be feasible. Similarly, performing colonoscopies in the morning depends heavily on endoscopy suite availability and hospital logistics, and denying opiates to patients experiencing severe pain is not ethical. In many scenarios, however, these variables could be modified, and institutional efforts to support these practices could yield considerable savings. Prospective studies are needed to verify the real impact of these changes.

Further discussion is needed to contextualize the finding that colonoscopies performed before noon are associated with better bowel preparation quality. Previous research (albeit in the outpatient setting) has identified 11.8 hours as the upper limit for the interval between the end of bowel preparation and colonoscopy.14 Another study found an inverse relationship between the quality of bowel preparation and the time elapsed after its completion.15 This makes sense physiologically, as a longer interval between completion of bowel preparation and the procedure allows chyme from the small intestine to reaccumulate in the colon. Anecdotally, at our institution as well as many others, bowel preparations are ordered to start in the evening so that the complete preparation is consumed by midnight. As a result of this practice, only patients whose colonoscopies are scheduled before noon fall within the optimal 11.8-hour window. In the outpatient setting, the use of split preparations has eliminated the difference in bowel preparation quality between morning and afternoon colonoscopies.16 Prospective trials are needed to evaluate whether split preparations improve the quality of afternoon inpatient colonoscopies.



Few other strategies have been shown to mitigate IBP in the inpatient setting. In a small randomized controlled trial, Ergen et al. found that providing an educational booklet improved inpatient bowel preparation as measured by the Boston Bowel Preparation Scale.17 In a quasi-experimental study, Yadlapati et al. found that an automated split-dose bowel preparation system resulted in decreased IBP, fewer repeated procedures, shorter LOS, and lower hospital cost.18 Our study adds to these tools by identifying three additional risk factors that could be optimized for inpatients. Because our findings are observational, they should be confirmed in prospective trials. Our study also calls into question the impact of bowel preparation volume: we found no difference in the rate of IBP between low- and large-volume preparations. It is possible that other factors are more important than the specific preparation employed. Information regarding the use of split or same-day preparations was not recorded and therefore could not be assessed in our study.

Interestingly, we found that IBP declined substantially in 2014 and continued to decline thereafter. Procedure year was the most influential risk factor for IBP (on par with gastroparesis). The reason for this is unclear, as rates of our modifiable risk factors did not differ substantially by year. Possible explanations include improved access (including weekend access) to endoscopy coinciding with the opening of a new endoscopy facility, and the use of an integrated irrigation pump system in place of manual syringes for flushing.

Our study has several strengths. It is by far the most extensive study of bowel preparation quality in inpatients to date and the only one to include patient, procedural, and bowel preparation characteristics. The study also has significant limitations. It was conducted within a single health system, which could limit generalizability; nonetheless, the system comprises multiple hospitals in different parts of the United States (Ohio and Florida) and includes a broad population mix with differing levels of acuity. The retrospective design precludes establishing causation, although we mitigated confounding by adjusting for a wide variety of factors, and there is a plausible physiological mechanism for each of the factors we studied. The retrospective design also predisposes our data to omissions and misrepresentations during documentation, especially with the use of ICD codes.19 Inaccuracies in coding are likely to bias toward the null, so the observed associations may underestimate the true associations.

Our inability to ascertain whether patients completed the prescribed bowel preparation limited our ability to detect what may be a significant risk factor. Lastly, while clinically relevant, the Aronchick scale used to distinguish adequate preparation from IBP has never been validated, though it is frequently used and cited in the bowel preparation literature.20


CONCLUSIONS

In this large retrospective study evaluating bowel preparation quality in inpatients undergoing colonoscopy, we found that more than half of patients had IBP and that IBP was associated with an extra day of hospitalization. Our study identifies the patients at highest risk and the modifiable risk factors for IBP. Specifically, we found that abstinence from opiates and from a solid diet before colonoscopy, along with performing colonoscopies before noon, was associated with better bowel preparation. Prospective studies are needed to confirm the effects of these interventions on bowel preparation quality.

Disclosures

Carol A Burke, MD has received research funding from Ferring Pharmaceuticals. Other authors have no conflicts of interest to disclose.

References

1. Yadlapati R, Johnston ER, Gregory DL, Ciolino JD, Cooper A, Keswani RN. Predictors of inadequate inpatient colonoscopy preparation and its association with hospital length of stay and costs. Dig Dis Sci. 2015;60(11):3482-3490. doi: 10.1007/s10620-015-3761-2.
2. Jawa H, Mosli M, Alsamadani W, et al. Predictors of inadequate bowel preparation for inpatient colonoscopy. Turk J Gastroenterol. 2017;28(6):460-464. doi: 10.5152/tjg.2017.17196.
3. McNabb-Baltar J, Dorreen A, Dhahab HA, et al. Age is the only predictor of poor bowel preparation in the hospitalized patient. Can J Gastroenterol Hepatol. 2016;2016:1-5. doi: 10.1155/2016/2139264.
4. Rotondano G, Rispo A, Bottiglieri ME, et al. Tu1503 Quality of bowel cleansing in hospitalized patients is not worse than that of outpatients undergoing colonoscopy: results of a multicenter prospective regional study. Gastrointest Endosc. 2014;79(5):AB564. doi: 10.1016/j.gie.2014.02.949.
5. Ness R. Predictors of inadequate bowel preparation for colonoscopy. Am J Gastroenterol. 2001;96(6):1797-1802. doi: 10.1016/s0002-9270(01)02437-6.
6. Johnson DA, Barkun AN, Cohen LB, et al. Optimizing adequacy of bowel cleansing for colonoscopy: recommendations from the US Multi-Society Task Force on Colorectal Cancer. Gastroenterology. 2014;147(4):903-924. doi: 10.1053/j.gastro.2014.07.002.
7. Aronchick CA, Lipshutz WH, Wright SH, et al. A novel tableted purgative for colonoscopic preparation: efficacy and safety comparisons with Colyte and Fleet Phospho-Soda. Gastrointest Endosc. 2000;52(3):346-352. doi: 10.1067/mge.2000.108480.
8. Froehlich F, Wietlisbach V, Gonvers J-J, Burnand B, Vader J-P. Impact of colonic cleansing on quality and diagnostic yield of colonoscopy: the European Panel of Appropriateness of Gastrointestinal Endoscopy European multicenter study. Gastrointest Endosc. 2005;61(3):378-384. doi: 10.1016/s0016-5107(04)02776-2.
9. Sarvepalli S, Garber A, Rizk M, et al. 923 Adjusted comparison of commercial bowel preparations based on inadequacy of bowel preparation in outpatient settings. Gastrointest Endosc. 2018;87(6):AB127. doi: 10.1016/j.gie.2018.04.1331.
10. Hendry PO, Jenkins JT, Diament RH. The impact of poor bowel preparation on colonoscopy: a prospective single center study of 10 571 colonoscopies. Colorectal Dis. 2007;9(8):745-748. doi: 10.1111/j.1463-1318.2007.01220.x.
11. Lebwohl B, Wang TC, Neugut AI. Socioeconomic and other predictors of colonoscopy preparation quality. Dig Dis Sci. 2010;55(7):2014-2020. doi: 10.1007/s10620-009-1079-7.
12. Chorev N, Chadad B, Segal N, et al. Preparation for colonoscopy in hospitalized patients. Dig Dis Sci. 2007;52(3):835-839. doi: 10.1007/s10620-006-9591-5.
13. Weiss AJ. Overview of Hospital Stays in the United States, 2012. HCUP Statistical Brief #180. Rockville, MD: Agency for Healthcare Research and Quality; 2014.
14. Kojecky V, Matous J, Keil R, et al. The optimal bowel preparation intervals before colonoscopy: a randomized study comparing polyethylene glycol and low-volume solutions. Dig Liver Dis. 2018;50(3):271-276. doi: 10.1016/j.dld.2017.10.010.
15. Siddiqui AA, Yang K, Spechler SJ, et al. Duration of the interval between the completion of bowel preparation and the start of colonoscopy predicts bowel-preparation quality. Gastrointest Endosc. 2009;69(3):700-706. doi: 10.1016/j.gie.2008.09.047.
16. Eun CS, Han DS, Hyun YS, et al. The timing of bowel preparation is more important than the timing of colonoscopy in determining the quality of bowel cleansing. Dig Dis Sci. 2010;56(2):539-544. doi: 10.1007/s10620-010-1457-1.
17. Ergen WF, Pasricha T, Hubbard FJ, et al. Providing hospitalized patients with an educational booklet increases the quality of colonoscopy bowel preparation. Clin Gastroenterol Hepatol. 2016;14(6):858-864. doi: 10.1016/j.cgh.2015.11.015.
18. Yadlapati R, Johnston ER, Gluskin AB, et al. An automated inpatient split-dose bowel preparation system improves colonoscopy quality and reduces repeat procedures. J Clin Gastroenterol. 2018;52(8):709-714. doi: 10.1097/mcg.0000000000000849.
19. Birman-Deych E, Waterman AD, Yan Y, Nilasena DS, Radford MJ, Gage BF. The accuracy of ICD-9-CM codes for identifying cardiovascular and stroke risk factors. Med Care. 2005;43(5):480-485. doi: 10.1097/01.mlr.0000160417.39497.a9.
20. Parmar R, Martel M, Rostom A, Barkun AN. Validated scales for colon cleansing: a systematic review. Am J Gastroenterol. 2016;111(2):197-204. doi: 10.1038/ajg.2015.417.

Journal of Hospital Medicine. 2019;14(5):278-283. Published online first April 8, 2019.

Inadequate bowel preparation (IBP) at the time of inpatient colonoscopy is common and associated with increased length of stay and cost of care.1 The factors that contribute to IBP can be categorized into those that are modifiable and those that are nonmodifiable. While many factors have been associated with IBP, studies have been limited by small sample size or have combined inpatient/outpatient populations, thus limiting generalizability.1-5 Moreover, most factors associated with IBP, such as socioeconomic status, male gender, increased age, and comorbidities, are nonmodifiable. No studies have explicitly focused on modifiable risk factors, such as medication use, colonoscopy timing, or assessed the potential impact of modifying these factors.

In a large, multihospital system, we examine the frequency of IBP among inpatients undergoing colonoscopy along with factors associated with IBP. We attempted to identify modifiable risk factors that were associated with IBP.

METHODS

After obtaining Cleveland Clinic Institutional Review Board approval, records of all adult (≥18 years) inpatients undergoing colonoscopy between January 2011 and June 2017 were obtained. Patients with colonoscopy reports lacking a description of the bowel preparation quality and colonoscopies performed in the intensive care unit were excluded. For each patient, we considered only the first inpatient colonoscopy if more than one occurred during the study period.

Potential Predictors of IBP

Demographic data such as patient age, gender, ethnicity, body mass index (BMI), and insurance/payor status were obtained from the electronic health record (EHR). International Classification of Disease 9th and 10th revision, Clinical Modifications (ICD-9/10-CM) codes were used to obtain patient comorbidities including diabetes, coronary artery disease, heart failure, cirrhosis, gastroparesis, hypothyroidism, inflammatory bowel disease, constipation, stroke, dementia, dysphagia, and nausea/vomiting. Use of opioid medications within three days before colonoscopy was extracted from the medication administration record. These variables were chosen as biologically plausible modifiers of bowel preparation or had previously been assessed in the literature.1-6 The name and volume, classified as 4 L (GoLytely®) and < 4 liters (MoviPrep®) of bowel preparation, time of day when colonoscopy was performed, solid diet the day prior to colonoscopy, type of sedation used (conscious sedation or general anesthesia), and total colonoscopy time (defined as the time from scope insertion to removal) was recorded. Hospitalization-related variables, including the number of hospitalizations in the year before the current hospitalization, the year in which the colonoscopy was performed, and the number of days from admission to colonoscopy, were also obtained from the EHR.

 

 

Outcome Measures

An internally validated natural language algorithm, using Structured Queried Language was used to search through colonoscopy reports to identify adequacy of bowel preparation. ProVation® software allows the gastroenterologist to use some terms to describe bowel preparation in a drop-down menu format. In addition to the Aronchik scale (which allows the gastroenterologist to rate bowel preparation on a five-point scale: “excellent,” “good,” “fair,” “poor,” and “inadequate”) it also allows the provider to use terms such as “adequate” or “adequate to detect polyps >5 mm” as well as “unsatisfactory.”7 Mirroring prior literature, bowel preparation quality was classified into “adequate” and “inadequate”; “good” and “excellent” on the Aronchik scale were categorized as adequate as was the term “adequate” in any form; “fair,” “poor,” or “inadequate” on the Aronchik scale were classified as inadequate as was the term “unsatisfactory.” We evaluated the hospital length of stay (LOS) as a secondary outcome measure.

Statistical Analysis

After describing the frequency of IBP, the quality of bowel preparation (adequate vs inadequate) was compared based on the predictors described above. Categorical variables were reported as frequencies with percentages and continuous variables were reported as medians with 25th-75th percentile values. The significance of the difference between the proportion or median values of those who had inadequate versus adequate bowel preparation was assessed. Two-sided chi-square analysis was used to assess the significance of differences between categorical variables and the Wilcoxon Rank-Sum test was used to assess the significance of differences between continuous variables.

Multivariate logistic regression analysis was performed to assess factors associated with hospital predictors and outcomes, after adjusting for all the aforementioned factors and clustering the effect based on the endoscopist. To evaluate the potential impact of modifiable factors on IBP, we performed counterfactual analysis, in which the observed distribution was compared to a hypothetical population in which all the modifiable risk factors were optimal.

RESULTS

Overall, 8,819 patients were included in our study population. They had a median age of 64 [53-76] years; 50.5% were female and 51% had an IBP. Patient characteristics and rates of IBP are presented in Table 1.

In unadjusted analyses, with regards to modifiable factors, opiate use within three days of colonoscopy was associated with a higher rate of IBP (55.4% vs 47.3%, P <.001), as was a lower volume (<4L) bowel preparation (55.3% vs 50.4%, P = .003). IBP was less frequent when colonoscopy was performed before noon vs afternoon (50.3% vs 57.4%, P < .001), and when patients were documented to receive a clear liquid diet or nil per os vs a solid diet the day prior to colonoscopy (50.3% vs 57.4%, P < .001). Overall bowel preparation quality improved over time (Figure 1). Median LOS was five [3-11] days. Patients who had IBP on their initial colonoscopy had a LOS one day longer than patients without IBP (six days vs five days, P < .001).

Multivariate Analysis

Table 2 shows the results of the multivariate analysis. The following modifiable factors were associated with IBP: opiate used within three days of the procedure (OR 1.31; 95% CI 1.8, 1.45), having the colonoscopy performed after12:00 pm (OR, 1.25; 95% CI, 1.10, 1.41), and consuming a solid diet the day prior to the colonoscopy (OR, 1.37; 95% CI, 1.18, 1.59). However, the volume of bowel preparation was not associated with IBP. The selected nonmodifiable factors that were found to be associated with IBP included age (increment of five years; OR, 1.04; 95% CI, 1.02, 1.05), male gender (OR, 1.33; 95% CI, 1.23, 1.44), Medicare insurance (OR, 1.17; 95% CI, 1.07, 1.28), Medicaid insurance (OR, 1.34; 95% CI, 1.07, 1.28), gastroparesis (OR, 1.62; 95% CI, 1.16, 2.27), nausea/vomiting (OR 1.21; 95% CI, 1.09, 1.34), and dysphagia (OR, 1.16; 95% CI, 1.01, 1.34).

 

 

Potential Impact of Modifiable Variables

We conducted a counterfactual analysis based on a multivariate model to assess the impact of each modifiable risk factor on the IBP rate (Figure 1). In the included study population, 44.9% received an opiate, 39.3% had a colonoscopy after 12:00 pm, and 9.1% received solid food the day prior to the procedure. Holding all other factors constant, if all patients were not prescribed opiates within three days of the procedure a 2.9% reduction in IBP would be expected. Similarly, if all patients underwent colonoscopy before noon, a 2.1% reduction in IBP rate would be expected. A 0.7% reduction would be expected if all patients were maintained on a liquid diet or nil per os. Combined, instituting all these changes (no opiates or solid diet before colonoscopy and performing all colonoscopies before noon) would produce a 5.6% reduction in IBP rate.

DISCUSSION

In this large, multihospital cohort, IBP was documented in half (51%) of 8,819 inpatient colonoscopies performed. Nonmodifiable patient characteristics independently associated with IBP were age, male gender, white race, Medicare and Medicaid insurance, nausea/vomiting, dysphagia, and gastroparesis. Modifiable factors included not consuming opiates within three days of colonoscopy, avoidance of a solid diet the day prior to colonoscopy and performing the colonoscopy before noon. The volume of bowel preparation consumed was not associated with IBP. In a counterfactual analysis, we found that if all three modifiable factors were optimized, the predicted rate of IBP would drop to 45%.

Many studies, including our analysis, have shown significant differences between the frequency of IBP in inpatient versus outpatient bowel preparations.8-11 Therefore, it is crucial to study IBP in these settings separately. Three single-institution studies, including a total of 898 patients, have identified risk factors for inpatient IBP. Individual studies ranged in size from 130 to 524 patients with rates of IBP ranging from 22%-57%.1-3 They found IBP to be associated with increasing age, lower income, ASA Grade >3, diabetes, coronary artery disease (CAD), nausea or vomiting, BMI >25, and chronic constipation. Modifiable factors included opiates, afternoon procedures, and runway times >6 hours.

We also found IBP to be associated with increasing age and male gender. However, we found no association with diabetes, chronic constipation, CAD or BMI. As we were able to adjust for a wider variety of variables, it is possible that we were able to account for residual confounding better than previous studies. For example, we found that having nausea/vomiting, dysphagia, and gastroparesis was associated with IBP. Gastroparesis with associated nausea and vomiting may be the mechanism by which diabetes increases the risk for IBP. Further studies are needed to assess if interventions or alternative bowel cleansing in these patients can result in improved IBP. Finally, in contrast to studies with smaller cohorts which found that lower volume bowel preps improved IBP in the right colon,4,12 we found no association between IBP based and volume of bowel preparation consumed. Our impact analysis suggests that avoidance of opiates for at least three days before colonoscopy, avoidance of solid diet on the day before colonoscopy and performing all colonoscopies before noon would reduce the rate of IBP by 5.6%. While at first glance this does not appear to be a significant change, from a public health perspective with thousands of inpatient colonoscopies performed every year, it is crucial. We found that IBP was associated with an increased inpatient LOS of approximately one day. Assuming an average cost of one hospital day to be $2,000,13 this 5.6% improvement among our almost 9,000 patients, would translate into eliminating 494 unnecessary hospital days, or approximately $1 million in savings. More importantly, this savings comes without risk to patients and would result in an improvement in quality.

The factors mentioned above may not always be amenable to modification. For example, for patients with active gastrointestinal bleeding, postponing colonoscopy by one day for the sake of maintaining a patient on a clear diet may not be feasible. Similarly, performing colonoscopies in the morning is highly dependent on endoscopy suite availability and hospital logistics. Denying opiates to patients experiencing severe pain is not ethical. In many scenarios, however, these variables could be modified, and institutional efforts to support these practices could yield considerable savings. Future prospective studies are needed to verify the real impact of these changes.

Further discussion is needed to contextualize the finding that colonoscopies scheduled in the afternoon are associated with improved bowel preparation quality. Previous research—albeit in the outpatient setting—has demonstrated 11.8 hours as the maximum upper time limit for the time elapsed between the end of bowel preparation to colonoscopy.14 Another study found an inverse relationship between the quality of bowel preparation and the time after completion of the bowel preparation.15 This makes sense from a physiological perspective as delaying the time between completion of bowel preparation, and the procedure allows chyme from the small intestine to reaccumulate in the colon. Anecdotally, at our institution as well as at many others, the bowel preparations are ordered to start in the evening to allow the consumption of complete bowel preparation by midnight. As a result of this practice, only patients who have their colonoscopies scheduled before noon fall within the optimal period of 11.8 hours. In the outpatient setting, the use of split preparations has led to the obliteration of the difference in the quality of bowel preparation between morning and afternoon colonoscopies.16 Prospective trials are needed to evaluate the use of split preparations to improve the quality of afternoon inpatient colonoscopies.



Few other strategies have been shown to mitigate IBP in the inpatient setting. In a small randomized controlled trial, Ergen et al. found that providing an educational booklet improved inpatient bowel preparation as measured by the Boston Bowel Preparation Scale.17 In a quasi-experimental design, Yadlapati et al. found that an automated split-dose bowel preparation resulted in decreased IBP, fewer repeated procedures, shorter LOS, and lower hospital cost.18 Our study adds to these tools by identifying three additional risk factors which could be optimized for inpatients. Because our findings are observational, they should be subjected to prospective trials. Our study also calls into question the impact of bowel preparation volume. We found no difference in the rate of IBP between low and large volume preparations. It is possible that other factors are more important than the specific preparation employed. Information regarding the use of split preparations or same day preparations was not recorded and therefore not assessed in our study.

Interestingly, we found that IBP declined substantially in 2014 and continued to decline after that. The year was the most influential risk factor for IBP (on par with gastroparesis). The reason for this is unclear, as rates of our modifiable risk factors did not differ substantially by year. Other possibilities include improved access (including weekend access) to endoscopy coinciding with the development of a new endoscopy facility and use of integrated irrigation pump system instead of the use of manual syringes for flushing.

Our study has many strengths. It is by far the most extensive study of bowel preparation quality in inpatients to date and the only one that has included patient, procedural and bowel preparation characteristics. The study also has several significant limitations. This is a single center study, which could limit generalizability. Nonetheless, it was conducted within a health system with multiple hospitals in different parts of the United States (Ohio and Florida) and included a broad population mix with differing levels of acuity. The retrospective nature of the assessment precludes establishing causation. However, we mitigated confounding by adjusting for a wide variety of factors, and there is a plausible physiological mechanism for each of the factors we studied. Also, the retrospective nature of our study predisposes our data to omissions and misrepresentations during the documentation process. This is especially true with the use of ICD codes.19 Inaccuracies in coding are likely to bias toward the null, so observed associations may be an underestimate of the true association.

Our inability to ascertain if a patient completed the prescribed bowel preparation limited our ability to detect what may be a significant risk factor. Lastly, while clinically relevant, the Aronchik scale used to identify adequate from IBP has never been validated though it is frequently utilized and cited in the bowel preparation literature.20

 

 

CONCLUSIONS

In this large retrospective study evaluating bowel preparation quality in inpatients undergoing colonoscopy, we found that more than half of the patients have IBP and that IBP was associated with an extra day of hospitalization. Our study identifies those patients at highest risk and identifies modifiable risk factors for IBP. Specifically, we found that abstinence from opiates or solid diet before the colonoscopy, along with performing colonoscopies before noon were associated with improved outcomes. Prospective studies are needed to confirm the effects of these interventions on bowel preparation quality.

Disclosures

Carol A Burke, MD has received research funding from Ferring Pharmaceuticals. Other authors have no conflicts of interest to disclose.

Inadequate bowel preparation (IBP) at the time of inpatient colonoscopy is common and associated with increased length of stay and cost of care.1 The factors that contribute to IBP can be categorized into those that are modifiable and those that are nonmodifiable. While many factors have been associated with IBP, studies have been limited by small sample size or have combined inpatient/outpatient populations, thus limiting generalizability.1-5 Moreover, most factors associated with IBP, such as socioeconomic status, male gender, increased age, and comorbidities, are nonmodifiable. No studies have explicitly focused on modifiable risk factors, such as medication use, colonoscopy timing, or assessed the potential impact of modifying these factors.

In a large, multihospital system, we examine the frequency of IBP among inpatients undergoing colonoscopy along with factors associated with IBP. We attempted to identify modifiable risk factors that were associated with IBP.

METHODS

After obtaining Cleveland Clinic Institutional Review Board approval, records of all adult (≥18 years) inpatients undergoing colonoscopy between January 2011 and June 2017 were obtained. Patients with colonoscopy reports lacking a description of the bowel preparation quality and colonoscopies performed in the intensive care unit were excluded. For each patient, we considered only the first inpatient colonoscopy if more than one occurred during the study period.

Potential Predictors of IBP

Demographic data such as patient age, gender, ethnicity, body mass index (BMI), and insurance/payor status were obtained from the electronic health record (EHR). International Classification of Disease 9th and 10th revision, Clinical Modifications (ICD-9/10-CM) codes were used to obtain patient comorbidities including diabetes, coronary artery disease, heart failure, cirrhosis, gastroparesis, hypothyroidism, inflammatory bowel disease, constipation, stroke, dementia, dysphagia, and nausea/vomiting. Use of opioid medications within three days before colonoscopy was extracted from the medication administration record. These variables were chosen as biologically plausible modifiers of bowel preparation or had previously been assessed in the literature.1-6 The name and volume, classified as 4 L (GoLytely®) and < 4 liters (MoviPrep®) of bowel preparation, time of day when colonoscopy was performed, solid diet the day prior to colonoscopy, type of sedation used (conscious sedation or general anesthesia), and total colonoscopy time (defined as the time from scope insertion to removal) was recorded. Hospitalization-related variables, including the number of hospitalizations in the year before the current hospitalization, the year in which the colonoscopy was performed, and the number of days from admission to colonoscopy, were also obtained from the EHR.

 

 

Outcome Measures

An internally validated natural language algorithm, using Structured Queried Language was used to search through colonoscopy reports to identify adequacy of bowel preparation. ProVation® software allows the gastroenterologist to use some terms to describe bowel preparation in a drop-down menu format. In addition to the Aronchik scale (which allows the gastroenterologist to rate bowel preparation on a five-point scale: “excellent,” “good,” “fair,” “poor,” and “inadequate”) it also allows the provider to use terms such as “adequate” or “adequate to detect polyps >5 mm” as well as “unsatisfactory.”7 Mirroring prior literature, bowel preparation quality was classified into “adequate” and “inadequate”; “good” and “excellent” on the Aronchik scale were categorized as adequate as was the term “adequate” in any form; “fair,” “poor,” or “inadequate” on the Aronchik scale were classified as inadequate as was the term “unsatisfactory.” We evaluated the hospital length of stay (LOS) as a secondary outcome measure.

Statistical Analysis

After describing the frequency of IBP, the quality of bowel preparation (adequate vs inadequate) was compared based on the predictors described above. Categorical variables were reported as frequencies with percentages and continuous variables were reported as medians with 25th-75th percentile values. The significance of the difference between the proportion or median values of those who had inadequate versus adequate bowel preparation was assessed. Two-sided chi-square analysis was used to assess the significance of differences between categorical variables and the Wilcoxon Rank-Sum test was used to assess the significance of differences between continuous variables.

Multivariate logistic regression analysis was performed to assess factors associated with hospital predictors and outcomes, after adjusting for all the aforementioned factors and clustering the effect based on the endoscopist. To evaluate the potential impact of modifiable factors on IBP, we performed counterfactual analysis, in which the observed distribution was compared to a hypothetical population in which all the modifiable risk factors were optimal.

RESULTS

Overall, 8,819 patients were included in our study population. They had a median age of 64 [53-76] years; 50.5% were female and 51% had an IBP. Patient characteristics and rates of IBP are presented in Table 1.

In unadjusted analyses, with regards to modifiable factors, opiate use within three days of colonoscopy was associated with a higher rate of IBP (55.4% vs 47.3%, P <.001), as was a lower volume (<4L) bowel preparation (55.3% vs 50.4%, P = .003). IBP was less frequent when colonoscopy was performed before noon vs afternoon (50.3% vs 57.4%, P < .001), and when patients were documented to receive a clear liquid diet or nil per os vs a solid diet the day prior to colonoscopy (50.3% vs 57.4%, P < .001). Overall bowel preparation quality improved over time (Figure 1). Median LOS was five [3-11] days. Patients who had IBP on their initial colonoscopy had a LOS one day longer than patients without IBP (six days vs five days, P < .001).

Multivariate Analysis

Table 2 shows the results of the multivariate analysis. The following modifiable factors were associated with IBP: opiate used within three days of the procedure (OR 1.31; 95% CI 1.8, 1.45), having the colonoscopy performed after12:00 pm (OR, 1.25; 95% CI, 1.10, 1.41), and consuming a solid diet the day prior to the colonoscopy (OR, 1.37; 95% CI, 1.18, 1.59). However, the volume of bowel preparation was not associated with IBP. The selected nonmodifiable factors that were found to be associated with IBP included age (increment of five years; OR, 1.04; 95% CI, 1.02, 1.05), male gender (OR, 1.33; 95% CI, 1.23, 1.44), Medicare insurance (OR, 1.17; 95% CI, 1.07, 1.28), Medicaid insurance (OR, 1.34; 95% CI, 1.07, 1.28), gastroparesis (OR, 1.62; 95% CI, 1.16, 2.27), nausea/vomiting (OR 1.21; 95% CI, 1.09, 1.34), and dysphagia (OR, 1.16; 95% CI, 1.01, 1.34).

 

 

Potential Impact of Modifiable Variables

We conducted a counterfactual analysis based on a multivariate model to assess the impact of each modifiable risk factor on the IBP rate (Figure 1). In the included study population, 44.9% received an opiate, 39.3% had a colonoscopy after 12:00 pm, and 9.1% received solid food the day prior to the procedure. Holding all other factors constant, if all patients were not prescribed opiates within three days of the procedure a 2.9% reduction in IBP would be expected. Similarly, if all patients underwent colonoscopy before noon, a 2.1% reduction in IBP rate would be expected. A 0.7% reduction would be expected if all patients were maintained on a liquid diet or nil per os. Combined, instituting all these changes (no opiates or solid diet before colonoscopy and performing all colonoscopies before noon) would produce a 5.6% reduction in IBP rate.

DISCUSSION

In this large, multihospital cohort, IBP was documented in half (51%) of 8,819 inpatient colonoscopies performed. Nonmodifiable patient characteristics independently associated with IBP were age, male gender, white race, Medicare and Medicaid insurance, nausea/vomiting, dysphagia, and gastroparesis. Modifiable factors included not consuming opiates within three days of colonoscopy, avoidance of a solid diet the day prior to colonoscopy and performing the colonoscopy before noon. The volume of bowel preparation consumed was not associated with IBP. In a counterfactual analysis, we found that if all three modifiable factors were optimized, the predicted rate of IBP would drop to 45%.

Many studies, including our analysis, have shown significant differences between the frequency of IBP in inpatient versus outpatient bowel preparations.8-11 Therefore, it is crucial to study IBP in these settings separately. Three single-institution studies, including a total of 898 patients, have identified risk factors for inpatient IBP. Individual studies ranged in size from 130 to 524 patients with rates of IBP ranging from 22%-57%.1-3 They found IBP to be associated with increasing age, lower income, ASA Grade >3, diabetes, coronary artery disease (CAD), nausea or vomiting, BMI >25, and chronic constipation. Modifiable factors included opiates, afternoon procedures, and runway times >6 hours.

We also found IBP to be associated with increasing age and male gender. However, we found no association with diabetes, chronic constipation, CAD or BMI. As we were able to adjust for a wider variety of variables, it is possible that we were able to account for residual confounding better than previous studies. For example, we found that having nausea/vomiting, dysphagia, and gastroparesis was associated with IBP. Gastroparesis with associated nausea and vomiting may be the mechanism by which diabetes increases the risk for IBP. Further studies are needed to assess if interventions or alternative bowel cleansing in these patients can result in improved IBP. Finally, in contrast to studies with smaller cohorts which found that lower volume bowel preps improved IBP in the right colon,4,12 we found no association between IBP based and volume of bowel preparation consumed. Our impact analysis suggests that avoidance of opiates for at least three days before colonoscopy, avoidance of solid diet on the day before colonoscopy and performing all colonoscopies before noon would reduce the rate of IBP by 5.6%. While at first glance this does not appear to be a significant change, from a public health perspective with thousands of inpatient colonoscopies performed every year, it is crucial. We found that IBP was associated with an increased inpatient LOS of approximately one day. Assuming an average cost of one hospital day to be $2,000,13 this 5.6% improvement among our almost 9,000 patients, would translate into eliminating 494 unnecessary hospital days, or approximately $1 million in savings. More importantly, this savings comes without risk to patients and would result in an improvement in quality.

The factors mentioned above may not always be amenable to modification. For example, for patients with active gastrointestinal bleeding, postponing colonoscopy by one day for the sake of maintaining a patient on a clear diet may not be feasible. Similarly, performing colonoscopies in the morning is highly dependent on endoscopy suite availability and hospital logistics. Denying opiates to patients experiencing severe pain is not ethical. In many scenarios, however, these variables could be modified, and institutional efforts to support these practices could yield considerable savings. Future prospective studies are needed to verify the real impact of these changes.

Further discussion is needed to contextualize the finding that colonoscopies scheduled in the afternoon are associated with improved bowel preparation quality. Previous research—albeit in the outpatient setting—has demonstrated 11.8 hours as the maximum upper time limit for the time elapsed between the end of bowel preparation to colonoscopy.14 Another study found an inverse relationship between the quality of bowel preparation and the time after completion of the bowel preparation.15 This makes sense from a physiological perspective as delaying the time between completion of bowel preparation, and the procedure allows chyme from the small intestine to reaccumulate in the colon. Anecdotally, at our institution as well as at many others, the bowel preparations are ordered to start in the evening to allow the consumption of complete bowel preparation by midnight. As a result of this practice, only patients who have their colonoscopies scheduled before noon fall within the optimal period of 11.8 hours. In the outpatient setting, the use of split preparations has led to the obliteration of the difference in the quality of bowel preparation between morning and afternoon colonoscopies.16 Prospective trials are needed to evaluate the use of split preparations to improve the quality of afternoon inpatient colonoscopies.



Few other strategies have been shown to mitigate IBP in the inpatient setting. In a small randomized controlled trial, Ergen et al. found that providing an educational booklet improved inpatient bowel preparation as measured by the Boston Bowel Preparation Scale.17 In a quasi-experimental design, Yadlapati et al. found that an automated split-dose bowel preparation resulted in decreased IBP, fewer repeated procedures, shorter LOS, and lower hospital cost.18 Our study adds to these tools by identifying three additional risk factors which could be optimized for inpatients. Because our findings are observational, they should be subjected to prospective trials. Our study also calls into question the impact of bowel preparation volume. We found no difference in the rate of IBP between low and large volume preparations. It is possible that other factors are more important than the specific preparation employed. Information regarding the use of split preparations or same day preparations was not recorded and therefore not assessed in our study.

Interestingly, we found that IBP declined substantially in 2014 and continued to decline after that. The year was the most influential risk factor for IBP (on par with gastroparesis). The reason for this is unclear, as rates of our modifiable risk factors did not differ substantially by year. Other possibilities include improved access (including weekend access) to endoscopy coinciding with the development of a new endoscopy facility and use of integrated irrigation pump system instead of the use of manual syringes for flushing.

Our study has many strengths. It is by far the most extensive study of bowel preparation quality in inpatients to date and the only one that has included patient, procedural and bowel preparation characteristics. The study also has several significant limitations. This is a single center study, which could limit generalizability. Nonetheless, it was conducted within a health system with multiple hospitals in different parts of the United States (Ohio and Florida) and included a broad population mix with differing levels of acuity. The retrospective nature of the assessment precludes establishing causation. However, we mitigated confounding by adjusting for a wide variety of factors, and there is a plausible physiological mechanism for each of the factors we studied. Also, the retrospective nature of our study predisposes our data to omissions and misrepresentations during the documentation process. This is especially true with the use of ICD codes.19 Inaccuracies in coding are likely to bias toward the null, so observed associations may be an underestimate of the true association.

Our inability to ascertain whether a patient completed the prescribed bowel preparation limited our ability to detect what may be a significant risk factor. Lastly, while clinically relevant, the Aronchick scale used to distinguish adequate bowel preparation from IBP has never been validated, though it is frequently used and cited in the bowel preparation literature.20

CONCLUSIONS

In this large retrospective study evaluating bowel preparation quality in inpatients undergoing colonoscopy, we found that more than half of patients had IBP and that IBP was associated with an extra day of hospitalization. Our study identifies the patients at highest risk and the modifiable risk factors for IBP. Specifically, we found that avoiding opiates and solid diet before colonoscopy, along with performing colonoscopies before noon, was associated with improved outcomes. Prospective studies are needed to confirm the effects of these interventions on bowel preparation quality.

Disclosures

Carol A Burke, MD has received research funding from Ferring Pharmaceuticals. Other authors have no conflicts of interest to disclose.

References

1. Yadlapati R, Johnston ER, Gregory DL, Ciolino JD, Cooper A, Keswani RN. Predictors of inadequate inpatient colonoscopy preparation and its association with hospital length of stay and costs. Dig Dis Sci. 2015;60(11):3482-3490. doi: 10.1007/s10620-015-3761-2. PubMed
2. Jawa H, Mosli M, Alsamadani W, et al. Predictors of inadequate bowel preparation for inpatient colonoscopy. Turk J Gastroenterol. 2017;28(6):460-464. doi: 10.5152/tjg.2017.17196. PubMed
3. McNabb-Baltar J, Dorreen A, Dhahab HA, et al. Age is the only predictor of poor bowel preparation in the hospitalized patient. Can J Gastroenterol Hepatol. 2016;2016:1-5. doi: 10.1155/2016/2139264. PubMed
4. Rotondano G, Rispo A, Bottiglieri ME, et al. Tu1503 Quality of bowel cleansing in hospitalized patients is not worse than that of outpatients undergoing colonoscopy: results of a multicenter prospective regional study. Gastrointest Endosc. 2014;79(5):AB564. doi: 10.1016/j.gie.2014.02.949. PubMed
5. Ness R. Predictors of inadequate bowel preparation for colonoscopy. Am J Gastroenterol. 2001;96(6):1797-1802. doi: 10.1016/s0002-9270(01)02437-6. PubMed
6. Johnson DA, Barkun AN, Cohen LB, et al. Optimizing adequacy of bowel cleansing for colonoscopy: recommendations from the US Multi-Society Task Force on Colorectal Cancer. Gastroenterology. 2014;147(4):903-924. doi: 10.1053/j.gastro.2014.07.002. PubMed
7. Aronchick CA, Lipshutz WH, Wright SH, et al. A novel tableted purgative for colonoscopic preparation: efficacy and safety comparisons with Colyte and Fleet Phospho-Soda. Gastrointest Endosc. 2000;52(3):346-352. doi: 10.1067/mge.2000.108480. PubMed
8. Froehlich F, Wietlisbach V, Gonvers J-J, Burnand B, Vader J-P. Impact of colonic cleansing on quality and diagnostic yield of colonoscopy: the European Panel of Appropriateness of Gastrointestinal Endoscopy European multicenter study. Gastrointest Endosc. 2005;61(3):378-384. doi: 10.1016/s0016-5107(04)02776-2. PubMed
9. Sarvepalli S, Garber A, Rizk M, et al. 923 adjusted comparison of commercial bowel preparations based on inadequacy of bowel preparation in outpatient settings. Gastrointest Endosc. 2018;87(6):AB127. doi: 10.1016/j.gie.2018.04.1331. 
10. Hendry PO, Jenkins JT, Diament RH. The impact of poor bowel preparation on colonoscopy: a prospective single center study of 10 571 colonoscopies. Colorectal Dis. 2007;9(8):745-748. doi: 10.1111/j.1463-1318.2007.01220.x. PubMed
11. Lebwohl B, Wang TC, Neugut AI. Socioeconomic and other predictors of colonoscopy preparation quality. Dig Dis Sci. 2010;55(7):2014-2020. doi: 10.1007/s10620-009-1079-7. PubMed
12. Chorev N, Chadad B, Segal N, et al. Preparation for colonoscopy in hospitalized patients. Dig Dis Sci. 2007;52(3):835-839. doi: 10.1007/s10620-006-9591-5. PubMed
13. Weiss AJ. Overview of Hospital Stays in the United States, 2012. HCUP Statistical Brief #180. Rockville, MD: Agency for Healthcare Research and Quality; 2014. PubMed
14. Kojecky V, Matous J, Keil R, et al. The optimal bowel preparation intervals before colonoscopy: a randomized study comparing polyethylene glycol and low-volume solutions. Dig Liver Dis. 2018;50(3):271-276. doi: 10.1016/j.dld.2017.10.010. PubMed
15. Siddiqui AA, Yang K, Spechler SJ, et al. Duration of the interval between the completion of bowel preparation and the start of colonoscopy predicts bowel-preparation quality. Gastrointest Endosc. 2009;69(3):700-706. doi: 10.1016/j.gie.2008.09.047. PubMed
16. Eun CS, Han DS, Hyun YS, et al. The timing of bowel preparation is more important than the timing of colonoscopy in determining the quality of bowel cleansing. Dig Dis Sci. 2010;56(2):539-544. doi: 10.1007/s10620-010-1457-1. PubMed
17. Ergen WF, Pasricha T, Hubbard FJ, et al. Providing hospitalized patients with an educational booklet increases the quality of colonoscopy bowel preparation. Clin Gastroenterol Hepatol. 2016;14(6):858-864. doi: 10.1016/j.cgh.2015.11.015. PubMed
18. Yadlapati R, Johnston ER, Gluskin AB, et al. An automated inpatient split-dose bowel preparation system improves colonoscopy quality and reduces repeat procedures. J Clin Gastroenterol. 2018;52(8):709-714. doi: 10.1097/mcg.0000000000000849. PubMed
19. Birman-Deych E, Waterman AD, Yan Y, Nilasena DS, Radford MJ, Gage BF. The accuracy of ICD-9-CM codes for identifying cardiovascular and stroke risk factors. Med Care. 2005;43(5):480-485. doi: 10.1097/01.mlr.0000160417.39497.a9. PubMed
20. Parmar R, Martel M, Rostom A, Barkun AN. Validated scales for colon cleansing: a systematic review. Am J Gastroenterol. 2016;111(2):197-204. doi: 10.1038/ajg.2015.417. PubMed


Sepsis Presenting in Hospitals versus Emergency Departments: Demographic, Resuscitation, and Outcome Patterns in a Multicenter Retrospective Cohort

Sepsis is both the most expensive condition treated in United States hospitals and the most common cause of in-hospital death.1-3 Most sepsis patients (as many as 80% to 90%) meet sepsis criteria on hospital arrival, but mortality and costs are higher when criteria are first met after admission.3-6 The mechanisms of this increased mortality are not well explored. Patients who become septic in the emergency department (ED) and patients who become septic as inpatients likely pose very different challenges for recognition, treatment, and monitoring.7 Yet how these groups differ by demographic and clinical characteristics, the etiology and severity of infection, and patterns of resuscitation care is not well described. Literature on sepsis epidemiology on hospital wards is particularly limited.8

This knowledge gap is important. If hospital-presenting sepsis (HPS) contributes disproportionately to disease burden, it represents a high-yield population deserving the focus of quality improvement (QI) initiatives. If specific causes of disparities were identified—eg, poor initial resuscitation—they could be specifically targeted for correction. Given that current treatment guidelines are uniform for the two populations,9,10 characterizing phenotypic differences could also have implications for both diagnostic and therapeutic recommendations, particularly if the groups display substantially differing clinical presentations. Our prior work has not probed these effects specifically, but it suggested that ED versus inpatient setting at the time of initial sepsis presentation might be an effect modifier for the association between several elements of fluid resuscitation and patient outcomes.11,12

We therefore conducted a retrospective analysis to ask four sequential questions: (1) Do patients with HPS, compared with ED-presenting sepsis (EDPS), contribute adverse outcomes out of proportion to case prevalence? (2) At the time of initial presentation, how do HPS patients differ from EDPS patients with respect to demographics, comorbidities, infectious etiologies, clinical presentations, and severity of illness? (3) Holding observed baseline factors constant, does the physical location of sepsis presentation inherently increase the risk for treatment delays and mortality? (4) To what extent can differences in the likelihood of timely initial treatment between the ED and inpatient settings explain differences in mortality and patient outcomes?

We hypothesized a priori that HPS would reflect chronically sicker patients who both received less timely resuscitation and contributed disproportionately to adverse outcomes. We expected that disparities in timely resuscitation care would explain a large proportion of this difference.

METHODS

We performed a retrospective analysis of the Northwell Sepsis Database, a prospectively captured, multisite, real-world, consecutive-sample cohort of all “severe sepsis” and septic shock patients treated at nine tertiary and community hospitals in New York from October 1, 2014, to March 31, 2016. We analyzed all patients from a previously published cohort.11

Database Design and Structure

The Northwell Sepsis Database has previously been described in detail.11,13,14 Briefly, all patients met clinical sepsis criteria: (1) infection AND (2) ≥2 systemic inflammatory response syndrome (SIRS) criteria AND (3) ≥1 acute organ dysfunction criterion. Organ dysfunction criteria were hypotension, acute kidney injury (AKI), coagulopathy, altered gas exchange, elevated bilirubin (≥2.0 mg/dL), or altered mental status (AMS; clarified in Supplemental Table 1). Organ dysfunction was counted only when not otherwise explained by the patient’s medical history; eg, patients on warfarin anticoagulation were not documented as having coagulopathy based on an international normalized ratio >1.5. The time of the sepsis episode (and database inclusion) was the time of the first vital sign measurement or laboratory result at which a patient simultaneously met all three inclusion criteria: infection, SIRS, and organ dysfunction. The database excludes patients who were <18 years old, declined bundle interventions, had advance directives precluding interventions, or were admitted directly to palliative care or hospice. Abstractors assumed comorbidities were absent if not documented in the medical record and that physiologic abnormalities were absent if not measured by the treatment team. There were no missing data for the variables analyzed. We report this analysis in adherence with the STROBE statement guidelines for observational research.
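
As an illustration of how this onset timestamp could be operationalized (a minimal sketch with a hypothetical record schema, not the database’s actual abstraction logic), the inclusion time is simply the first observation at which all three criteria hold simultaneously:

```python
from datetime import datetime
from typing import Optional

def sepsis_onset_time(observations: list[dict]) -> Optional[datetime]:
    """Return the first time at which infection, >=2 SIRS criteria, and >=1
    organ dysfunction criterion are all simultaneously met, else None.

    Each observation is a dict with keys 'time' (datetime), 'infection' (bool),
    'sirs_count' (int), and 'organ_dysfunction_count' (int) -- hypothetical schema.
    """
    for obs in sorted(observations, key=lambda o: o["time"]):
        if (obs["infection"]
                and obs["sirs_count"] >= 2
                and obs["organ_dysfunction_count"] >= 1):
            return obs["time"]
    return None  # patient never met all three criteria at once
```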

Exposure

The primary exposure was whether patients had EDPS versus HPS. We defined EDPS patients as meeting all objective clinical inclusion criteria while physically in the ED. We defined HPS as first meeting sepsis inclusion criteria outside the ED, regardless of the reason for admission, and regardless of whether patients were admitted through the ED or directly to the hospital. All ED patients were admitted to the hospital.

Outcomes

Process outcomes were full 3-hour bundle compliance, time to antibiotic administration, blood cultures before antibiotics, time to fluid initiation, volume of administered fluid resuscitation, lactate result time, and whether a repeat lactate was obtained (Supplemental Table 2). Treatment times were times of administration (rather than order times). The primary patient outcome was hospital mortality. Secondary patient outcomes were mechanical ventilation, ICU admission, ICU days, and hospital length of stay (LOS). We discounted HPS patients’ LOS to include only days after meeting the inclusion criteria. Patients already in the ICU before meeting sepsis criteria were excluded from the analysis of the ICU admission outcome.

Statistical Analysis

We report continuous variables as means (standard deviation) or medians (interquartile range), and categorical variables as frequencies (proportions), as appropriate. Summative statistics with 95% confidence intervals (CI) describe overall group contributions. We used generalized linear models to determine patient factors associated with EDPS versus HPS, entering random effects for individual study sites to control for intercenter variability.
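
As a rough sketch of this modeling step (illustrative Python on synthetic data; for simplicity it enters site as a categorical fixed effect, whereas the authors describe random effects for site, and the covariates shown are examples only):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "chf": rng.integers(0, 2, n),
    "age": rng.normal(65, 12, n).round(),
    "site": rng.choice(list("ABC"), n),
})
# Synthetic outcome: probability of HPS rises with CHF and age.
logit_p = -3 + 0.4 * df["chf"] + 0.03 * df["age"]
df["hps"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# C(site) adjusts for hospital as a categorical term, a simplification of
# the random-effects specification described in the text.
model = smf.logit("hps ~ chf + age + C(site)", data=df).fit(disp=0)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```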

Next, to generate a propensity-matched cohort, we computed propensity scores from a priori selected variables: age, sex, tertiary versus community hospital, congestive heart failure (CHF), renal failure, COPD, diabetes, liver failure, immunocompromise, primary source of infection, nosocomial source, temperature, initial lactate, presenting hypotension, altered gas exchange, AMS, AKI, and coagulopathy. We then matched subjects 1:1 without optimization or replacement, imposing a caliper width of 0.01; ie, we required matched pairs to have a <1.0% difference in propensity scores. The macro used to match subjects is publicly available.15
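
A minimal sketch of this matching step (illustrative Python rather than the published SAS macro; the data frame layout and column names are hypothetical): estimate scores by logistic regression, then greedily pair each HPS patient with the nearest unmatched EDPS patient whose score differs by less than the 0.01 caliper.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def propensity_match(df: pd.DataFrame, covariates: list[str],
                     treat_col: str = "hps", caliper: float = 0.01) -> pd.DataFrame:
    """1:1 nearest-neighbor matching without replacement within a caliper."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0]
    used, matched = set(), []
    for idx, row in treated.iterrows():
        candidates = controls[~controls.index.isin(used)]
        if candidates.empty:
            break
        gaps = (candidates["pscore"] - row["pscore"]).abs()
        best = gaps.idxmin()
        if gaps[best] < caliper:  # enforce the <1.0% score difference
            used.add(best)
            matched.extend([idx, best])
    return df.loc[matched]
```

Greedy matching without optimization, as described in the text, does not guarantee a globally optimal pairing; it simply accepts the nearest available control for each treated subject in turn.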

We then compared resuscitation and patient outcomes in the matched cohort using generalized linear models, ie, doubly robust estimation (DRE).16 When assessing patient outcomes corrected for resuscitation, we used mixed DRE/multivariable regression, for two reasons. First, DRE requires only one of the two approaches (propensity score or covariate adjustment) to be correctly specified.16 Second, computing propensity scores adjusted for resuscitation would be inappropriate because resuscitation occurs after exposure allocation (HPS vs EDPS). However, these factors could still affect the outcome, and in fact we hypothesized that they were potential mediators of the exposure effect. To interrogate this mediating relationship, we recapitulated the DRE modeling but added covariates for resuscitation factors. Resuscitation-adjusted models controlled for timeliness of antibiotics, fluids, and lactate results; blood cultures before antibiotics; repeat lactate obtained; and fluid volume in the first six hours. Because ICU days and LOS are subject to competing-risks bias (LOS could be shorter if patients died earlier), we used proportional hazards models in which “the event” was defined as live discharge, censoring for mortality, and we report output as inverse hazard ratios. We also tested interaction coefficients between discrete bundle elements and HPS to determine whether specific bundle elements were effect modifiers for the association between presenting location and mortality risk. Finally, we estimated attributable risk differences by comparing adjusted odds ratios of adverse outcomes with and without adjustment for resuscitation variables, as described by Sahai et al.17
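
Two pieces of this analysis can be made concrete. First, a sketch of the mortality-censored LOS model (illustrative Python using the lifelines package on tiny synthetic data; column names are hypothetical): live discharge is the event, in-hospital death is censored, and the HPS hazard ratio is inverted so that values above 1 indicate longer stays.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Tiny synthetic cohort: 0 in 'discharged_alive' marks an in-hospital death,
# which is treated as censored rather than as the event.
df = pd.DataFrame({
    "los_days":         [5, 8, 12, 20, 4, 9, 15, 30, 6, 18],
    "discharged_alive": [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],
    "hps":              [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
})
cph = CoxPHFitter().fit(df, duration_col="los_days", event_col="discharged_alive")
inverse_hr = 1 / cph.hazard_ratios_["hps"]  # >1 means slower discharge for HPS
```

Second, in the usual formulation of this approach (our reading of the Sahai et al. method), the attributable risk difference computed “with and without adjustment for resuscitation variables” is the proportion of excess risk explained by the added covariates:

```latex
\text{Excess risk explained (\%)} =
\frac{\mathrm{aOR}_{\text{base}} - \mathrm{aOR}_{\text{+resuscitation}}}
     {\mathrm{aOR}_{\text{base}} - 1} \times 100
```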

As sensitivity analyses, we recomputed propensity scores and generated a new matched cohort that excluded HPS patients who met criteria for sepsis while already in the ICU for another reason (ie, excluding ICU-presenting sepsis). We then recapitulated all analyses as above for this cohort. We performed analyses using SAS version 9.4 (SAS Institute, Cary, North Carolina).

RESULTS

Prevalence and Outcome Contributions

Of the 11,182 sepsis patients in the database, we classified 2,509 (22%) as HPS (Figure 1). HPS contributed 785 (35%) of 2,241 sepsis-related mortalities, 1,241 (38%) mechanical ventilations, and 1,762 (34%) ICU admissions. Of 39,263 total ICU days and 127,178 hospital days, HPS contributed 18,104 (46.1%) and 44,412 (34.9%) days, respectively.

Patient Characteristics

Most HPS presented early in the hospital course, with 1,352 (53.9%) cases meeting study criteria within three days of admission. Median time from admission to meeting study criteria for HPS was two days (interquartile range: one to seven days). We report selected baseline patient characteristics in Table 1 and adjusted associations of baseline variables with HPS versus EDPS in Table 2. The full cohort characterization is available in Supplemental Table 3. Notably, HPS patients more often had CHF (adjusted odds ratio [aOR]: 1.31, CI: 1.18-1.47), renal failure (aOR: 1.62, CI: 1.38-1.91), a gastrointestinal source of infection (aOR: 1.84, CI: 1.48-2.29), hypothermia (aOR: 1.56, CI: 1.28-1.90), hypotension (aOR: 1.85, CI: 1.65-2.08), or altered gas exchange (aOR: 2.46, CI: 1.43-4.24). In contrast, HPS patients less frequently were admitted from skilled nursing facilities (aOR: 0.44, CI: 0.32-0.60) or had COPD (aOR: 0.53, CI: 0.36-0.76), fever (aOR: 0.70, CI: 0.52-0.91), tachypnea (aOR: 0.76, CI: 0.58-0.98), or AKI (aOR: 0.82, CI: 0.68-0.97). Other baseline variables were similar, including respiratory source, tachycardia, white cell abnormalities, AMS, and coagulopathies. These associations were preserved in the sensitivity analysis excluding ICU-presenting sepsis.

Propensity Matching

Propensity score matching yielded 1,942 matched pairs (n = 3,884, 77% of HPS patients, 22% of EDPS patients). Table 1 and Supplemental Table 3 show patient characteristics after propensity matching. Supplemental Table 4 shows the propensity model. The frequency densities are shown for the cohort as a function of propensity score in Supplemental Figure 1. After matching, frequencies between groups differed by <5% for all categorical variables assessed. In the sensitivity analysis, propensity matching (model in Supplemental Table 5) resulted in 1,233 matched pairs (n = 2,466, 49% of HPS patients, 14% of EDPS patients), with group differences comparable to the primary analysis.

Process Outcomes

We present propensity-matched differences in initial resuscitation in Figure 2A for all HPS patients, as well as for non-ICU-presenting HPS, versus EDPS. HPS patients were roughly half as likely to receive care fully compliant with the 3-hour bundle (17.0% vs 30.3%, aOR: 0.47, CI: 0.40-0.57), to have blood cultures drawn within three hours and prior to antibiotics (44.9% vs 67.2%, aOR: 0.40, CI: 0.35-0.46), or to have fluid resuscitation initiated within two hours (11.1% vs 26.1%, aOR: 0.35, CI: 0.29-0.42). Antibiotic receipt within one hour was comparable (45.3% vs 48.1%, aOR: 0.89, CI: 0.79-1.01). However, differences emerged for antibiotics within three hours (66.2% vs 83.8%, aOR: 0.38, CI: 0.32-0.44) and persisted at six hours (77.0% vs 92.5%, aOR: 0.27, CI: 0.22-0.33). Excluding ICU-presenting sepsis from propensity matching exaggerated disparities in antibiotic receipt at one hour (43.4% vs 49.1%, aOR: 0.80, CI: 0.68-0.93), three hours (64.2% vs 86.1%, aOR: 0.29, CI: 0.24-0.35), and six hours (75.7% vs 93.0%, aOR: 0.23, CI: 0.18-0.30). HPS patients more frequently had a repeat lactate obtained within 24 hours (62.4% vs 54.3%, aOR: 1.40, CI: 1.23-1.59).

Patient Outcomes

HPS patients had higher rates of mortality (31.2% vs 19.3%), mechanical ventilation (51.5% vs 27.4%), and ICU admission (60.6% vs 46.5%) (Table 1 and Supplemental Table 6). Figure 2B shows propensity-matched and covariate-adjusted differences in patient outcomes before and after adjusting for initial resuscitation. aORs corresponded to approximate relative risks18 of 1.38 (CI: 1.28-1.48), 1.68 (CI: 1.57-1.79), and 1.72 (CI: 1.61-1.84) for mortality, mechanical ventilation, and ICU admission, respectively. HPS was associated with 83% longer mortality-censored ICU stays (five vs nine days; inverse HR: 1.83, CI: 1.65-2.03) and 108% longer hospital stays (eight vs 17 days; inverse HR: 2.08, CI: 1.93-2.24). After adjustment for resuscitation, all effect sizes decreased but persisted. Initial crystalloid volume was a significant negative effect modifier for mortality (Supplemental Table 7); that is, the magnitude of the association between HPS and greater mortality decreased by a factor of 0.89 per 10 mL/kg given (CI: 0.82-0.97). We did not observe significant interaction for other interventions or for overall bundle compliance, meaning these interventions’ association with mortality did not significantly differ between HPS and EDPS.
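
The transformation cited here (reference 18) approximates the relative risk from an odds ratio when the outcome is common; under that reading, for example, the approximate relative risk of 1.38 for mortality corresponds to an underlying aOR near 1.38² ≈ 1.9:

```latex
\mathrm{RR} \approx \sqrt{\mathrm{OR}}
```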

The implied attributable risk difference from discrepancies in initial resuscitation was 23.3% for mortality, 35.2% for mechanical ventilation, and 7.6% for ICU admission (Figure 2B). Resuscitation explained 26.5% of longer ICU LOS and 16.7% of longer hospital LOS associated with HPS.

Figure 2C shows sensitivity analysis excluding ICU-presenting sepsis from propensity matching (ie, limiting HPS to hospital ward presentations). Again, HPS was associated with all adverse outcomes, though effect sizes were smaller than in the primary cohort for all outcomes except hospital LOS. In this cohort, resuscitation factors now explained 16.5% of HPS’ association with mortality, and 14.5% of the association with longer ICU LOS. However, they explained a greater proportion (13.0%) of ICU admissions. Attributable risk differences were comparable to the primary cohort for mechanical ventilation (37.6%) and hospital LOS (15.3%).

DISCUSSION

In this analysis of 11,182 sepsis and septic shock patients, HPS contributed 22% of case prevalence but >35% of total sepsis mortalities, ICU utilization, and hospital days. HPS patients had higher comorbidity burdens and clinical presentations that were less obviously attributable to infection, with more severe organ dysfunction. EDPS patients received antibiotics within three hours about 1.62 times as often as HPS patients and had fluids initiated within two hours about 1.82 times as often. HPS patients had nearly 1.5-fold greater mortality and LOS, and nearly two-fold greater mechanical ventilation and ICU utilization. Resuscitation disparities could partially explain these associations. These patterns persisted when comparing only ward-presenting HPS with EDPS.

Our analysis revealed several notable findings. First, these data confirm that HPS represents a potentially high-impact target population that contributes adverse outcomes disproportionately frequently with respect to case prevalence.

Our findings, unsurprisingly, revealed that HPS and EDPS reflect dramatically different patient populations; the two groups differed significantly on the majority of the baseline factors we compared. It may be worth asking if and how these substantial differences in illness etiology, chronic health, and acute physiology affect what we consider an optimal approach to management. The significant interaction effect of fluid volume on the association between HPS and mortality suggests that differential treatment effects may exist between patients. Indeed, patients who newly arrive from the community and those who are several days into an admission likely have different volume status. However, no interactions were noted with other bundle elements, such as timeliness of antibiotics or timeliness of initial fluids.

Another potentially concerning observation was that HPS patients were admitted much less frequently from skilled nursing facilities, which could imply that this poorer-faring population had a comparatively higher baseline functional status. The fact that 25% of EDPS cases were admitted from these facilities also underscores the need to engage skilled nursing facility providers in future sepsis initiatives.

We found marked disparities in resuscitation. Timely delivery of interventions, such as antibiotics and initial fluid resuscitation, occurred less than half as often for HPS, especially on hospital wards. While evidence supporting the efficacy of specific 3-hour bundle elements remains unsettled,19 a wealth of literature demonstrates a correlation between bundle uptake and decreased sepsis mortality, especially for early antibiotic administration.13,20-26 Some analyses suggest that differing initial resuscitation practices explain the discordant mortality rates in the early goal-directed therapy trials.27 The comparatively poor performance for non-ICU HPS indicates that further QI efforts are better focused on inpatient wards rather than on EDs or ICUs, where resuscitation is already delivered with substantially greater fidelity.

While resuscitation differences partially explained outcome discrepancies between groups, they did not account for as much variation as expected. Though resuscitation accounted for >35% of attributable mechanical ventilation risk, it explained only 16.5% of mortality differences for non-ICU HPS vs EDPS. We speculate that several factors may contribute.

First, HPS patients are already hospitalized for another acute insult and may be too physiologically brittle to derive equal benefit from initial resuscitation. Some literature suggests protocolized sepsis resuscitation may paradoxically be more effective in milder/earlier disease.28

Second, clinical information indicating septic organ dysfunction may become available too late in HPS—inpatient providers, perhaps counterintuitively, may be more likely to miss early signs of patients’ deterioration and thus the subsequent therapeutic window. Several studies found that fluid resuscitation is associated with improved sepsis outcomes only when it is administered very early.11,29-31 On inpatient wards, decreased monitoring32 and human factors (eg, hospital workflow, provider-to-patient ratios, electronic documentation burdens)33,34 may hinder early diagnosis. In contrast, ED environments are explicitly designed to identify acutely ill patients and deliver intervention rapidly. If HPS patients were sicker when they were identified, this would also explain their more severe organ dysfunction. Our data seem to support this possibility: HPS patients less frequently had tachypnea but more often had impaired gas exchange, suggesting that early tachypnea was either less often detected or documented, or that disease had progressed further by the time of detection.

Third, inpatients with sepsis may present with greater diagnostic complexity. We observed that HPS patients were more often euthermic and less often tachypneic. Beyond suggesting a greater diagnostic challenge, this also raises the question of whether these differences reflect patient physiology (response to infection) or iatrogenic factors (eg, prior antipyretics). Higher comorbidity and acute physiological burdens also limit the degree to which new organ dysfunction can be clearly attributed to infection. We note that the difference in the proportion of patients who had received antibiotics widened over time, suggesting that HPS patients who received delayed antibiotics did so much later than their EDPS counterparts. This lag could also arise from diagnostic difficulty.

All three possibilities highlight a potential lead-time effect: the same measured three-hour period on the wards, between meeting sepsis criteria and starting treatment, actually reflects a longer period between the (as yet unmeasurable) pathobiologic “time zero” and treatment than it does in the ED. The time of sepsis detection, as distinct from the time of sepsis onset, is therefore difficult to evaluate and impossible to account for statistically.

Regardless, our findings suggest additional difficulty in both the recognition and the resuscitation of inpatient sepsis. Inpatients, especially those with infections, may need closer monitoring. How to implement such monitoring cost-effectively is a challenge that deserves attention.

A more rational systems approach to HPS likely combines efforts to improve initial resuscitation with other initiatives aimed at both improving monitoring and preventing infection.

To be clear, we do not imply that timely initial resuscitation does not matter on the wards. Rather, resuscitation-focused QI alone does not appear to be sufficient to overcome differences in outcomes for HPS. The 23.3% attributable mortality risk we observed still implies that resuscitation differences could explain nearly one in four excess HPS mortalities. We previously showed that timely resuscitation is strongly associated with better outcomes.11,13,30 As discussed above, the unclear degree to which better resuscitation is a marker for more obvious presentations is a persistent limitation of prior investigations and the present study.

Taken together, the ultimate question that this study raises but cannot answer is whether the timely recognition of sepsis, rather than any specific treatment, is what truly improves outcomes.

In addition to those above, this study has several limitations. Our study did not differentiate HPS patients admitted for noninfectious reasons who subsequently became septic from nonseptic patients admitted for an infection who subsequently became septic from that infection. Nor could we discriminate between missed ED diagnoses and true delayed presentations; we note that distinguishing these entities clinically can be equally challenging. Additionally, this was a propensity-matched retrospective analysis of an existing sepsis cohort, and the many limitations of both retrospective study and propensity matching apply.35,36 We note that randomizing patients to develop sepsis in the community versus the hospital is not feasible and that two of our aims intended to describe overall patterns rather than causal effects. We could not ascertain robust measures of severity of illness (eg, SOFA) because the real-world setting precludes required data points—eg, urine output is unreliably recorded. We also note incomplete overlap between our inclusion criteria and either the Sepsis-2 or Sepsis-3 definitions,1,37 because we designed and populated our database before publication of Sepsis-3. Further, we could not account for surgical source control, the appropriateness of antimicrobial therapy, mechanical ventilation before sepsis onset, or most treatments given after initial resuscitation.

In conclusion, hospital-presenting sepsis accounted for adverse patient outcomes disproportionately to its prevalence. HPS patients had more complex presentations, received timely antibiotics half as often as patients with ED-presenting sepsis, and had nearly twice the odds of mortality. Resuscitation disparities explained roughly 25% of this difference.

Disclosures

The authors have no conflicts of interest to disclose.

Funding

This investigation was funded in part by a grant from the Center for Medicare and Medicaid Innovation to the High Value Healthcare Collaborative, of which the study sites’ umbrella health system was a part. This grant helped fund the underlying QI program and database in this study.

References

1. Singer M, Deutschman CS, Seymour CW, et al. The third international consensus definitions for sepsis and septic shock (Sepsis-3). JAMA. 2016;315(8):801-810. doi: 10.1001/jama.2016.0287. PubMed
2. Torio CMA, Andrews RMA. National inpatient hospital costs: the most expensive conditions by payer, 2011. In. Statistical Brief No. 160. Rockville, MD: Agency for Healthcare Research and Quality; 2013. PubMed
3. Liu V, Escobar GJ, Greene JD, et al. Hospital deaths in patients with sepsis from 2 independent cohorts. JAMA. 2014;312(1):90-92. doi: 10.1001/jama.2014.5804. PubMed
4. Seymour CW, Liu VX, Iwashyna TJ, et al. Assessment of clinical criteria for sepsis: for the third international consensus definitions for sepsis and septic shock (Sepsis-3). JAMA. 2016;315(8):762-774. doi: 10.1001/jama.2016.0288. PubMed
5. Jones SL, Ashton CM, Kiehne LB, et al. Outcomes and resource use of sepsis-associated stays by presence on admission, severity, and hospital type. Med Care. 2016;54(3):303-310. doi: 10.1097/MLR.0000000000000481. PubMed
6. Page DB, Donnelly JP, Wang HE. Community-, healthcare-, and hospital-acquired severe sepsis hospitalizations in the university healthsystem consortium. Crit Care Med. 2015;43(9):1945-1951. doi: 10.1097/CCM.0000000000001164. PubMed
7. Rothman M, Levy M, Dellinger RP, et al. Sepsis as 2 problems: identifying sepsis at admission and predicting onset in the hospital using an electronic medical record-based acuity score. J Crit Care. 2016;38:237-244. doi: 10.1016/j.jcrc.2016.11.037. PubMed
8. Chan P, Peake S, Bellomo R, Jones D. Improving the recognition of, and response to in-hospital sepsis. Curr Infect Dis Rep. 2016;18(7):20. doi: 10.1007/s11908-016-0528-7. PubMed
9. Rhodes A, Evans LE, Alhazzani W, et al. Surviving sepsis campaign: international guidelines for management of sepsis and septic shock: 2016. Crit Care Med. 2017;45(3):486-552. doi: 10.1097/CCM.0000000000002255. PubMed
10. Levy MM, Evans LE, Rhodes A. The Surviving Sepsis Campaign Bundle: 2018 Update. Crit Care Med. 2018;46(6):997-1000. doi: 10.1097/CCM.0000000000003119. PubMed
11. Leisman DE, Goldman C, Doerfler ME, et al. Patterns and outcomes associated with timeliness of initial crystalloid resuscitation in a prospective sepsis and septic shock cohort. Crit Care Med. 2017;45(10):1596-1606. doi: 10.1097/CCM.0000000000002574. PubMed
12. Leisman DE, Doerfler ME, Schneider SM, Masick KD, D’Amore JA, D’Angelo JK. Predictors, prevalence, and outcomes of early crystalloid responsiveness among initially hypotensive patients with sepsis and septic shock. Crit Care Med. 2018;46(2):189-198. doi: 10.1097/CCM.0000000000002834. PubMed
13. Leisman DE, Doerfler ME, Ward MF, et al. Survival benefit and cost savings from compliance with a simplified 3-hour sepsis bundle in a series of prospective, multisite, observational cohorts. Crit Care Med. 2017;45(3):395-406. doi: 10.1097/CCM.0000000000002184. PubMed
14. Doerfler ME, D’Angelo J, Jacobsen D, et al. Methods for reducing sepsis mortality in emergency departments and inpatient units. Jt Comm J Qual Patient Saf. 2015;41(5):205-211. doi: 10.1016/S1553-7250(15)41027-X. PubMed
15. Murphy B, Fraeman KH. A general SAS® macro to implement optimal N:1 propensity score matching within a maximum radius. In: Paper 812-2017. Waltham, MA: Evidera; 2017. https://support.sas.com/resources/papers/proceedings17/0812-2017.pdf. Accessed February 20, 2019.
16. Funk MJ, Westreich D, Wiesen C, Stürmer T, Brookhart MA, Davidian M. Doubly robust estimation of causal effects. Am J Epidemiol. 2011;173(7):761-767. doi: 10.1093/aje/kwq439. PubMed
17. Sahai H, Khurshid A. Statistics in Epidemiology: Methods, Techniques, and Applications. Boca Raton, FL: CRC Press; 1995. 
18. VanderWeele TJ. On a square-root transformation of the odds ratio for a common outcome. Epidemiology. 2017;28(6):e58-e60. doi: 10.1097/EDE.0000000000000733. PubMed
19. Pepper DJ, Natanson C, Eichacker PQ. Evidence underpinning the centers for medicare & medicaid services’ severe sepsis and septic shock management bundle (SEP-1). Ann Intern Med. 2018;168(8):610-612. doi: 10.7326/L18-0140. PubMed
20. Levy MM, Rhodes A, Phillips GS, et al. Surviving sepsis campaign: association between performance metrics and outcomes in a 7.5-year study. Crit Care Med. 2015;43(1):3-12. doi: 10.1097/CCM.0000000000000723. PubMed
21. Liu VX, Morehouse JW, Marelich GP, et al. Multicenter implementation of a treatment bundle for patients with sepsis and intermediate lactate values. Am J Respir Crit Care Med. 2016;193(11):1264-1270. doi: 10.1164/rccm.201507-1489OC. PubMed
22. Miller RR, Dong L, Nelson NC, et al. Multicenter implementation of a severe sepsis and septic shock treatment bundle. Am J Respir Crit Care Med. 2013;188(1):77-82. doi: 10.1164/rccm.201212-2199OC. PubMed
23. Seymour CW, Gesten F, Prescott HC, et al. Time to treatment and mortality during mandated emergency care for sepsis. N Engl J Med. 2017;376(23):2235-2244. doi: 10.1056/NEJMoa1703058. PubMed
24. Pruinelli L, Westra BL, Yadav P, et al. Delay within the 3-hour surviving sepsis campaign guideline on mortality for patients with severe sepsis and septic shock. Crit Care Med. 2018;46(4):500-505. doi: 10.1097/CCM.0000000000002949. PubMed
25. Kumar A, Roberts D, Wood KE, et al. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med. 2006;34(6):1589-1596. doi: 10.1097/01.CCM.0000217961.75225.E9. PubMed
26. Liu VX, Fielding-Singh V, Greene JD, et al. The timing of early antibiotics and hospital mortality in sepsis. Am J Respir Crit Care Med. 2017;196(7):856-863. doi: 10.1164/rccm.201609-1848OC. PubMed
27. Kalil AC, Johnson DW, Lisco SJ, Sun J. Early goal-directed therapy for sepsis: a novel solution for discordant survival outcomes in clinical trials. Crit Care Med. 2017;45(4):607-614. doi: 10.1097/CCM.0000000000002235. PubMed
28. Kellum JA, Pike F, Yealy DM, et al. Relationship between alternative resuscitation strategies, host response and injury biomarkers, and outcome in septic shock: analysis of the protocol-based care for early septic shock study. Crit Care Med. 2017;45(3):438-445. doi: 10.1097/CCM.0000000000002206. PubMed
29. Seymour CW, Cooke CR, Heckbert SR, et al. Prehospital intravenous access and fluid resuscitation in severe sepsis: an observational cohort study. Crit Care. 2014;18(5):533. doi: 10.1186/s13054-014-0533-x. PubMed
30. Leisman D, Wie B, Doerfler M, et al. Association of fluid resuscitation initiation within 30 minutes of severe sepsis and septic shock recognition with reduced mortality and length of stay. Ann Emerg Med. 2016;68(3):298-311. doi: 10.1016/j.annemergmed.2016.02.044. PubMed
31. Lee SJ, Ramar K, Park JG, Gajic O, Li G, Kashyap R. Increased fluid administration in the first three hours of sepsis resuscitation is associated with reduced mortality: a retrospective cohort study. Chest. 2014;146(4):908-915. doi: 10.1378/chest.13-2702. PubMed
32. Smyth MA, Daniels R, Perkins GD. Identification of sepsis among ward patients. Am J Respir Crit Care Med. 2015;192(8):910-911. doi: 10.1164/rccm.201507-1395ED. PubMed
33. Wenger N, Méan M, Castioni J, Marques-Vidal P, Waeber G, Garnier A. Allocation of internal medicine resident time in a Swiss hospital: a time and motion study of day and evening shifts. Ann Intern Med. 2017;166(8):579-586. doi: 10.7326/M16-2238. PubMed
34. Mamykina L, Vawdrey DK, Hripcsak G. How do residents spend their shift time? A time and motion study with a particular focus on the use of computers. Acad Med. 2016;91(6):827-832. doi: 10.1097/ACM.0000000000001148. PubMed
35. Kaji AH, Schriger D, Green S. Looking through the retrospectoscope: reducing bias in emergency medicine chart review studies. Ann Emerg Med. 2014;64(3):292-298. doi: 10.1016/j.annemergmed.2014.03.025. PubMed
36. Leisman DE. Ten pearls and pitfalls of propensity scores in critical care research: a guide for clinicians and researchers. Crit Care Med. 2019;47(2):176-185. doi: 10.1097/CCM.0000000000003567. PubMed
37. Levy MM, Fink MP, Marshall JC, et al. 2001 SCCM/ESICM/ACCP/ATS/SIS international sepsis definitions conference. Crit Care Med. 2003;31(4):1250-1256. doi: 10.1097/01.CCM.0000050454.01978.3B. PubMed

Article PDF
Issue
Journal of Hospital Medicine 14(6)
Publications
Topics
Page Number
340-348. Published online first April 8, 2019.
Sections
Files
Files
Article PDF
Article PDF

Sepsis is both the most expensive condition treated and the most common cause of death in hospitals in the United States.1-3 Most sepsis patients (as many as 80% to 90%) meet sepsis criteria on hospital arrival, but mortality and costs are higher when meeting criteria after admission.3-6 Mechanisms of this increased mortality for these distinct populations are not well explored. Patients who present septic in the emergency department (ED) and patients who present as inpatients likely present very different challenges for recognition, treatment, and monitoring.7 Yet, how these groups differ by demographic and clinical characteristics, the etiology and severity of infection, and patterns of resuscitation care are not well described. Literature on sepsis epidemiology on hospital wards is particularly limited.8

This knowledge gap is important. If hospital-presenting sepsis (HPS) contributes disproportionately to disease burdCHFens, it reflects a high-yield population deserving the focus of quality improvement (QI) initiatives. If specific causes of disparities were identified—eg, poor initial resuscitation— they could be specifically targeted for correction. Given that current treatment guidelines are uniform for the two populations,9,10 characterizing phenotypic differences could also have implications for both diagnostic and therapeutic recommendations, particularly if the groups display substantially differing clinical presentations. Our prior work has not probed these effects specifically, but suggested ED versus inpatient setting at the time of initial sepsis presentation might be an effect modifier for the association between several elements of fluid resuscitation and patient outcomes.11,12

We, therefore, conducted a retrospective analysis to ask four sequential questions: (1) Do patients with HPS, compared with EDPS, contribute adverse outcome out of proportion to case prevalence? (2) At the time of initial presentation, how do HPS patients differ from EDPS patients with respect to demographics, comorbidities, infectious etiologies, clinical presentations, and severity of illness (3) If holding observed baseline factors constant, does the physical location of sepsis presentation inherently increase the risk for treatment delays and mortality? (4) To what extent can differences in the likelihood for timely initial treatment between the ED and inpatient settings explain differences in mortality and patient outcomes?

We hypothesized a priori that HPS would reflect chronically sicker patients whom both received less timely resuscitation and who contributed disproportionately frequent bad outcomes. We expected disparities in timely resuscitation care would explain a large proportion of this difference.

METHODS

We performed a retrospective analysis of the Northwell Sepsis Database, a prospectively captured, multisite, real world, consecutive-sample cohort of all “severe sepsis” and septic shock patients treated at nine tertiary and community hospitals in New York from October 1, 2014, to March 31, 2016. We analyzed all patients from a previously published cohort.11

 

 

Database Design and Structure

The Northwell Sepsis Database has previously been described in detail.11,13,14 Briefly, all patients met clinical sepsis criteria: (1) infection AND (2) ≥2 (SIRS) criteria AND (3) ≥1 acute organ dysfunction criterion. Organ dysfunction criteria were hypotension, acute kidney injury (AKI), coagulopathy, altered gas exchange, elevated bilirubin (≥2.0 mg/dL), or altered mental status (AMS; clarified in Supplemental Table 1). All organ dysfunction was not otherwise explained by patients’ medical histories; eg, patients on warfarin anticoagulation were not documented to have coagulopathy based on international normalized ratio > 1.5. The time of the sepsis episode (and database inclusion) was the time of the first vital sign measurement or laboratory result where a patient simultaneously met all three inclusion criteria: infection, SIRS, and organ dysfunction. The database excludes patients who were <18 years, declined bundle interventions, had advance directives precluding interventions, or were admitted directly to palliative care or hospice. Abstractors assumed comorbidities were absent if not documented within the medical record and that physiologic abnormalities were absent if not measured by the treatment team. There were no missing data for the variables analyzed. We report analysis in adherence with the STROBE statement guidelines for observational research.

Exposure

The primary exposure was whether patients had EDPS versus HPS. We defined EDPS patients as meeting all objective clinical inclusion criteria while physically in the ED. We defined HPS as first meeting sepsis inclusion criteria outside the ED, regardless of the reason for admission, and regardless of whether patients were admitted through the ED or directly to the hospital. All ED patients were admitted to the hospital.

Outcomes

Process outcomes were full 3-hour bundle compliance, time to antibiotic administration, blood cultures before antibiotics, time to fluid initiation, the volume of administered fluid resuscitation, lactate result time, and whether repeat lactate was obtained (Supplemental Table 2). Treatment times were times of administration (rather than order time). The primary patient outcome was hospital mortality. Secondary patient outcomes were mechanical ventilation, ICU admission, ICU days, hospital length of stay (LOS). We discounted HPS patients’ LOS to include only days after meeting the inclusion criteria. Patients were excluded from the analysis of the ICU admission outcome if they were already in the ICU prior to meeting sepsis criteria.

Statistical Analysis

We report continuous variables as means (standard deviation) or medians (interquartile range), and categorical variables as frequencies (proportions), as appropriate. Summative statistics with 95% confidence intervals (CI) describe overall group contributions. We used generalized linear models to determine patient factors associated with EDPS versus HPS, entering random effects for individual study sites to control for intercenter variability.

Next, to generate a propensity-matched cohort, we computed propensity scores adjusted from a priori selected variables: age, sex, tertiary versus community hospital, congestive heart failure (CHF), renal failure, COPD, diabetes, liver failure, immunocompromise, primary source of infection, nosocomial source, temperature, initial lactate, presenting hypotension, altered gas exchange, AMS, AKI, and coagulopathy. We then matched subjects 1:1 without optimization or replacement, imposing a caliper width of 0.01; ie, we required matched pairs to have a <1.0% difference in propensity scores. The macro used to match subjects is publically available.15

We then compared resuscitation and patient outcomes in the matched cohort using generalized linear models, ie, doubly-robust estimation (DRE).16 When assessing patient outcomes corrected for resuscitation, we used mixed DRE/multivariable regression. We did this for two reasons: first, DRE has the advantage of only requiring only one approach (propensity vs covariate adjustments) to be correctly specified.16 Second, computing propensity scores adjusted for resuscitation would be inappropriate given that resuscitation occurs after the exposure allocation (HPS vs EDPS). However, these factors could still impact the outcome and in fact, we hypothesized they were potential mediators of the exposure effect. To interrogate this mediating relationship, we recapitulated the DRE modeling but added covariates for resuscitation factors. Resuscitation-adjusted models controlled for timeliness of antibiotics, fluids, and lactate results; blood cultures before antibiotics; repeat lactate obtained, and fluid volume in the first six hours. Since ICU days and LOS are subject to competing risks bias (LOS could be shorter if patients died earlier), we used proportional hazards models where “the event” was defined as a live discharge to censor for mortality and we report output as inverse hazard ratios. We also tested interaction coefficients for discrete bundle elements and HPS to determine if specific bundle elements were effect modifiers for the association between the presenting location and mortality risk. Finally, we estimated attributable risk differences by comparing adjusted odds ratios of adverse outcome with and without adjustment for resuscitation variables, as described by Sahai et al.17

As sensitivity analyses, we recomputed propensity scores and generated a new matched cohort that excluded HPS patients who met criteria for sepsis while already in the ICU for another reason (ie, excluding ICU-presenting sepsis). We then recapitulated all analyses as above for this cohort. We performed analyses using SAS version 9.4 (SAS Institute, Cary, North Carolina).

 

 

RESULTS

Prevalence and Outcome Contributions

Of the 11,182 sepsis patients in the database, we classified 2,509 (22%) as HPS (Figure 1). HPS contributed 785 (35%) of 2,241 sepsis-related mortalities, 1,241 (38%) mechanical ventilations, and 1,762 (34%) ICU admissions. Of 39,263 total ICU days and 127,178 hospital days, HPS contributed 18,104 (46.1%) and 44,412 (34.9%) days, respectively.

Patient Characteristics

Most HPS presented early in the hospital course, with 1,352 (53.9%) cases meeting study criteria within three days of admission. Median time from admission to meeting study criteria for HPS was two days (interquartile range: one to seven days). We report selected baseline patient characteristics in Table 1 and adjusted associations of baseline variables with HPS versus EDPS in Table 2. The full cohort characterization is available in Supplemental Table 3. Notably, HPS patients more often had CHF (aOR [adjusted odds ratio}: 1.31, CI: 1.18-1.47) or renal failure (aOR: 1.62, CI: 1.38-1.91), gastrointestinal source of infection (aOR: 1.84, CI: 1.48-2.29), hypothermia (aOR: 1.56, CI: 1.28-1.90) hypotension (aOR: 1.85, CI: 1.65-2.08), or altered gas exchange (aOR: 2.46, CI: 1.43-4.24). In contrast, HPS patients less frequently were admitted from skilled nursing facilities (aOR: 0.44, CI: 0.32-0.60), or had COPD (aOR: 0.53, CI: 0.36-0.76), fever (aOR: 0.70, CI: 0.52-0.91), tachypnea (aOR: 0.76, CI: 0.58-0.98), or AKI (aOR: 082, CI: 0.68-0.97). Other baseline variables were similar, including respiratory source, tachycardia, white cell abnormalities, AMS, and coagulopathies. These associations were preserved in the sensitivity analysis excluding ICU-presenting sepsis.

Propensity Matching

Propensity score matching yielded 1,942 matched pairs (n = 3,884, 77% of HPS patients, 22% of EDPS patients). Table 1 and Supplemental Table 3 show patient characteristics after propensity matching. Supplemental Table 4 shows the propensity model. The frequency densities are shown for the cohort as a function of propensity score in Supplemental Figure 1. After matching, frequencies between groups differed by <5% for all categorical variables assessed. In the sensitivity analysis, propensity matching (model in Supplemental Table 5) resulted in 1,233 matched pairs (n = 2,466, 49% of HPS patients, 14% of EDPS patients), with group differences comparable to the primary analysis.

Process Outcomes

We present propensity-matched differences in initial resuscitation in Figure 2A for all HPS patients, as well as non-ICU-presenting HPS, versus EDPS. HPS patients were roughly half as likely to receive fully 3-hour bundle compliant care (17.0% vs 30.3%, aOR: 0.47, CI: 0.40-0.57), to have blood cultures drawn within three hours prior to antibiotics (44.9% vs 67.2%, aOR: 0.40, CI: 0.35-0.46), or to receive fluid resuscitation initiated within two hours (11.1% vs 26.1%, aOR: 0.35, CI: 0.29-0.42). Antibiotic receipt within one hour was comparable (45.3% vs 48.1%, aOR: 0.89, CI: 0.79-1.01). However, differences emerged for antibiotics within three hours (66.2% vs 83.8%, aOR: 0.38, CI: 0.32-0.44) and persisted at six hours (77.0% vs 92.5%, aOR: 0.27, CI: 0.22-33). Excluding ICU-presenting sepsis from propensity matching exaggerated disparities in antibiotic receipt at one hour (43.4% vs 49.1%, aOR: 0.80, CI: 0.68-0.93), three hours (64.2% vs 86.1%, aOR: 0.29, CI: 0.24-0.35), and six hours (75.7% vs 93.0%, aOR: 0.23, CI: 0.18-0.30). HPS patients more frequently had repeat lactate obtained within 24 hours (62.4% vs 54.3%, aOR: 1.40, CI: 1.23-1.59).

 

 

Patient Outcomes

HPS patients had higher mortality (31.2% vs19.3%), mechanical ventilation (51.5% vs27.4%), and ICU admission (60.6% vs 46.5%) (Table 1 and Supplemental Table 6). Figure 2b shows propensity-matched and covariate-adjusted differences in patient outcomes before and after adjusting for initial resuscitation. aORs corresponded to approximate relative risk differences18 of 1.38 (CI: 1.28-1.48), 1.68 (CI: 1.57-1.79), and 1.72 (CI: 1.61-1.84) for mortality, mechanical ventilation, and ICU admission, respectively. HPS was associated with 83% longer mortality-censored ICU stays (five vs nine days, HR–1: 1.83, CI: 1.65-2.03), and 108% longer hospital stay (eight vs 17 days, HR–1: 2.08, CI: 1.93-2.24). After adjustment for resuscitation, all effect sizes decreased but persisted. The initial crystalloid volume was a significant negative effect modifier for mortality (Supplemental Table 7). That is, the magnitude of the association between HPS and greater mortality decreased by a factor of 0.89 per 10 mL/kg given (CI: 0.82-0.97). We did not observe significant interaction from other interventions, or overall bundle compliance, meaning these interventions’ association with mortality did not significantly differ between HPS versus EDPS.

The implied attributable risk difference from discrepancies in initial resuscitation was 23.3% for mortality, 35.2% for mechanical ventilation, and 7.6% for ICU admission (Figure 2B). Resuscitation explained 26.5% of longer ICU LOS and 16.7% of longer hospital LOS associated with HPS.
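The text does not reproduce the attributable-risk computation. One mediation-style formulation consistent with the description, offered strictly as our assumption rather than the study's exact method, expresses the proportion of excess risk explained as the relative attenuation of the odds ratio after adjusting for resuscitation:

def proportion_explained(or_unadjusted: float, or_adjusted: float) -> float:
    # Hypothetical formulation: share of the excess odds removed by
    # adjustment for the candidate mediators (resuscitation variables).
    return (or_unadjusted - or_adjusted) / (or_unadjusted - 1.0)

# Illustrative numbers only: an OR of 2.0 attenuating to 1.77 would imply
# (2.0 - 1.77) / (2.0 - 1.0) = 0.23, ie, roughly 23% of excess risk explained.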

Figure 2C shows the sensitivity analysis excluding ICU-presenting sepsis from propensity matching (ie, limiting HPS to hospital ward presentations). Again, HPS was associated with all adverse outcomes, though effect sizes were smaller than in the primary cohort for all outcomes except hospital LOS. In this cohort, resuscitation factors explained 16.5% of the association between HPS and mortality and 14.5% of the association with longer ICU LOS, but a greater proportion (13.0%) of ICU admissions. Attributable risk differences were comparable to the primary cohort for mechanical ventilation (37.6%) and hospital LOS (15.3%).

DISCUSSION

In this analysis of 11,182 sepsis and septic shock patients, HPS contributed 22% of prevalence but >35% of total sepsis mortalities, ICU utilization, and hospital days. HPS patients had higher comorbidity burdens, and their clinical presentations were less obviously attributable to infection despite more severe organ dysfunction. EDPS patients received antibiotics within three hours about 1.62 times as often as HPS patients and had fluids initiated within two hours about 1.82 times as often. HPS had nearly 1.5-fold greater mortality and LOS and nearly two-fold greater mechanical ventilation and ICU utilization. Resuscitation disparities could partially explain these associations. These patterns persisted when comparing only ward-presenting HPS with EDPS.

Our analysis revealed several notable findings. First, these data confirm that HPS represents a potentially high-impact target population that contributes adverse outcomes out of proportion to its case prevalence.

Our findings, unsurprisingly, revealed that HPS and EDPS reflect dramatically different patient populations. The two groups differed significantly on the majority of baseline factors we compared. It is worth asking if and how these substantial differences in illness etiology, chronic health, and acute physiology should shape what we consider an optimal approach to management. The significant interaction of fluid volume with the association between HPS and mortality suggests that differential treatment effects may exist between these populations. Indeed, patients newly arrived from the community and patients several days into an admission likely differ in volume status. However, no interactions were noted for other bundle elements, such as the timeliness of antibiotics or of initial fluids.

Another potentially concerning observation was that HPS patients were admitted far less frequently from skilled nursing facilities, which could imply that this poorer-faring population had a comparatively higher baseline functional status. The fact that 25% of EDPS cases were admitted from these facilities also underscores the need to engage skilled nursing facility providers in future sepsis initiatives.

We found marked disparities in resuscitation. Timely delivery of interventions, such as antibiotics and initial fluid resuscitation, occurred less than half as often for HPS, especially on hospital wards. While evidence supporting the efficacy of specific 3-hour bundle elements remains unsettled,19 a wealth of literature demonstrates a correlation between bundle uptake and decreased sepsis mortality, especially for early antibiotic administration.13,20-26 Some analyses suggest that differing initial resuscitation practices explain the different mortality rates in the early goal-directed therapy trials.27 The comparatively poor performance for non-ICU HPS suggests that further QI efforts are best focused on inpatient wards rather than on EDs or ICUs, where resuscitation is already delivered with substantially greater fidelity.

While resuscitation differences partially explained outcome discrepancies between groups, they did not account for as much variation as expected. Though resuscitation accounted for >35% of attributable mechanical ventilation risk, it explained only 16.5% of mortality differences for non-ICU HPS vs EDPS. We speculate that several factors may contribute.

First, HPS patients are already hospitalized for another acute insult and may be too physiologically brittle to derive equal benefit from initial resuscitation. Some literature suggests protocolized sepsis resuscitation may paradoxically be more effective in milder/earlier disease.28

Second, clinical information indicating septic organ dysfunction may become available too late in HPS, such that inpatient providers are, counterintuitively, more likely to miss early signs of deterioration and the therapeutic window that follows. Several studies found that fluid resuscitation is associated with improved sepsis outcomes only when administered very early.11,29-31 On inpatient wards, decreased monitoring32 and human factors (eg, hospital workflow, provider-to-patient ratios, electronic documentation burdens)33,34 may hinder early diagnosis. In contrast, ED environments are explicitly designed to identify acutely ill patients and deliver intervention rapidly. If HPS patients were sicker when identified, this would also explain their more severe organ dysfunction. Our data seem to support this possibility: HPS patients less frequently had tachypnea but more often had impaired gas exchange, suggesting that early tachypnea either went undetected or undocumented, or that disease had progressed further by the time of detection.

Third, inpatients with sepsis may present with greater diagnostic complexity. We observed that HPS patients were more often euthermic and less often tachypneic. Beyond suggesting a greater diagnostic challenge, this raises the question of whether these differences reflect patient physiology (response to infection) or iatrogenic factors (eg, prior antipyretics). Higher comorbidity and acute physiological burdens also limit the degree to which new organ dysfunction can be clearly attributed to infection. We note that the between-group difference in the proportion of patients who had received antibiotics increased over time, suggesting that HPS patients who received delayed antibiotics did so much later than their EDPS counterparts. This lag could also arise from diagnostic difficulty.

All three possibilities highlight a potential lead-time effect: the same measured three-hour period on the wards, between meeting sepsis criteria and starting treatment, may reflect a longer interval between the (as yet unmeasurable) pathobiologic “time zero” and treatment than it does in the ED. The time of sepsis detection, as distinct from the time of sepsis onset, is therefore difficult to evaluate and impossible to account for statistically.

Regardless, our findings suggest additional difficulty in both the recognition and resuscitation of inpatient sepsis. Inpatients, especially those with infections, may need closer monitoring. How to implement this monitoring cost-effectively is a challenge that deserves attention.

A more rational systems approach to HPS likely combines efforts to improve initial resuscitation with other initiatives aimed at both improving monitoring and preventing infection.

To be clear, we do not imply that timely initial resuscitation does not matter on the wards. Rather, resuscitation-focused QI alone does not appear to be sufficient to overcome differences in outcomes for HPS. The 23.3% attributable mortality risk we observed still implies that resuscitation differences could explain nearly one in four excess HPS mortalities. We previously showed that timely resuscitation is strongly associated with better outcomes.11,13,30 As discussed above, the unclear degree to which better resuscitation is a marker for more obvious presentations is a persistent limitation of prior investigations and the present study.

Taken together, the ultimate question that this study raises but cannot answer is whether the timely recognition of sepsis, rather than any specific treatment, is what truly improves outcomes.

In addition to those above, this study has several limitations. Our study could not differentiate between HPS patients admitted for noninfectious reasons who subsequently became septic and patients admitted with an infection who subsequently became septic from that infection. Nor could we discriminate between missed ED diagnoses and true delayed presentations. We note that distinguishing these entities clinically can be equally challenging. Additionally, this was a propensity-matched retrospective analysis of an existing sepsis cohort, and the many limitations of both retrospective studies and propensity matching apply.35,36 We note that randomizing patients to develop sepsis in the community versus the hospital is not feasible and that two of our aims were to describe overall patterns rather than causal effects. We could not ascertain robust measures of severity of illness (eg, SOFA) because the real-world setting precludes the required data points; for example, urine output is unreliably recorded. We also note incomplete overlap between our inclusion criteria and either the Sepsis-2 or Sepsis-3 definitions,1,37 because we designed and populated our database prior to the publication of Sepsis-3. Further, we could not account for surgical source control, the appropriateness of antimicrobial therapy, mechanical ventilation before sepsis onset, or most treatments given after initial resuscitation.

In conclusion, hospital-presenting sepsis accounted for adverse patient outcomes disproportionately to its prevalence. HPS patients had more complex presentations, received timely antibiotics half as often as patients with ED-presenting sepsis, and had nearly twice the odds of mortality. Resuscitation disparities explained roughly 25% of this difference.

 

 

Disclosures

The authors have no conflicts of interest to disclose.

Funding

This investigation was funded in part by a grant from the Center for Medicare and Medicaid Innovation to the High Value Healthcare Collaborative, of which the study sites’ umbrella health system was a part. This grant helped fund the underlying QI program and database in this study.

 


 

References

1. Singer M, Deutschman CS, Seymour CW, et al. The third international consensus definitions for sepsis and septic shock (Sepsis-3). JAMA. 2016;315(8):801-810. doi: 10.1001/jama.2016.0287. PubMed
2. Torio CM, Andrews RM. National inpatient hospital costs: the most expensive conditions by payer, 2011. Statistical Brief No. 160. Rockville, MD: Agency for Healthcare Research and Quality; 2013. PubMed
3. Liu V, Escobar GJ, Greene JD, et al. Hospital deaths in patients with sepsis from 2 independent cohorts. JAMA. 2014;312(1):90-92. doi: 10.1001/jama.2014.5804. PubMed
4. Seymour CW, Liu VX, Iwashyna TJ, et al. Assessment of clinical criteria for sepsis: for the third international consensus definitions for sepsis and septic shock (Sepsis-3). JAMA. 2016;315(8):762-774. doi: 10.1001/jama.2016.0288. PubMed
5. Jones SL, Ashton CM, Kiehne LB, et al. Outcomes and resource use of sepsis-associated stays by presence on admission, severity, and hospital type. Med Care. 2016;54(3):303-310. doi: 10.1097/MLR.0000000000000481. PubMed
6. Page DB, Donnelly JP, Wang HE. Community-, healthcare-, and hospital-acquired severe sepsis hospitalizations in the university healthsystem consortium. Crit Care Med. 2015;43(9):1945-1951. doi: 10.1097/CCM.0000000000001164. PubMed
7. Rothman M, Levy M, Dellinger RP, et al. Sepsis as 2 problems: identifying sepsis at admission and predicting onset in the hospital using an electronic medical record-based acuity score. J Crit Care. 2016;38:237-244. doi: 10.1016/j.jcrc.2016.11.037. PubMed
8. Chan P, Peake S, Bellomo R, Jones D. Improving the recognition of, and response to in-hospital sepsis. Curr Infect Dis Rep. 2016;18(7):20. doi: 10.1007/s11908-016-0528-7. PubMed
9. Rhodes A, Evans LE, Alhazzani W, et al. Surviving sepsis campaign: international guidelines for management of sepsis and septic shock: 2016. Crit Care Med. 2017;45(3):486-552. doi: 10.1097/CCM.0000000000002255. PubMed
10. Levy MM, Evans LE, Rhodes A. The Surviving Sepsis Campaign Bundle: 2018 Update. Crit Care Med. 2018;46(6):997-1000. doi: 10.1097/CCM.0000000000003119. PubMed
11. Leisman DE, Goldman C, Doerfler ME, et al. Patterns and outcomes associated with timeliness of initial crystalloid resuscitation in a prospective sepsis and septic shock cohort. Crit Care Med. 2017;45(10):1596-1606. doi: 10.1097/CCM.0000000000002574. PubMed
12. Leisman DE, Doerfler ME, Schneider SM, Masick KD, D’Amore JA, D’Angelo JK. Predictors, prevalence, and outcomes of early crystalloid responsiveness among initially hypotensive patients with sepsis and septic shock. Crit Care Med. 2018;46(2):189-198. doi: 10.1097/CCM.0000000000002834. PubMed
13. Leisman DE, Doerfler ME, Ward MF, et al. Survival benefit and cost savings from compliance with a simplified 3-hour sepsis bundle in a series of prospective, multisite, observational cohorts. Crit Care Med. 2017;45(3):395-406. doi: 10.1097/CCM.0000000000002184. PubMed
14. Doerfler ME, D’Angelo J, Jacobsen D, et al. Methods for reducing sepsis mortality in emergency departments and inpatient units. Jt Comm J Qual Patient Saf. 2015;41(5):205-211. doi: 10.1016/S1553-7250(15)41027-X. PubMed
15. Murphy B, Fraeman KH. A general SAS® macro to implement optimal N:1 propensity score matching within a maximum radius. In: Paper 812-2017. Waltham, MA: Evidera; 2017. https://support.sas.com/resources/papers/proceedings17/0812-2017.pdf. Accessed February 20, 2019.
16. Funk MJ, Westreich D, Wiesen C, Stürmer T, Brookhart MA, Davidian M. Doubly robust estimation of causal effects. Am J Epidemiol. 2011;173(7):761-767. doi: 10.1093/aje/kwq439. PubMed
17. Sahai H, Khurshid A. Statistics in Epidemiology: Methods, Techniques, and Applications. Boca Raton, FL: CRC Press; 1995.
18. VanderWeele TJ. On a square-root transformation of the odds ratio for a common outcome. Epidemiology. 2017;28(6):e58-e60. doi: 10.1097/EDE.0000000000000733. PubMed
19. Pepper DJ, Natanson C, Eichacker PQ. Evidence underpinning the centers for medicare & medicaid services’ severe sepsis and septic shock management bundle (SEP-1). Ann Intern Med. 2018;168(8):610-612. doi: 10.7326/L18-0140. PubMed
20. Levy MM, Rhodes A, Phillips GS, et al. Surviving sepsis campaign: association between performance metrics and outcomes in a 7.5-year study. Crit Care Med. 2015;43(1):3-12. doi: 10.1097/CCM.0000000000000723. PubMed
21. Liu VX, Morehouse JW, Marelich GP, et al. Multicenter implementation of a treatment bundle for patients with sepsis and intermediate lactate values. Am J Respir Crit Care Med. 2016;193(11):1264-1270. doi: 10.1164/rccm.201507-1489OC. PubMed
22. Miller RR, Dong L, Nelson NC, et al. Multicenter implementation of a severe sepsis and septic shock treatment bundle. Am J Respir Crit Care Med. 2013;188(1):77-82. doi: 10.1164/rccm.201212-2199OC. PubMed
23. Seymour CW, Gesten F, Prescott HC, et al. Time to treatment and mortality during mandated emergency care for sepsis. N Engl J Med. 2017;376(23):2235-2244. doi: 10.1056/NEJMoa1703058. PubMed
24. Pruinelli L, Westra BL, Yadav P, et al. Delay within the 3-hour surviving sepsis campaign guideline on mortality for patients with severe sepsis and septic shock. Crit Care Med. 2018;46(4):500-505. doi: 10.1097/CCM.0000000000002949. PubMed
25. Kumar A, Roberts D, Wood KE, et al. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med. 2006;34(6):1589-1596. doi: 10.1097/01.CCM.0000217961.75225.E9. PubMed
26. Liu VX, Fielding-Singh V, Greene JD, et al. The timing of early antibiotics and hospital mortality in sepsis. Am J Respir Crit Care Med. 2017;196(7):856-863. doi: 10.1164/rccm.201609-1848OC. PubMed
27. Kalil AC, Johnson DW, Lisco SJ, Sun J. Early goal-directed therapy for sepsis: a novel solution for discordant survival outcomes in clinical trials. Crit Care Med. 2017;45(4):607-614. doi: 10.1097/CCM.0000000000002235. PubMed
28. Kellum JA, Pike F, Yealy DM, et al. Relationship between alternative resuscitation strategies, host response and injury biomarkers, and outcome in septic shock: analysis of the protocol-based care for early septic shock study. Crit Care Med. 2017;45(3):438-445. doi: 10.1097/CCM.0000000000002206. PubMed
29. Seymour CW, Cooke CR, Heckbert SR, et al. Prehospital intravenous access and fluid resuscitation in severe sepsis: an observational cohort study. Crit Care. 2014;18(5):533. doi: 10.1186/s13054-014-0533-x. PubMed
30. Leisman D, Wie B, Doerfler M, et al. Association of fluid resuscitation initiation within 30 minutes of severe sepsis and septic shock recognition with reduced mortality and length of stay. Ann Emerg Med. 2016;68(3):298-311. doi: 10.1016/j.annemergmed.2016.02.044. PubMed
31. Lee SJ, Ramar K, Park JG, Gajic O, Li G, Kashyap R. Increased fluid administration in the first three hours of sepsis resuscitation is associated with reduced mortality: a retrospective cohort study. Chest. 2014;146(4):908-915. doi: 10.1378/chest.13-2702. PubMed
32. Smyth MA, Daniels R, Perkins GD. Identification of sepsis among ward patients. Am J Respir Crit Care Med. 2015;192(8):910-911. doi: 10.1164/rccm.201507-1395ED. PubMed
33. Wenger N, Méan M, Castioni J, Marques-Vidal P, Waeber G, Garnier A. Allocation of internal medicine resident time in a Swiss hospital: a time and motion study of day and evening shifts. Ann Intern Med. 2017;166(8):579-586. doi: 10.7326/M16-2238. PubMed
34. Mamykina L, Vawdrey DK, Hripcsak G. How do residents spend their shift time? A time and motion study with a particular focus on the use of computers. Acad Med. 2016;91(6):827-832. doi: 10.1097/ACM.0000000000001148. PubMed
35. Kaji AH, Schriger D, Green S. Looking through the retrospectoscope: reducing bias in emergency medicine chart review studies. Ann Emerg Med. 2014;64(3):292-298. doi: 10.1016/j.annemergmed.2014.03.025. PubMed
36. Leisman DE. Ten pearls and pitfalls of propensity scores in critical care research: a guide for clinicians and researchers. Crit Care Med. 2019;47(2):176-185. doi: 10.1097/CCM.0000000000003567. PubMed
37. Levy MM, Fink MP, Marshall JC, et al. 2001 SCCM/ESICM/ACCP/ATS/SIS international sepsis definitions conference. Crit Care Med. 2003;31(4):1250-1256. doi: 10.1097/01.CCM.0000050454.01978.3B. PubMed


Issue
Journal of Hospital Medicine 14(6)
Page Number
340-348. Published online first April 8, 2019.
Article Source
© 2019 Society of Hospital Medicine
Correspondence Location
Daniel E Leisman, BS; E-mail: deleisman@gmail.com; Telephone: 516-941-8468.

Resuming Anticoagulation following Upper Gastrointestinal Bleeding among Patients with Nonvalvular Atrial Fibrillation—A Microsimulation Analysis


Anticoagulation is commonly used in the management of atrial fibrillation to reduce the risk of ischemic stroke. Warfarin and other anticoagulants increase the risk of hemorrhagic complications, including upper gastrointestinal bleeding (UGIB). Following UGIB, management of anticoagulation is highly variable. Many patients permanently discontinue anticoagulation, while others continue without interruption.1-4 Among patients who resume warfarin, different cohorts have measured median times to resumption ranging from four days to 50 days.1-3 Outcomes data are sparse, and clinical guidelines offer little direction.5

Following UGIB, the balance between the risks and benefits of anticoagulation changes over time. Rebleeding risk is highest immediately after the event and declines quickly; therefore, rapid resumption of anticoagulation causes patient harm.3 Meanwhile, the risk of stroke remains constant, and delay in resumption of anticoagulation is associated with increased risk of stroke and death.1 At some point in time following the initial UGIB, the expected harm from bleeding would equal the expected harm from stroke. This time point would represent the optimal time to restart anticoagulation.
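Stated formally, under the simplest assumptions consistent with this paragraph, a daily rebleeding hazard that decays exponentially and a constant daily stroke hazard, the break-even day has a closed form. The symbols here are ours, introduced purely for illustration:

h_bleed(t) = h0 · exp(-λt), h_stroke(t) = s
h0 · exp(-λt*) = s implies t* = ln(h0 / s) / λ

Before t*, expected bleeding harm exceeds expected stroke harm; afterward, the inequality reverses. The simulation described below relaxes these simplifications by modeling multiple event types and patient-specific risks.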

Trial data are unlikely to identify the optimal time for restarting anticoagulation. A randomized trial comparing discrete reinitiation times (eg, two weeks vs six weeks) may easily miss the optimal timing. Moreover, because the daily probability of thromboembolic events is low, large numbers of patients would be required to power such a study. In addition, a number of oral anticoagulants are now approved for prevention of thromboembolic stroke in atrial fibrillation, and each drug may have different optimal timing.

In contrast to randomized trials that would be impracticable for addressing this clinical issue, microsimulation modeling can provide granular information regarding the optimal time to restart anticoagulation. Herein, we set out to estimate the expected benefit of reinitiation of warfarin, the most commonly used oral anticoagulant,6 or apixaban, the direct oral anticoagulant with the most favorable risk profile,7 as a function of days after UGIB.

METHODS

We previously described a microsimulation model of anticoagulation among patients with nonvalvular atrial fibrillation (NVAF; hereafter, we refer to this model as the Personalized Anticoagulation Decision-Making Assistance model, or PADMA).8,9 For this study, we extended this model to incorporate the probability of rebleeding following UGIB and include apixaban as an alternative to warfarin. This model begins with a synthetic population following UGIB, the members of which are at varying risk for thromboembolism, recurrent UGIB, and other hemorrhages. For each patient, the model simulates a number of possible events (eg, thromboembolic stroke, intracranial hemorrhage, rebleeding, and other major extracranial hemorrhages) on each day of an acute period of 90 days after hemostasis. Patients who survive until the end of the acute period enter a simulation with annual, rather than daily, cycles. Our model then estimates total quality-adjusted life-years (QALYs) for each patient, discounted to the present. We report the average discounted QALYs produced by the model for the same population if all individuals in our input population were to resume either warfarin or apixaban on a specific day. Input parameters and ranges are summarized in Table 1, a simplified schematic of our model is shown in the Supplemental Appendix, and additional details regarding model structure and assumptions can be found in earlier work.8,9 We simulated from a health system perspective over a lifelong time horizon. All analyses were performed in version 14 of Stata (StataCorp, LLC, College Station, Texas).
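To make the daily-cycle structure concrete, the sketch below simulates one patient's acute period with discounting. Every name, probability, and simplification here is an illustrative assumption; it collapses the event taxonomy to a single composite event and omits the annual phase and patient-specific risks of the actual PADMA model:

import random

def acute_phase_qalys(p_event_daily: float, acute_days: int = 90,
                      discount_rate: float = 0.03) -> float:
    # Toy daily-cycle microsimulation: accrue discounted quality-adjusted
    # life-days until a composite event occurs or the acute period ends.
    qalys = 0.0
    for day in range(acute_days):
        if random.random() < p_event_daily:
            break  # an event ends accrual in this simplified sketch
        qalys += (1.0 / 365.0) / ((1.0 + discount_rate) ** (day / 365.0))
    return qalys

# Averaging over many simulated patients approximates the expected QALYs
# reported for a given resumption day.
mean_qalys = sum(acute_phase_qalys(0.002) for _ in range(10_000)) / 10_000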

 

 

Synthetic Population

To generate a population reflective of the comorbidities and age distribution of the US population with NVAF, we merged relevant variables from the National Health and Nutrition Examination Survey (NHANES; 2011-2012), using multiple imputation to correct for missing variables.10 We then bootstrapped to national population estimates by age and sex to arrive at a hypothetical population of the United States.11 Because NHANES does not include atrial fibrillation, we applied sex- and age-specific prevalence rates from the AnTicoagulation and Risk Factors In Atrial Fibrillation study.12 We then calculated commonly used risk scores (CHA2DS2-VASc and HAS-BLED) for each patient and limited the population to patients with a CHA2DS2-VASc score of one or greater.13,14 The population resuming apixaban was further limited to patients whose creatinine clearance was 25 mL/min or greater, in keeping with the entry criteria of the phase 3 clinical trial on which the medication’s approval was based.15
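The risk-scoring step is fully specified by published point assignments. As one concrete example, CHA2DS2-VASc can be computed as in the sketch below; the function and argument names are ours, not the study's code:

def cha2ds2_vasc(age: int, female: bool, chf: bool, hypertension: bool,
                 diabetes: bool, prior_stroke_tia: bool, vascular_disease: bool) -> int:
    # Published CHA2DS2-VASc point assignments (see ref 13): age 65-74 scores 1,
    # age >=75 scores 2, prior stroke/TIA scores 2, all other factors score 1.
    score = 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if female else 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 2 if prior_stroke_tia else 0
    score += 1 if vascular_disease else 0
    return score

Patients scoring zero would fall outside the cohort definition above.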

To estimate patient-specific probability of rebleeding, we generated a Rockall score for each patient.16 Although the Rockall score, like all other tools used to predict rebleeding following UGIB, has limited discrimination for individual patients, it has demonstrated reasonable calibration across a threefold risk gradient.17-19 International consensus guidelines recommend the Rockall score as one of two risk prediction tools for clinical use in the management of patients with UGIB.20 In addition, because the Rockall score includes some demographic components (five of a possible 11 points), our estimates of rebleeding risk are covariant with other patient-specific risks. We assumed that the endoscopic components of the Rockall score were present in our cohort at the same frequency as in the original derivation cohort and were independent of known patient risk factors.16 For example, 441 of the 4,025 patients in the original Rockall derivation cohort presented with a systolic blood pressure less than 100 mm Hg. We therefore assumed that an independent and random 10.96% of the cohort would present with shock, which confers two points in the Rockall score.
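
As an illustration of this assumption, the sketch below assigns score components independently at fixed cohort frequencies. Only the shock frequency (441/4,025 ≈ 10.96%) and its two points come from the text above; the other frequencies are hypothetical placeholders.

```python
import random

def sampled_rockall_points(rng):
    """Score points from components assigned at fixed cohort frequencies."""
    points = 0
    if rng.random() < 441 / 4025:  # shock (SBP < 100 mm Hg): 2 points
        points += 2
    if rng.random() < 0.30:        # hypothetical frequency: high-risk diagnosis
        points += 1
    if rng.random() < 0.25:        # hypothetical frequency: stigmata of hemorrhage
        points += 2
    return points

rng = random.Random(42)
print([sampled_rockall_points(rng) for _ in range(10)])
```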

The population was replicated 60 times, with identical copies of the population resuming anticoagulation on each of days 1-60 (where day zero represents hemostasis). Intermediate data regarding our simulated population can be found in the Supplemental Appendix and in prior work.

Event Type, Severity, and Mortality

Each patient in our simulation could sustain several discrete and independent events: ischemic stroke, intracranial hemorrhage, recurrent UGIB, or extracranial major hemorrhage other than recurrent UGIB. As in prior analyses using the PADMA model, we did not consider minor hemorrhagic events.8

The probability of each event was conditional on the corresponding risk scoring system. Patient-specific probability of ischemic stroke was conditional on CHA2DS2-VASc score.21,22 Patient-specific probability of intracranial hemorrhage was conditional on HAS-BLED score, with the proportions of each considered intracranial hemorrhage subtype (intracerebral, subarachnoid, or subdural) bootstrapped from previously published data.21-24 Patient-specific probability of rebleeding was conditional on Rockall score from the combined Rockall and Vreeburg validation cohorts.17 Patient-specific probability of extracranial major hemorrhage was conditional on HAS-BLED score.21 To avoid double-counting of UGIB, we subtracted the baseline risk of UGIB from the overall rate of extracranial major hemorrhages using previously published data regarding relative frequency and a bootstrapping approach.25

Probability of Rebleeding Over Time

To estimate the decrease in rebleeding risk over time, we searched the Medline database for systematic reviews of recurrent bleeding following UGIB using the strategy detailed in the Supplemental Appendix. Using the interval rates of rebleeding we identified, we calculated implied daily rates of rebleeding at the midpoint of each interval. For example, 39.5% of rebleeding events (32 of 81) occurred within three days of hemostasis, implying a daily rate of approximately 13.2% at the interval midpoint, day two. We repeated this process for each reported time interval and fitted an exponential decay function.26 The exponential function fit these data points well, but we lacked sufficient data to test other survival functions (eg, Gompertz or lognormal). Our fitted exponential can be expressed as:

P(rebleeding) = b0 × exp(b1 × day)

where b0 = 0.1843 (SE: 0.0136) and b1 = –0.1563 (SE: 0.0188). For example, a mean of 3.9% of rebleeding episodes will occur on day 10 (0.1843 × exp(–0.1563 × 10) ≈ 0.039).
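
The fitted function is straightforward to evaluate; the minimal sketch below reproduces the worked example for day 10.

```python
import math

B0, B1 = 0.1843, -0.1563  # fitted coefficients reported above

def rebleed_fraction(day):
    """Expected share of all rebleeding episodes occurring on a given day."""
    return B0 * math.exp(B1 * day)

print(round(rebleed_fraction(10), 3))  # 0.039, matching the worked example
```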

Relative Risks of Events with Anticoagulation

For patients resuming warfarin, the probabilities of each event were adjusted based on patient-specific daily INR. All INRs were assumed to be 1.0 until the day of warfarin reinitiation, after which interpolated trajectories of postinitiation INR measurements were sampled for each patient from an earlier study of clinical warfarin initiation.27 Relative risks of ischemic stroke and hemorrhagic events were calculated based on each day’s INR.
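
A minimal sketch of this INR-conditioning step is shown below, assuming a hypothetical linear ramp to a target INR; the model instead samples interpolated trajectories from observed clinical warfarin initiations, and the ramp parameters here are illustrative only.

```python
def inr_on_day(day, resume_day, days_to_therapeutic=7, target_inr=2.5):
    """INR of 1.0 before reinitiation, then a linear ramp to the target."""
    if day < resume_day:
        return 1.0
    ramp = min((day - resume_day) / days_to_therapeutic, 1.0)
    return 1.0 + ramp * (target_inr - 1.0)

print([round(inr_on_day(d, resume_day=41), 2) for d in (40, 43, 50)])  # [1.0, 1.43, 2.5]
```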

For patients taking apixaban, we assumed that the medication would reach full therapeutic effect one day after reinitiation. Based on available evidence, we applied the relative risks of each event with apixaban compared with warfarin.25

Future Disability and Mortality

Each event in our simulation resulted in hospitalization. Length of stay was sampled for each diagnosis.28 The disutility of hospitalization was estimated based on length of stay.8 Inpatient mortality and future disability were predicted for each event as previously described.8 We assumed that recurrent episodes of UGIB conferred morbidity and mortality identical to extracranial major hemorrhages more broadly.29,30

Disutilities

We used a multiplicative model for disutility with baseline utilities conditional on age and sex.31 Each day after resumption of anticoagulation carried a disutility of 0.012 for warfarin or 0.002 for apixaban, which we assumed to be equivalent to aspirin in disutility.32 Long-term disutility and life expectancy were conditional on modified Rankin Scale (mRS) score.33,34 We discounted all QALYs to day zero using standard exponential discounting and a discount rate centered at 3%. We then computed the average discounted QALYs among the cohort of patients who resumed anticoagulation on each day following the index UGIB.
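
The sketch below shows one plausible reading of this calculation for a single therapy-day: a multiplicative daily disutility applied to a baseline utility, then exponential discounting at 3% per year. The baseline utility of 0.85 is an illustrative placeholder; the model draws age- and sex-specific utility norms instead.

```python
DISCOUNT_RATE = 0.03                                  # annual rate, centered at 3%
DAILY_DISUTILITY = {"warfarin": 0.012, "apixaban": 0.002}

def discounted_day_qaly(day, drug, base_utility=0.85):
    """Discounted QALY contribution of one day on therapy (illustrative)."""
    utility = base_utility * (1.0 - DAILY_DISUTILITY[drug])  # multiplicative model
    return (utility / 365.25) * (1.0 + DISCOUNT_RATE) ** (-day / 365.25)

print(f"{discounted_day_qaly(41, 'warfarin'):.6f}")
```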

Sensitivity Analyses and Metamodel

To assess sensitivity to continuously varying input parameters, such as the discount rate, the proportion of extracranial major hemorrhages that are upper GI bleeds, and inpatient mortality from extracranial major hemorrhage, we constructed a metamodel (a regression model of our microsimulation results).35 We tested for interactions among input parameters and dropped parameters that were not statistically significant predictors of discounted QALYs from our metamodel. We then tested for interactions between each parameter and the day of resuming anticoagulation to determine which factors might affect the optimal day of reinitiation. Finally, we used predicted marginal effects from our metamodel to assess the change in optimal day across the range of each input parameter when the other parameters were held at their medians.
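
The sketch below illustrates the metamodel idea on synthetic data: regress simulated discounted QALYs on input parameters and resumption day (with interactions and a quadratic day term), then solve for the day that maximizes predicted QALYs. The data-generating process, variable names, and specification are all illustrative placeholders; the actual metamodel was fit in Stata with its own covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "day": rng.integers(1, 61, n),      # day of reinitiation (1-60)
    "chads": rng.integers(1, 7, n),     # CHA2DS2-VASc score
    "rockall": rng.integers(2, 8, n),   # Rockall score
})
# Synthetic stand-in for simulated discounted QALYs: the peak day shifts
# earlier as stroke risk (chads) rises, mirroring the qualitative result.
df["qalys"] = (8.0 - 0.002 * (df["day"] - 40 + 2 * df["chads"]) ** 2
               + 0.01 * df["rockall"] + rng.normal(0, 0.1, n))

# Quadratic-in-day regression metamodel with day-by-score interactions.
m = smf.ols("qalys ~ day + I(day**2) + chads + rockall + day:chads + day:rockall",
            data=df).fit()

def optimal_day(chads, rockall):
    """Day maximizing predicted QALYs for a given risk profile."""
    b = m.params
    slope = b["day"] + b["day:chads"] * chads + b["day:rockall"] * rockall
    return -slope / (2 * b["I(day ** 2)"])

print(round(optimal_day(chads=3, rockall=4)))  # ~34 under this toy setup
```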

RESULTS

Resuming warfarin on day zero produced the fewest QALYs. With delay in reinitiation of anticoagulation, expected QALYs increased, peaked, and then declined for all scenarios. In our base-case simulation of warfarin, peak utility was achieved by resumption 41 days after the index UGIB. Resumption between days 32 and 51 produced greater than 99.9% of peak utility. In our base-case simulation of apixaban, peak utility was achieved by resumption 32 days after the index UGIB. Resumption between days 21 and 47 produced greater than 99.9% of peak utility. Results for warfarin and apixaban are shown in Figures 1 and 2, respectively.

The optimal day of warfarin reinitiation was most sensitive to CHA2DS2-VASc score and varied by around 11 days between a CHA2DS2-VASc score of one and a CHA2DS2-VASc score of six (the 5th and 95th percentiles, respectively) when all other parameters were held at their medians. Results were comparatively insensitive to rebleeding risk. Varying the Rockall score from two to seven (the 5th and 95th percentiles, respectively) added three days to optimal warfarin resumption. Varying other parameters from the 5th to the 95th percentile (including HAS-BLED score, sex, age, and discount rate) changed expected QALYs but did not change the optimal day of reinitiation of warfarin. The optimal day of warfarin reinitiation stratified by CHA2DS2-VASc score is shown in Table 2.

Sensitivity analyses for apixaban produced broadly similar results, but with greater sensitivity to rebleeding risk. The optimal day of reinitiation varied by 15 days over the examined range of CHA2DS2-VASc scores (Table 2) and by six days over the range of Rockall scores (Supplemental Appendix). Other input parameters, including HAS-BLED score, age, sex, and discount rate, changed expected QALYs and were significant in our metamodel but did not affect the optimal day of reinitiation. Metamodel results for both warfarin and apixaban are included in the Supplemental Appendix.

DISCUSSION

Anticoagulation is frequently prescribed for patients with NVAF, and hemorrhagic complications are common. Although anticoagulants are typically withheld following hemorrhage, scant evidence is available to inform the optimal timing of reinitiation. In this microsimulation analysis, we found that the optimal time to reinitiate anticoagulation following UGIB is around 41 days for warfarin and around 32 days for apixaban. We have further demonstrated that the optimal timing of reinitiation can vary by nearly two weeks depending on a patient’s underlying risk of stroke, and that early reinitiation is more sensitive to rebleeding risk than late reinitiation.

Prior work has shown that early reinitiation of anticoagulation leads to higher rates of recurrent hemorrhage, while failure to reinitiate anticoagulation is associated with higher rates of stroke and mortality.1-4,36 Our results add to the literature in a number of important ways. First, our model not only confirms that anticoagulation should be restarted but also suggests when this action should be taken. The competing risks of bleeding and stroke have left clinicians with little guidance; we have quantified the clinical reasoning required for the decision to resume anticoagulation. Second, by including the disutility of hospitalization and long-term disability, our model represents the complex tradeoffs between recurrent hemorrhage and (potentially disabling) stroke more accurately than would a comparison of event rates. Third, our model is conditional upon patient risk factors, allowing clinicians to personalize the timing of anticoagulation resumption. Theory would suggest that patients at higher risk of ischemic stroke benefit from earlier resumption of anticoagulation, while patients at higher risk of hemorrhage benefit from delayed reinitiation. We have quantified the extent to which patient-specific risks should change timing. Fourth, we offer a means of improving expected health outcomes that requires little more than appropriate scheduling. Current practice regarding resumption of anticoagulation varies widely: many patients never resume warfarin, and those who do resume do so after highly varied periods of time.1-5,36 We offer a means of standardizing clinical practice and improving expected patient outcomes.

Interestingly, patient-specific risk of rebleeding had little effect on our primary outcome for warfarin and a greater effect in our simulation of apixaban. Rebleeding risk, which decays roughly exponentially, appears to be sufficiently low by the time warfarin should be resumed that patient-specific hemorrhage risk factors have little impact. Meanwhile, at the shorter post-event intervals at which apixaban can be resumed, both stroke risk and patient-specific bleeding risk remain worthy considerations.

Our model is subject to several important limitations. First, our predictions of the optimal day as a function of risk scores can only be as well-calibrated as the input scoring systems. It is intuitive that patients with higher risk of rebleeding benefit from delayed reinitiation, while patients with higher risk of thromboembolic stroke benefit from earlier reinitiation. Still, clinicians seeking to operationalize competing risks through these two scores—or, indeed, any score—should be mindful of their limited calibration and shared variance. In other words, while the optimal day of reinitiation is likely in the range we have predicted and varies to the degree demonstrated here, the optimal day we have predicted for each score is likely overly precise. However, while better-calibrated prediction models would improve the accuracy of our model, we believe ours to be the best estimate of timing given available data and this approach to be the most appropriate way to personalize anticoagulation resumption.

Our simulation of apixaban carries an additional source of potential miscalibration. In the clinical trials that led to their approval, apixaban and other direct oral anticoagulants (DOACs) were compared with warfarin over longer periods than the acute period simulated in this work. Over a short period, patients treated with a more rapidly therapeutic medication (in this case, apixaban) would receive more days of effective therapy than those treated with a slower-onset medication, such as warfarin. Therefore, the relative risks experienced by patients are likely different over the time period we have simulated than those measured over longer periods (as in phase 3 clinical trials). Our results for apixaban should therefore be viewed as more limited than our estimates for warfarin. More broadly, simulation analyses are intended to predict overall outcomes that are difficult to measure. While frameworks to assess model credibility exist, no extant datasets can directly validate our predictions.37

Our findings are limited to patients with NVAF. Anticoagulants are prescribed for a variety of indications with widely varied underlying risks and benefits. Models constructed for these conditions would likely produce different timing for resumption of anticoagulation. Unfortunately, large-scale cohort studies to inform such models are lacking. Similarly, we simulated UGIB, and our results should not be generalized to populations with other types of bleeding (eg, intracranial hemorrhage). Again, cohort studies of other types of bleeding would be necessary to understand the risks of anticoagulation over time in such populations.

Higher-quality data regarding risk of rebleeding over time would improve our estimates. Our literature search identified only one systematic review that could be used to estimate the risk of recurrent UGIB over time. These data are not adequate to interrogate other forms this survival curve could take, such as Gompertz or Weibull distributions. Recurrence risk almost certainly declines over time, but how quickly it declines carries additional uncertainty.

Despite these limitations, we believe our results to be the best estimates to date of the optimal time of anticoagulation reinitiation following UGIB. Our findings could help inform clinical practice guidelines and reduce variation in care where current practice guidelines are largely silent. Given the potential ease of implementing scheduling changes, our results represent an opportunity to improve patient outcomes with little resource investment.

In conclusion, after UGIB associated with anticoagulation, our model suggests that warfarin is optimally restarted approximately six weeks following hemostasis and that apixaban is optimally restarted approximately one month following hemostasis. Modest changes to this timing based on probability of thromboembolic stroke are reasonable.

Disclosures

The authors have nothing to disclose.

Funding

The authors received no specific funding for this work.

References

1. Witt DM, Delate T, Garcia DA, et al. Risk of thromboembolism, recurrent hemorrhage, and death after warfarin therapy interruption for gastrointestinal tract bleeding. Arch Intern Med. 2012;172(19):1484-1491. doi: 10.1001/archinternmed.2012.4261. PubMed
2. Sengupta N, Feuerstein JD, Patwardhan VR, et al. The risks of thromboembolism vs recurrent gastrointestinal bleeding after interruption of systemic anticoagulation in hospitalized inpatients with gastrointestinal bleeding: a prospective study. Am J Gastroenterol. 2015;110(2):328-335. doi: 10.1038/ajg.2014.398. PubMed
3. Qureshi W, Mittal C, Patsias I, et al. Restarting anticoagulation and outcomes after major gastrointestinal bleeding in atrial fibrillation. Am J Cardiol. 2014;113(4):662-668. doi: 10.1016/j.amjcard.2013.10.044. PubMed
4. Milling TJ, Spyropoulos AC. Re-initiation of dabigatran and direct factor Xa antagonists after a major bleed. Am J Emerg Med. 2016;34(11):19-25. doi: 10.1016/j.ajem.2016.09.049. PubMed
5. Brotman DJ, Jaffer AK. Resuming anticoagulation in the first week following gastrointestinal tract hemorrhage. Arch Intern Med. 2012;172(19):1492-1493. doi: 10.1001/archinternmed.2012.4309. PubMed
6. Barnes GD, Lucas E, Alexander GC, Goldberger ZD. National trends in ambulatory oral anticoagulant use. Am J Med. 2015;128(12):1300-5. doi: 10.1016/j.amjmed.2015.05.044. PubMed
7. Noseworthy PA, Yao X, Abraham NS, Sangaralingham LR, McBane RD, Shah ND. Direct comparison of dabigatran, rivaroxaban, and apixaban for effectiveness and safety in nonvalvular atrial fibrillation. Chest. 2016;150(6):1302-1312. doi: 10.1016/j.chest.2016.07.013. PubMed
8. Pappas MA, Barnes GD, Vijan S. Personalizing bridging anticoagulation in patients with nonvalvular atrial fibrillation—a microsimulation analysis. J Gen Intern Med. 2017;32(4):464-470. doi: 10.1007/s11606-016-3932-7. PubMed
9. Pappas MA, Vijan S, Rothberg MB, Singer DE. Reducing age bias in decision analyses of anticoagulation for patients with nonvalvular atrial fibrillation – a microsimulation study. PloS One. 2018;13(7):e0199593. doi: 10.1371/journal.pone.0199593. PubMed
10. National Center for Health Statistics. National Health and Nutrition Examination Survey. https://www.cdc.gov/nchs/nhanes/about_nhanes.htm. Accessed August 30, 2018.
11. United States Census Bureau. Age and sex composition in the United States: 2014. https://www.census.gov/data/tables/2014/demo/age-and-sex/2014-age-sex-composition.html. Accessed August 30, 2018.
12. Go AS, Hylek EM, Phillips KA, et al. Prevalence of diagnosed atrial fibrillation in adults: national implications for rhythm management and stroke prevention: the AnTicoagulation and Risk Factors in Atrial Fibrillation (ATRIA) study. JAMA. 2001;285(18):2370-2375. doi: 10.1001/jama.285.18.2370. PubMed
13. Lip GYH, Nieuwlaat R, Pisters R, Lane DA, Crijns HJGM. Refining clinical risk stratification for predicting stroke and thromboembolism in atrial fibrillation using a novel risk factor-based approach: the euro heart survey on atrial fibrillation. Chest. 2010;137(2):263-272. doi: 10.1378/chest.09-1584. PubMed
14. Pisters R, Lane DA, Nieuwlaat R, de Vos CB, Crijns HJGM, Lip GYH. A novel user-friendly score (HAS-BLED) to assess 1-year risk of major bleeding in patients with atrial fibrillation. Chest. 2010;138(5):1093-1100. doi: 10.1378/chest.10-0134. PubMed
15. Granger CB, Alexander JH, McMurray JJV, et al. Apixaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2011;365(11):981-992. doi: 10.1056/NEJMoa1107039. 
16. Rockall TA, Logan RF, Devlin HB, Northfield TC. Risk assessment after acute upper gastrointestinal haemorrhage. Gut. 1996;38(3):316-321. doi: 10.1136/gut.38.3.316. PubMed
17. Vreeburg EM, Terwee CB, Snel P, et al. Validation of the Rockall risk scoring system in upper gastrointestinal bleeding. Gut. 1999;44(3):331-335. doi: 10.1136/gut.44.3.331. PubMed
18. Enns RA, Gagnon YM, Barkun AN, et al. Validation of the Rockall scoring system for outcomes from non-variceal upper gastrointestinal bleeding in a Canadian setting. World J Gastroenterol. 2006;12(48):7779-7785. doi: 10.3748/wjg.v12.i48.7779. PubMed
19. Stanley AJ, Laine L, Dalton HR, et al. Comparison of risk scoring systems for patients presenting with upper gastrointestinal bleeding: international multicentre prospective study. BMJ. 2017;356:i6432. doi: 10.1136/bmj.i6432. PubMed
20. Barkun AN, Bardou M, Kuipers EJ, et al. International consensus recommendations on the management of patients with nonvariceal upper gastrointestinal bleeding. Ann Intern Med. 2010;152(2):101-113. doi: 10.7326/0003-4819-152-2-201001190-00009. PubMed
21. Friberg L, Rosenqvist M, Lip GYH. Evaluation of risk stratification schemes for ischaemic stroke and bleeding in 182 678 patients with atrial fibrillation: the Swedish atrial fibrillation cohort study. Eur Heart J. 2012;33(12):1500-1510. doi: 10.1093/eurheartj/ehr488. PubMed
22. Friberg L, Rosenqvist M, Lip GYH. Net clinical benefit of warfarin in patients with atrial fibrillation: a report from the Swedish atrial fibrillation cohort study. Circulation. 2012;125(19):2298-2307. doi: 10.1161/CIRCULATIONAHA.111.055079. PubMed
23. Hart RG, Diener HC, Yang S, et al. Intracranial hemorrhage in atrial fibrillation patients during anticoagulation with warfarin or dabigatran: the RE-LY trial. Stroke. 2012;43(6):1511-1517. doi: 10.1161/STROKEAHA.112.650614. PubMed
24. Hankey GJ, Stevens SR, Piccini JP, et al. Intracranial hemorrhage among patients with atrial fibrillation anticoagulated with warfarin or rivaroxaban: the rivaroxaban once daily, oral, direct factor Xa inhibition compared with vitamin K antagonism for prevention of stroke and embolism trial in atrial fibrillation. Stroke. 2014;45(5):1304-1312. doi: 10.1161/STROKEAHA.113.004506. PubMed
25. Eikelboom JW, Wallentin L, Connolly SJ, et al. Risk of bleeding with 2 doses of dabigatran compared with warfarin in older and younger patients with atrial fibrillation: an analysis of the randomized evaluation of long-term anticoagulant therapy (RE-LY trial). Circulation. 2011;123(21):2363-2372. doi: 10.1161/CIRCULATIONAHA.110.004747. PubMed
26. El Ouali S, Barkun A, Martel M, Maggio D. Timing of rebleeding in high-risk peptic ulcer bleeding after successful hemostasis: a systematic review. Can J Gastroenterol Hepatol. 2014;28(10):543-548. doi: 10.1016/S0016-5085(14)60738-1. PubMed
27. Kimmel SE, French B, Kasner SE, et al. A pharmacogenetic versus a clinical algorithm for warfarin dosing. N Engl J Med. 2013;369(24):2283-2293. doi: 10.1056/NEJMoa1310669. PubMed
28. Healthcare Cost and Utilization Project (HCUP), Agency for Healthcare Research and Quality. HCUP Databases. https://www.hcup-us.ahrq.gov/nisoverview.jsp. Accessed August 31, 2018.
29. Guerrouij M, Uppal CS, Alklabi A, Douketis JD. The clinical impact of bleeding during oral anticoagulant therapy: assessment of morbidity, mortality and post-bleed anticoagulant management. J Thromb Thrombolysis. 2011;31(4):419-423. doi: 10.1007/s11239-010-0536-7. PubMed
30. Fang MC, Go AS, Chang Y, et al. Death and disability from warfarin-associated intracranial and extracranial hemorrhages. Am J Med. 2007;120(8):700-705. doi: 10.1016/j.amjmed.2006.07.034. PubMed
31. Guertin JR, Feeny D, Tarride JE. Age- and sex-specific Canadian utility norms, based on the 2013-2014 Canadian Community Health Survey. CMAJ. 2018;190(6):E155-E161. doi: 10.1503/cmaj.170317. PubMed
32. Gage BF, Cardinalli AB, Albers GW, Owens DK. Cost-effectiveness of warfarin and aspirin for prophylaxis of stroke in patients with nonvalvular atrial fibrillation. JAMA. 1995;274(23):1839-1845. doi: 10.1001/jama.1995.03530230025025. PubMed
33. Fang MC, Go AS, Chang Y, et al. Long-term survival after ischemic stroke in patients with atrial fibrillation. Neurology. 2014;82(12):1033-1037. doi: 10.1212/WNL.0000000000000248. PubMed
34. Hong KS, Saver JL. Quantifying the value of stroke disability outcomes: WHO global burden of disease project disability weights for each level of the modified Rankin scale. Stroke. 2009;40(12):3828-3833. doi: 10.1161/STROKEAHA.109.561365. PubMed
35. Jalal H, Dowd B, Sainfort F, Kuntz KM. Linear regression metamodeling as a tool to summarize and present simulation model results. Med Decis Mak. 2013;33(7):880-890. doi: 10.1177/0272989X13492014. PubMed
36. Staerk L, Lip GYH, Olesen JB, et al. Stroke and recurrent haemorrhage associated with antithrombotic treatment after gastrointestinal bleeding in patients with atrial fibrillation: nationwide cohort study. BMJ. 2015;351:h5876. doi: 10.1136/bmj.h5876. PubMed
37. Kopec JA, Finès P, Manuel DG, et al. Validation of population-based disease simulation models: a review of concepts and methods. BMC Public Health. 2010;10(1):710. doi: 10.1186/1471-2458-10-710. PubMed
38. Smith EE, Shobha N, Dai D, et al. Risk score for in-hospital ischemic stroke mortality derived and validated within the Get With The Guidelines-Stroke Program. Circulation. 2010;122(15):1496-1504. doi: 10.1161/CIRCULATIONAHA.109.932822. PubMed
39. Smith EE, Shobha N, Dai D, et al. A risk score for in-hospital death in patients admitted with ischemic or hemorrhagic stroke. J Am Heart Assoc. 2013;2(1):e005207. doi: 10.1161/JAHA.112.005207. PubMed
40. Busl KM, Prabhakaran S. Predictors of mortality in nontraumatic subdural hematoma. J Neurosurg. 2013;119(5):1296-1301. doi: 10.3171/2013.4.JNS122236. PubMed
41. Murphy SL, Kochanek KD, Xu J, Heron M. Deaths: final data for 2012. Natl Vital Stat Rep. 2015;63(9):1-117. http://www.ncbi.nlm.nih.gov/pubmed/26759855. Accessed August 31, 2018. 
42. Dachs RJ, Burton JH, Joslin J. A user’s guide to the NINDS rt-PA stroke trial database. PLOS Med. 2008;5(5):e113. doi: 10.1371/journal.pmed.0050113. PubMed
43. Ashburner JM, Go AS, Reynolds K, et al. Comparison of frequency and outcome of major gastrointestinal hemorrhage in patients with atrial fibrillation on versus not receiving warfarin therapy (from the ATRIA and ATRIA-CVRN cohorts). Am J Cardiol. 2015;115(1):40-46. doi: 10.1016/j.amjcard.2014.10.006. PubMed
44. Weinstein MC, Siegel JE, Gold MR, Kamlet MS, Russell LB. Recommendations of the panel on cost-effectiveness in health and medicine. JAMA. 1996;276(15):1253-1258. doi: 10.1001/jama.1996.03540150055031. PubMed

Article PDF
Issue
Journal of Hospital Medicine 14(7)
Publications
Topics
Page Number
394-400. Published online first April 8, 2019.
Sections
Files
Files
Article PDF
Article PDF
Related Articles

Anticoagulation is commonly used in the management of atrial fibrillation to reduce the risk of ischemic stroke. Warfarin and other anticoagulants increase the risk of hemorrhagic complications, including upper gastrointestinal bleeding (UGIB). Following UGIB, management of anticoagulation is highly variable. Many patients permanently discontinue anticoagulation, while others continue without interruption.1-4 Among patients who resume warfarin, different cohorts have measured median times to resumption ranging from four days to 50 days.1-3 Outcomes data are sparse, and clinical guidelines offer little direction.5

Following UGIB, the balance between the risks and benefits of anticoagulation changes over time. Rebleeding risk is highest immediately after the event and declines quickly; therefore, rapid resumption of anticoagulation causes patient harm.3 Meanwhile, the risk of stroke remains constant, and delay in resumption of anticoagulation is associated with increased risk of stroke and death.1 At some point in time following the initial UGIB, the expected harm from bleeding would equal the expected harm from stroke. This time point would represent the optimal time to restart anticoagulation.

Trial data are unlikely to identify the optimal time for restarting anticoagulation. A randomized trial comparing discrete reinitiation times (eg, two weeks vs six weeks) may easily miss the optimal timing. Moreover, because the daily probability of thromboembolic events is low, large numbers of patients would be required to power such a study. In addition, a number of oral anticoagulants are now approved for prevention of thromboembolic stroke in atrial fibrillation, and each drug may have different optimal timing.

In contrast to randomized trials that would be impracticable for addressing this clinical issue, microsimulation modeling can provide granular information regarding the optimal time to restart anticoagulation. Herein, we set out to estimate the expected benefit of reinitiation of warfarin, the most commonly used oral anticoagulant,6 or apixaban, the direct oral anticoagulant with the most favorable risk profile,7 as a function of days after UGIB.

METHODS

We previously described a microsimulation model of anticoagulation among patients with nonvalvular atrial fibrillation (NVAF; hereafter, we refer to this model as the Personalized Anticoagulation Decision-Making Assistance model, or PADMA).8,9 For this study, we extended this model to incorporate the probability of rebleeding following UGIB and include apixaban as an alternative to warfarin. This model begins with a synthetic population following UGIB, the members of which are at varying risk for thromboembolism, recurrent UGIB, and other hemorrhages. For each patient, the model simulates a number of possible events (eg, thromboembolic stroke, intracranial hemorrhage, rebleeding, and other major extracranial hemorrhages) on each day of an acute period of 90 days after hemostasis. Patients who survive until the end of the acute period enter a simulation with annual, rather than daily, cycles. Our model then estimates total quality-adjusted life-years (QALYs) for each patient, discounted to the present. We report the average discounted QALYs produced by the model for the same population if all individuals in our input population were to resume either warfarin or apixaban on a specific day. Input parameters and ranges are summarized in Table 1, a simplified schematic of our model is shown in the Supplemental Appendix, and additional details regarding model structure and assumptions can be found in earlier work.8,9 We simulated from a health system perspective over a lifelong time horizon. All analyses were performed in version 14 of Stata (StataCorp, LLC, College Station, Texas).

 

 

Synthetic Population

To generate a population reflective of the comorbidities and age distribution of the US population with NVAF, we merged relevant variables from the National Health and Nutrition Examination Survey (NHANES; 2011-2012), using multiple imputation to correct for missing variables.10 We then bootstrapped to national population estimates by age and sex to arrive at a hypothetical population of the United States.11 Because NHANES does not include atrial fibrillation, we applied sex- and age-specific prevalence rates from the AnTicoagulation and Risk Factors In Atrial Fibrillation study.12 We then calculated commonly used risk scores (CHA2DS2-Vasc and HAS-BLED) for each patient and limited the population to patients with a CHA2DS2-Vasc score of one or greater.13,14 The population resuming apixaban was further limited to patients whose creatinine clearance was 25 mL/min or greater in keeping with the entry criteria in the phase 3 clinical trial on which the medication’s approval was based.15

To estimate patient-specific probability of rebleeding, we generated a Rockall score for each patient.16 Although the discrimination of the Rockall score is limited for individual patients, as with all other tools used to predict rebleeding following UGIB, the Rockall score has demonstrated reasonable calibration across a threefold risk gradient.17-19 International consensus guidelines recommend the Rockall score as one of two risk prediction tools for clinical use in the management of patients with UGIB.20 In addition, because the Rockall score includes some demographic components (five of a possible 11 points), our estimates of rebleeding risk are covariant with other patient-specific risks. We assumed that the endoscopic components of the Rockall score were present in our cohort at the same frequency as in the original derivation and are independent of known patient risk factors.16 For example, 441 out of 4,025 patients in the original Rockall derivation cohort presented with a systolic blood pressure less than 100 mm Hg. We assumed that an independent and random 10.96% of the cohort would present with shock, which confers two points in the Rockall score.

The population was replicated 60 times, with identical copies of the population resuming anticoagulation on each of days 1-60 (where day zero represents hemostasis). Intermediate data regarding our simulated population can be found in the Supplemental Appendix and in prior work.

Event Type, Severity, and Mortality

Each patient in our simulation could sustain several discrete and independent events: ischemic stroke, intracranial hemorrhage, recurrent UGIB, or extracranial major hemorrhage other than recurrent UGIB. As in prior analyses using the PADMA model, we did not consider minor hemorrhagic events.8

The probability of each event was conditional on the corresponding risk scoring system. Patient-specific probability of ischemic stroke was conditional on CHA2DS2-Vasc score.21,22 Patient-specific probability of intracranial hemorrhage was conditional on HAS-BLED score, with the proportions of intracranial hemorrhage of each considered subtype (intracerebral, subarachnoid, or subdural) bootstrapped from previously-published data.21-24 Patient-specific probability of rebleeding was conditional on Rockall score from the combined Rockall and Vreeburg validation cohorts.17 Patient-specific probability of extracranial major hemorrhage was conditional on HAS-BLED score.21 To avoid double-counting of UGIB, we subtracted the baseline risk of UGIB from the overall rate of extracranial major hemorrhages using previously-published data regarding relative frequency and a bootstrapping approach.25

 

 

Probability of Rebleeding Over Time

To estimate the decrease in rebleeding risk over time, we searched the Medline database for systematic reviews of recurrent bleeding following UGIB using the strategy detailed in the Supplemental Appendix. Using the interval rates of rebleeding we identified, we calculated implied daily rates of rebleeding at the midpoint of each interval. For example, 39.5% of rebleeding events occurred within three days of hemostasis, implying a daily rate of approximately 13.2% on day two (32 of 81 events over a three-day period). We repeated this process to estimate daily rates at the midpoint of each reported time interval and fitted an exponential decay function.26 Our exponential fitted these datapoints quite well, but we lacked sufficient data to test other survival functions (eg, Gompertz, lognormal, etc.). Our fitted exponential can be expressed as:

P rebleeding = b 0 *exp(b 1 *day)

where b0 = 0.1843 (SE: 0.0136) and b1 = –0.1563 (SE: 0.0188). For example, a mean of 3.9% of rebleeding episodes will occur on day 10 (0.1843 *exp(–0.1563 *10)).

Relative Risks of Events with Anticoagulation

For patients resuming warfarin, the probabilities of each event were adjusted based on patient-specific daily INR. All INRs were assumed to be 1.0 until the day of warfarin reinitiation, after which interpolated trajectories of postinitiation INR measurements were sampled for each patient from an earlier study of clinical warfarin initiation.27 Relative risks of ischemic stroke and hemorrhagic events were calculated based on each day’s INR.

For patients taking apixaban, we assumed that the medication would reach full therapeutic effect one day after reinitiation. Based on available evidence, we applied the relative risks of each event with apixaban compared with warfarin.25

Future Disability and Mortality

Each event in our simulation resulted in hospitalization. Length of stay was sampled for each diagnosis.28 The disutility of hospitalization was estimated based on length of stay.8 Inpatient mortality and future disability were predicted for each event as previously described.8 We assumed that recurrent episodes of UGIB conferred morbidity and mortality identical to extracranial major hemorrhages more broadly.29,30

 

 

Disutilities

We used a multiplicative model for disutility with baseline utilities conditional on age and sex.31 Each day after resumption of anticoagulation carried a disutility of 0.012 for warfarin or 0.002 for apixaban, which we assumed to be equivalent to aspirin in disutility.32 Long-term disutility and life expectancy were conditional on modified Rankin Score (mRS).33,34 We discounted all QALYs to day zero using standard exponential discounting and a discount rate centered at 3%. We then computed the average discounted QALYs among the cohort of patients that resumed anticoagulation on each day following the index UGIB.

Sensitivity Analyses and Metamodel

To assess sensitivity to continuously varying input parameters, such as discount rate, the proportion of extracranial major hemorrhages that are upper GI bleeds, and inpatient mortality from extracranial major hemorrhage, we constructed a metamodel (a regression model of our microsimulation results).35 We tested for interactions among input parameters and dropped parameters that were not statistically significant predictors of discounted QALYs from our metamodel. We then tested for interactions between each parameter and day resuming anticoagulation to determine which factors may impact the optimal day of reinitiation. Finally, we used predicted marginal effects from our metamodel to assess the change in optimal day across the ranges of each input parameter when other parameters were held at their medians.

RESULTS

Resuming warfarin on day zero produced the fewest QALYs. With delay in reinitiation of anticoagulation, expected QALYs increased, peaked, and then declined for all scenarios. In our base-case simulation of warfarin, peak utility was achieved by resumption 41 days after the index UGIB. Resumption between days 32 and 51 produced greater than 99.9% of peak utility. In our base-case simulation of apixaban, peak utility was achieved by resumption 32 days after the index UGIB. Resumption between days 21 and 47 produced greater than 99.9% of peak utility. Results for warfarin and apixaban are shown in Figures 1 and 2, respectively.

The optimal day of warfarin reinitiation was most sensitive to CHA2DS2-Vasc scores and varied by around 11 days between a CHA2DS2-Vasc score of one and a CHA2DS2-Vasc score of six (the 5th and 95th percentiles, respectively) when all other parameters are held at their medians. Results were comparatively insensitive to rebleeding risk. Varying Rockall score from two to seven (the 5th and 95th percentiles, respectively) added three days to optimal warfarin resumption. Varying other parameters from the 5th to the 95th percentile (including HAS-BLED score, sex, age, and discount rate) changed expected QALYs but did not change the optimal day of reinitiation of warfarin. Optimal day of reinitiation for warfarin stratified by CHA2DS2-Vasc score is shown in Table 2.



Sensitivity analyses for apixaban produced broadly similar results, but with greater sensitivity to rebleeding risk. Optimal day of reinitiation varied by 15 days over the examined range of CHA2DS2-Vasc scores (Table 2) and by six days over the range of Rockall scores (Supplemental Appendix). Other input parameters, including HAS-BLED score, age, sex, and discount rate, changed expected QALYs and were significant in our metamodel but did not affect the optimal day of reinitiation. Metamodel results for both warfarin and apixaban are included in the Supplemental Appendix.

 

 

DISCUSSION

Anticoagulation is frequently prescribed for patients with NVAF, and hemorrhagic complications are common. Although anticoagulants are withheld following hemorrhages, scant evidence to inform the optimal timing of reinitiation is available. In this microsimulation analysis, we found that the optimal time to reinitiate anticoagulation following UGIB is around 41 days for warfarin and around 32 days for apixaban. We have further demonstrated that the optimal timing of reinitiation can vary by nearly two weeks, depending on a patient’s underlying risk of stroke, and that early reinitiation is more sensitive to rebleeding risk than late reinitiation.

Prior work has shown that early reinitiation of anticoagulation leads to higher rates of recurrent hemorrhage while failure to reinitiate anticoagulation is associated with higher rates of stroke and mortality.1-4,36 Our results add to the literature in a number of important ways. First, our model not only confirms that anticoagulation should be restarted but also suggests when this action should be taken. The competing risks of bleeding and stroke have left clinicians with little guidance; we have quantified the clinical reasoning required for the decision to resume anticoagulation. Second, by including the disutility of hospitalization and long-term disability, our model more accurately represents the complex tradeoffs between recurrent hemorrhage and (potentially disabling) stroke than would a comparison of event rates. Third, our model is conditional upon patient risk factors, allowing clinicians to personalize the timing of anticoagulation resumption. Theory would suggest that patients at higher risk of ischemic stroke benefit from earlier resumption of anticoagulation, while patients at higher risk of hemorrhage benefit from delayed reinitiation. We have quantified the extent to which patient-specific risks should change timing. Fourth, we offer a means of improving expected health outcomes that requires little more than appropriate scheduling. Current practice regarding resuming anticoagulation is widely variable. Many patients never resume warfarin, and those that do resume do so after highly varied periods of time.1-5,36 We offer a means of standardizing clinical practice and improving expected patient outcomes.



Interestingly, patient-specific risk of rebleeding had little effect on our primary outcome for warfarin, and a greater effect in our simulation of apixaban. It would seem that rebleeding risk, which decreases roughly exponentially, is sufficiently low by the time period at which warfarin should be resumed that patient-specific hemorrhage risk factors have little impact. Meanwhile, at the shorter post-event intervals at which apixaban can be resumed, both stroke risk and patient-specific bleeding risk are worthy considerations.

Our model is subject to several important limitations. First, our predictions of the optimal day as a function of risk scores can only be as well-calibrated as the input scoring systems. It is intuitive that patients with higher risk of rebleeding benefit from delayed reinitiation, while patients with higher risk of thromboembolic stroke benefit from earlier reinitiation. Still, clinicians seeking to operationalize competing risks through these two scores—or, indeed, any score—should be mindful of their limited calibration and shared variance. In other words, while the optimal day of reinitiation is likely in the range we have predicted and varies to the degree demonstrated here, the optimal day we have predicted for each score is likely overly precise. However, while better-calibrated prediction models would improve the accuracy of our model, we believe ours to be the best estimate of timing given available data and this approach to be the most appropriate way to personalize anticoagulation resumption.

Our simulation of apixaban carries an additional source of potential miscalibration. In the clinical trials that led to their approval, apixaban and other direct oral anticoagulants (DOACs) were compared with warfarin over longer periods of time than the acute period simulated in this work. Over a short period of time, patients treated with more rapidly therapeutic medications (in this case, apixaban) would receive more days of effective therapy compared with a slower-onset medication, such as warfarin. Therefore, the relative risks experienced by patients are likely different over the time period we have simulated compared with those measured over longer periods of time (as in phase 3 clinical trials). Our results for apixaban should be viewed as more limited than our estimates for warfarin. More broadly, simulation analyses are intended to predict overall outcomes that are difficult to measure. While other frameworks to assess model credibility exist, the fact remains that no extant datasets can directly validate our predictions.37

Our findings are limited to patients with NVAF. Anticoagulants are prescribed for a variety of indications with widely varied underlying risks and benefits. Models constructed for these conditions would likely produce different timing for resumption of anticoagulation. Unfortunately, large scale cohort studies to inform such models are lacking. Similarly, we simulated UGIB, and our results should not be generalized to populations with other types of bleeding (eg, intracranial hemorrhage). Again, cohort studies of other types of bleeding would be necessary to understand the risks of anticoagulation over time in such populations.

Higher-quality data regarding risk of rebleeding over time would improve our estimates. Our literature search identified only one systematic review that could be used to estimate the risk of recurrent UGIB over time. These data are not adequate to interrogate other forms this survival curve could take, such as Gompertz or Weibull distributions. Recurrence risk almost certainly declines over time, but how quickly it declines carries additional uncertainty.

Despite these limitations, we believe our results to be the best estimates to date of the optimal time of anticoagulation reinitiation following UGIB. Our findings could help inform clinical practice guidelines and reduce variation in care where current practice guidelines are largely silent. Given the potential ease of implementing scheduling changes, our results represent an opportunity to improve patient outcomes with little resource investment.

In conclusion, after UGIB associated with anticoagulation, our model suggests that warfarin is optimally restarted approximately six weeks following hemostasis and that apixaban is optimally restarted approximately one month following hemostasis. Modest changes to this timing based on probability of thromboembolic stroke are reasonable.

 

 

Disclosures

The authors have nothing to disclose.

Funding

The authors received no specific funding for this work.

Anticoagulation is commonly used in the management of atrial fibrillation to reduce the risk of ischemic stroke. Warfarin and other anticoagulants increase the risk of hemorrhagic complications, including upper gastrointestinal bleeding (UGIB). Following UGIB, management of anticoagulation is highly variable. Many patients permanently discontinue anticoagulation, while others continue without interruption.1-4 Among patients who resume warfarin, different cohorts have measured median times to resumption ranging from four days to 50 days.1-3 Outcomes data are sparse, and clinical guidelines offer little direction.5

Following UGIB, the balance between the risks and benefits of anticoagulation changes over time. Rebleeding risk is highest immediately after the event and declines quickly; therefore, rapid resumption of anticoagulation causes patient harm.3 Meanwhile, the risk of stroke remains constant, and delay in resumption of anticoagulation is associated with increased risk of stroke and death.1 At some point in time following the initial UGIB, the expected harm from bleeding would equal the expected harm from stroke. This time point would represent the optimal time to restart anticoagulation.

Trial data are unlikely to identify the optimal time for restarting anticoagulation. A randomized trial comparing discrete reinitiation times (eg, two weeks vs six weeks) may easily miss the optimal timing. Moreover, because the daily probability of thromboembolic events is low, large numbers of patients would be required to power such a study. In addition, a number of oral anticoagulants are now approved for prevention of thromboembolic stroke in atrial fibrillation, and each drug may have different optimal timing.

In contrast to randomized trials that would be impracticable for addressing this clinical issue, microsimulation modeling can provide granular information regarding the optimal time to restart anticoagulation. Herein, we set out to estimate the expected benefit of reinitiation of warfarin, the most commonly used oral anticoagulant,6 or apixaban, the direct oral anticoagulant with the most favorable risk profile,7 as a function of days after UGIB.

METHODS

We previously described a microsimulation model of anticoagulation among patients with nonvalvular atrial fibrillation (NVAF; hereafter, we refer to this model as the Personalized Anticoagulation Decision-Making Assistance model, or PADMA).8,9 For this study, we extended this model to incorporate the probability of rebleeding following UGIB and include apixaban as an alternative to warfarin. This model begins with a synthetic population following UGIB, the members of which are at varying risk for thromboembolism, recurrent UGIB, and other hemorrhages. For each patient, the model simulates a number of possible events (eg, thromboembolic stroke, intracranial hemorrhage, rebleeding, and other major extracranial hemorrhages) on each day of an acute period of 90 days after hemostasis. Patients who survive until the end of the acute period enter a simulation with annual, rather than daily, cycles. Our model then estimates total quality-adjusted life-years (QALYs) for each patient, discounted to the present. We report the average discounted QALYs produced by the model for the same population if all individuals in our input population were to resume either warfarin or apixaban on a specific day. Input parameters and ranges are summarized in Table 1, a simplified schematic of our model is shown in the Supplemental Appendix, and additional details regarding model structure and assumptions can be found in earlier work.8,9 We simulated from a health system perspective over a lifelong time horizon. All analyses were performed in version 14 of Stata (StataCorp, LLC, College Station, Texas).

 

 

Synthetic Population

To generate a population reflective of the comorbidities and age distribution of the US population with NVAF, we merged relevant variables from the National Health and Nutrition Examination Survey (NHANES; 2011-2012), using multiple imputation to correct for missing variables.10 We then bootstrapped to national population estimates by age and sex to arrive at a hypothetical population of the United States.11 Because NHANES does not include atrial fibrillation, we applied sex- and age-specific prevalence rates from the AnTicoagulation and Risk Factors In Atrial Fibrillation study.12 We then calculated commonly used risk scores (CHA2DS2-Vasc and HAS-BLED) for each patient and limited the population to patients with a CHA2DS2-Vasc score of one or greater.13,14 The population resuming apixaban was further limited to patients whose creatinine clearance was 25 mL/min or greater in keeping with the entry criteria in the phase 3 clinical trial on which the medication’s approval was based.15

To estimate patient-specific probability of rebleeding, we generated a Rockall score for each patient.16 Although the discrimination of the Rockall score is limited for individual patients, as with all other tools used to predict rebleeding following UGIB, the Rockall score has demonstrated reasonable calibration across a threefold risk gradient.17-19 International consensus guidelines recommend the Rockall score as one of two risk prediction tools for clinical use in the management of patients with UGIB.20 In addition, because the Rockall score includes some demographic components (five of a possible 11 points), our estimates of rebleeding risk are covariant with other patient-specific risks. We assumed that the endoscopic components of the Rockall score were present in our cohort at the same frequency as in the original derivation and are independent of known patient risk factors.16 For example, 441 out of 4,025 patients in the original Rockall derivation cohort presented with a systolic blood pressure less than 100 mm Hg. We assumed that an independent and random 10.96% of the cohort would present with shock, which confers two points in the Rockall score.

The population was replicated 60 times, with identical copies of the population resuming anticoagulation on each of days 1-60 (where day zero represents hemostasis). Intermediate data regarding our simulated population can be found in the Supplemental Appendix and in prior work.

Event Type, Severity, and Mortality

Each patient in our simulation could sustain several discrete and independent events: ischemic stroke, intracranial hemorrhage, recurrent UGIB, or extracranial major hemorrhage other than recurrent UGIB. As in prior analyses using the PADMA model, we did not consider minor hemorrhagic events.8

The probability of each event was conditional on the corresponding risk scoring system. Patient-specific probability of ischemic stroke was conditional on CHA2DS2-Vasc score.21,22 Patient-specific probability of intracranial hemorrhage was conditional on HAS-BLED score, with the proportions of intracranial hemorrhage of each considered subtype (intracerebral, subarachnoid, or subdural) bootstrapped from previously-published data.21-24 Patient-specific probability of rebleeding was conditional on Rockall score from the combined Rockall and Vreeburg validation cohorts.17 Patient-specific probability of extracranial major hemorrhage was conditional on HAS-BLED score.21 To avoid double-counting of UGIB, we subtracted the baseline risk of UGIB from the overall rate of extracranial major hemorrhages using previously-published data regarding relative frequency and a bootstrapping approach.25

 

 

Probability of Rebleeding Over Time

To estimate the decrease in rebleeding risk over time, we searched the Medline database for systematic reviews of recurrent bleeding following UGIB using the strategy detailed in the Supplemental Appendix. Using the interval rates of rebleeding we identified, we calculated implied daily rates of rebleeding at the midpoint of each interval. For example, 39.5% of rebleeding events occurred within three days of hemostasis, implying a daily rate of approximately 13.2% on day two (32 of 81 events over a three-day period). We repeated this process to estimate daily rates at the midpoint of each reported time interval and fitted an exponential decay function.26 Our exponential fitted these datapoints quite well, but we lacked sufficient data to test other survival functions (eg, Gompertz, lognormal, etc.). Our fitted exponential can be expressed as:

P rebleeding = b 0 *exp(b 1 *day)

where b0 = 0.1843 (SE: 0.0136) and b1 = –0.1563 (SE: 0.0188). For example, a mean of 3.9% of rebleeding episodes will occur on day 10 (0.1843 *exp(–0.1563 *10)).

Relative Risks of Events with Anticoagulation

For patients resuming warfarin, the probabilities of each event were adjusted based on patient-specific daily INR. All INRs were assumed to be 1.0 until the day of warfarin reinitiation, after which interpolated trajectories of postinitiation INR measurements were sampled for each patient from an earlier study of clinical warfarin initiation.27 Relative risks of ischemic stroke and hemorrhagic events were calculated based on each day’s INR.

For patients taking apixaban, we assumed that the medication would reach full therapeutic effect one day after reinitiation. Based on available evidence, we applied the relative risks of each event with apixaban compared with warfarin.25

Future Disability and Mortality

Each event in our simulation resulted in hospitalization. Length of stay was sampled for each diagnosis.28 The disutility of hospitalization was estimated based on length of stay.8 Inpatient mortality and future disability were predicted for each event as previously described.8 We assumed that recurrent episodes of UGIB conferred morbidity and mortality identical to extracranial major hemorrhages more broadly.29,30

 

 

Disutilities

We used a multiplicative model for disutility with baseline utilities conditional on age and sex.31 Each day after resumption of anticoagulation carried a disutility of 0.012 for warfarin or 0.002 for apixaban, which we assumed to be equivalent to aspirin in disutility.32 Long-term disutility and life expectancy were conditional on modified Rankin Score (mRS).33,34 We discounted all QALYs to day zero using standard exponential discounting and a discount rate centered at 3%. We then computed the average discounted QALYs among the cohort of patients that resumed anticoagulation on each day following the index UGIB.

Sensitivity Analyses and Metamodel

To assess sensitivity to continuously varying input parameters, such as discount rate, the proportion of extracranial major hemorrhages that are upper GI bleeds, and inpatient mortality from extracranial major hemorrhage, we constructed a metamodel (a regression model of our microsimulation results).35 We tested for interactions among input parameters and dropped parameters that were not statistically significant predictors of discounted QALYs from our metamodel. We then tested for interactions between each parameter and day resuming anticoagulation to determine which factors may impact the optimal day of reinitiation. Finally, we used predicted marginal effects from our metamodel to assess the change in optimal day across the ranges of each input parameter when other parameters were held at their medians.

RESULTS

Resuming warfarin on day zero produced the fewest QALYs. With delay in reinitiation of anticoagulation, expected QALYs increased, peaked, and then declined for all scenarios. In our base-case simulation of warfarin, peak utility was achieved by resumption 41 days after the index UGIB. Resumption between days 32 and 51 produced greater than 99.9% of peak utility. In our base-case simulation of apixaban, peak utility was achieved by resumption 32 days after the index UGIB. Resumption between days 21 and 47 produced greater than 99.9% of peak utility. Results for warfarin and apixaban are shown in Figures 1 and 2, respectively.

The optimal day of warfarin reinitiation was most sensitive to CHA2DS2-VASc score and varied by around 11 days between a CHA2DS2-VASc score of one and a score of six (the 5th and 95th percentiles, respectively) when all other parameters were held at their medians. Results were comparatively insensitive to rebleeding risk. Varying the Rockall score from two to seven (the 5th and 95th percentiles, respectively) added three days to the optimal day of warfarin resumption. Varying other parameters from the 5th to the 95th percentile (including HAS-BLED score, sex, age, and discount rate) changed expected QALYs but did not change the optimal day of warfarin reinitiation. The optimal day of warfarin reinitiation stratified by CHA2DS2-VASc score is shown in Table 2.



Sensitivity analyses for apixaban produced broadly similar results, but with greater sensitivity to rebleeding risk. The optimal day of reinitiation varied by 15 days over the examined range of CHA2DS2-VASc scores (Table 2) and by six days over the range of Rockall scores (Supplemental Appendix). Other input parameters, including HAS-BLED score, age, sex, and discount rate, changed expected QALYs and were significant in our metamodel but did not affect the optimal day of reinitiation. Metamodel results for both warfarin and apixaban are included in the Supplemental Appendix.

DISCUSSION

Anticoagulation is frequently prescribed for patients with NVAF, and hemorrhagic complications are common. Although anticoagulants are withheld following hemorrhages, scant evidence is available to inform the optimal timing of reinitiation. In this microsimulation analysis, we found that the optimal time to reinitiate anticoagulation following UGIB is around 41 days for warfarin and around 32 days for apixaban. We further demonstrated that the optimal timing of reinitiation can vary by nearly two weeks depending on a patient’s underlying risk of stroke, and that early reinitiation is more sensitive to rebleeding risk than late reinitiation.

Prior work has shown that early reinitiation of anticoagulation leads to higher rates of recurrent hemorrhage, while failure to reinitiate anticoagulation is associated with higher rates of stroke and mortality.1-4,36 Our results add to the literature in a number of important ways. First, our model not only confirms that anticoagulation should be restarted but also suggests when this action should be taken. The competing risks of bleeding and stroke have left clinicians with little guidance; we have quantified the clinical reasoning required for the decision to resume anticoagulation. Second, by including the disutility of hospitalization and long-term disability, our model more accurately represents the complex tradeoffs between recurrent hemorrhage and (potentially disabling) stroke than would a comparison of event rates. Third, our model is conditional upon patient risk factors, allowing clinicians to personalize the timing of anticoagulation resumption. Theory would suggest that patients at higher risk of ischemic stroke benefit from earlier resumption of anticoagulation, while patients at higher risk of hemorrhage benefit from delayed reinitiation. We have quantified the extent to which patient-specific risks should change timing. Fourth, we offer a means of improving expected health outcomes that requires little more than appropriate scheduling. Current practice regarding resuming anticoagulation is widely variable. Many patients never resume warfarin, and those who do resume do so after highly varied periods of time.1-5,36 We offer a means of standardizing clinical practice and improving expected patient outcomes.



Interestingly, patient-specific risk of rebleeding had little effect on our primary outcome for warfarin and a greater effect in our simulation of apixaban. Rebleeding risk, which declines roughly exponentially, appears to be sufficiently low by the time warfarin should be resumed that patient-specific hemorrhage risk factors have little impact. Meanwhile, at the shorter post-event intervals at which apixaban can be resumed, both stroke risk and patient-specific bleeding risk are worthy considerations.

Our model is subject to several important limitations. First, our predictions of the optimal day as a function of risk scores can only be as well-calibrated as the input scoring systems. It is intuitive that patients with higher risk of rebleeding benefit from delayed reinitiation, while patients with higher risk of thromboembolic stroke benefit from earlier reinitiation. Still, clinicians seeking to operationalize competing risks through these two scores—or, indeed, any score—should be mindful of their limited calibration and shared variance. In other words, while the optimal day of reinitiation is likely in the range we have predicted and varies to the degree demonstrated here, the optimal day we have predicted for each score is likely overly precise. However, while better-calibrated prediction models would improve the accuracy of our model, we believe ours to be the best estimate of timing given available data and this approach to be the most appropriate way to personalize anticoagulation resumption.

Our simulation of apixaban carries an additional source of potential miscalibration. In the clinical trials that led to their approval, apixaban and other direct oral anticoagulants (DOACs) were compared with warfarin over longer periods of time than the acute period simulated in this work. Over a short period of time, patients treated with more rapidly therapeutic medications (in this case, apixaban) would receive more days of effective therapy compared with a slower-onset medication, such as warfarin. Therefore, the relative risks experienced by patients are likely different over the time period we have simulated compared with those measured over longer periods of time (as in phase 3 clinical trials). Our results for apixaban should be viewed as more limited than our estimates for warfarin. More broadly, simulation analyses are intended to predict overall outcomes that are difficult to measure. While other frameworks to assess model credibility exist, the fact remains that no extant datasets can directly validate our predictions.37

Our findings are limited to patients with NVAF. Anticoagulants are prescribed for a variety of indications with widely varied underlying risks and benefits. Models constructed for these conditions would likely produce different timing for resumption of anticoagulation. Unfortunately, large-scale cohort studies to inform such models are lacking. Similarly, we simulated UGIB, and our results should not be generalized to populations with other types of bleeding (eg, intracranial hemorrhage). Again, cohort studies of other types of bleeding would be necessary to understand the risks of anticoagulation over time in such populations.

Higher-quality data regarding risk of rebleeding over time would improve our estimates. Our literature search identified only one systematic review that could be used to estimate the risk of recurrent UGIB over time. These data are not adequate to interrogate other forms this survival curve could take, such as Gompertz or Weibull distributions. Recurrence risk almost certainly declines over time, but how quickly it declines carries additional uncertainty.

Despite these limitations, we believe our results to be the best estimates to date of the optimal time of anticoagulation reinitiation following UGIB. Our findings could help inform clinical practice guidelines and reduce variation in care where current practice guidelines are largely silent. Given the potential ease of implementing scheduling changes, our results represent an opportunity to improve patient outcomes with little resource investment.

In conclusion, after UGIB associated with anticoagulation, our model suggests that warfarin is optimally restarted approximately six weeks following hemostasis and that apixaban is optimally restarted approximately one month following hemostasis. Modest changes to this timing based on probability of thromboembolic stroke are reasonable.

Disclosures

The authors have nothing to disclose.

Funding

The authors received no specific funding for this work.

References

1. Witt DM, Delate T, Garcia DA, et al. Risk of thromboembolism, recurrent hemorrhage, and death after warfarin therapy interruption for gastrointestinal tract bleeding. Arch Intern Med. 2012;172(19):1484-1491. doi: 10.1001/archinternmed.2012.4261. PubMed
2. Sengupta N, Feuerstein JD, Patwardhan VR, et al. The risks of thromboembolism vs recurrent gastrointestinal bleeding after interruption of systemic anticoagulation in hospitalized inpatients with gastrointestinal bleeding: a prospective study. Am J Gastroenterol. 2015;110(2):328-335. doi: 10.1038/ajg.2014.398. PubMed
3. Qureshi W, Mittal C, Patsias I, et al. Restarting anticoagulation and outcomes after major gastrointestinal bleeding in atrial fibrillation. Am J Cardiol. 2014;113(4):662-668. doi: 10.1016/j.amjcard.2013.10.044. PubMed
4. Milling TJ, Spyropoulos AC. Re-initiation of dabigatran and direct factor Xa antagonists after a major bleed. Am J Emerg Med. 2016;34(11):19-25. doi: 10.1016/j.ajem.2016.09.049. PubMed
5. Brotman DJ, Jaffer AK. Resuming anticoagulation in the first week following gastrointestinal tract hemorrhage. Arch Intern Med. 2012;172(19):1492-1493. doi: 10.1001/archinternmed.2012.4309. PubMed
6. Barnes GD, Lucas E, Alexander GC, Goldberger ZD. National trends in ambulatory oral anticoagulant use. Am J Med. 2015;128(12):1300-5. doi: 10.1016/j.amjmed.2015.05.044. PubMed
7. Noseworthy PA, Yao X, Abraham NS, Sangaralingham LR, McBane RD, Shah ND. Direct comparison of dabigatran, rivaroxaban, and apixaban for effectiveness and safety in nonvalvular atrial fibrillation. Chest. 2016;150(6):1302-1312. doi: 10.1016/j.chest.2016.07.013. PubMed
8. Pappas MA, Barnes GD, Vijan S. Personalizing bridging anticoagulation in patients with nonvalvular atrial fibrillation—a microsimulation analysis. J Gen Intern Med. 2017;32(4):464-470. doi: 10.1007/s11606-016-3932-7. PubMed
9. Pappas MA, Vijan S, Rothberg MB, Singer DE. Reducing age bias in decision analyses of anticoagulation for patients with nonvalvular atrial fibrillation – a microsimulation study. PloS One. 2018;13(7):e0199593. doi: 10.1371/journal.pone.0199593. PubMed
10. National Center for Health Statistics. National Health and Nutrition Examination Survey. https://www.cdc.gov/nchs/nhanes/about_nhanes.htm. Accessed August 30, 2018.
11. United States Census Bureau. Age and sex composition in the United States: 2014. https://www.census.gov/data/tables/2014/demo/age-and-sex/2014-age-sex-composition.html. Accessed August 30, 2018.
12. Go AS, Hylek EM, Phillips KA, et al. Prevalence of diagnosed atrial fibrillation in adults: national implications for rhythm management and stroke prevention: the AnTicoagulation and Risk Factors in Atrial Fibrillation (ATRIA) study. JAMA. 2001;285(18):2370-2375. doi: 10.1001/jama.285.18.2370. PubMed
13. Lip GYH, Nieuwlaat R, Pisters R, Lane DA, Crijns HJGM. Refining clinical risk stratification for predicting stroke and thromboembolism in atrial fibrillation using a novel risk factor-based approach: the euro heart survey on atrial fibrillation. Chest. 2010;137(2):263-272. doi: 10.1378/chest.09-1584. PubMed
14. Pisters R, Lane DA, Nieuwlaat R, de Vos CB, Crijns HJGM, Lip GYH. A novel user-friendly score (HAS-BLED) to assess 1-year risk of major bleeding in patients with atrial fibrillation. Chest. 2010;138(5):1093-1100. doi: 10.1378/chest.10-0134. PubMed
15. Granger CB, Alexander JH, McMurray JJV, et al. Apixaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2011;365(11):981-992. doi: 10.1056/NEJMoa1107039. 
16. Rockall TA, Logan RF, Devlin HB, Northfield TC. Risk assessment after acute upper gastrointestinal haemorrhage. Gut. 1996;38(3):316-321. doi: 10.1136/gut.38.3.316. PubMed
17. Vreeburg EM, Terwee CB, Snel P, et al. Validation of the Rockall risk scoring system in upper gastrointestinal bleeding. Gut. 1999;44(3):331-335. doi: 10.1136/gut.44.3.331. PubMed
18. Enns RA, Gagnon YM, Barkun AN, et al. Validation of the Rockall scoring system for outcomes from non-variceal upper gastrointestinal bleeding in a Canadian setting. World J Gastroenterol. 2006;12(48):7779-7785. doi: 10.3748/wjg.v12.i48.7779. PubMed
19. Stanley AJ, Laine L, Dalton HR, et al. Comparison of risk scoring systems for patients presenting with upper gastrointestinal bleeding: international multicentre prospective study. BMJ. 2017;356:i6432. doi: 10.1136/bmj.i6432. PubMed
20. Barkun AN, Bardou M, Kuipers EJ, et al. International consensus recommendations on the management of patients with nonvariceal upper gastrointestinal bleeding. Ann Intern Med. 2010;152(2):101-113. doi: 10.7326/0003-4819-152-2-201001190-00009. PubMed
21. Friberg L, Rosenqvist M, Lip GYH. Evaluation of risk stratification schemes for ischaemic stroke and bleeding in 182 678 patients with atrial fibrillation: the Swedish atrial fibrillation cohort study. Eur Heart J. 2012;33(12):1500-1510. doi: 10.1093/eurheartj/ehr488. PubMed
22. Friberg L, Rosenqvist M, Lip GYH. Net clinical benefit of warfarin in patients with atrial fibrillation: a report from the Swedish atrial fibrillation cohort study. Circulation. 2012;125(19):2298-2307. doi: 10.1161/CIRCULATIONAHA.111.055079. PubMed
23. Hart RG, Diener HC, Yang S, et al. Intracranial hemorrhage in atrial fibrillation patients during anticoagulation with warfarin or dabigatran: the RE-LY trial. Stroke. 2012;43(6):1511-1517. doi: 10.1161/STROKEAHA.112.650614. PubMed
24. Hankey GJ, Stevens SR, Piccini JP, et al. Intracranial hemorrhage among patients with atrial fibrillation anticoagulated with warfarin or rivaroxaban: the rivaroxaban once daily, oral, direct factor Xa inhibition compared with vitamin K antagonism for prevention of stroke and embolism trial in atrial fibrillation. Stroke. 2014;45(5):1304-1312. doi: 10.1161/STROKEAHA.113.004506. PubMed
25. Eikelboom JW, Wallentin L, Connolly SJ, et al. Risk of bleeding with 2 doses of dabigatran compared with warfarin in older and younger patients with atrial fibrillation : an analysis of the randomized evaluation of long-term anticoagulant therapy (RE-LY trial). Circulation. 2011;123(21):2363-2372. doi: 10.1161/CIRCULATIONAHA.110.004747. PubMed
26. El Ouali S, Barkun A, Martel M, Maggio D. Timing of rebleeding in high-risk peptic ulcer bleeding after successful hemostasis: a systematic review. Can J Gastroenterol Hepatol. 2014;28(10):543-548. doi: 10.1016/S0016-5085(14)60738-1. PubMed
27. Kimmel SE, French B, Kasner SE, et al. A pharmacogenetic versus a clinical algorithm for warfarin dosing. N Engl J Med. 2013;369(24):2283-2293. doi: 10.1056/NEJMoa1310669. PubMed
28. Healthcare Cost and Utilization Project (HCUP), Agency for Healthcare Research and Quality. HCUP Databases. https://www.hcup-us.ahrq.gov/nisoverview.jsp. Accessed August 31, 2018.
29. Guerrouij M, Uppal CS, Alklabi A, Douketis JD. The clinical impact of bleeding during oral anticoagulant therapy: assessment of morbidity, mortality and post-bleed anticoagulant management. J Thromb Thrombolysis. 2011;31(4):419-423. doi: 10.1007/s11239-010-0536-7. PubMed
30. Fang MC, Go AS, Chang Y, et al. Death and disability from warfarin-associated intracranial and extracranial hemorrhages. Am J Med. 2007;120(8):700-705. doi: 10.1016/j.amjmed.2006.07.034. PubMed
31. Guertin JR, Feeny D, Tarride JE. Age- and sex-specific Canadian utility norms, based on the 2013-2014 Canadian Community Health Survey. CMAJ. 2018;190(6):E155-E161. doi: 10.1503/cmaj.170317. PubMed
32. Gage BF, Cardinalli AB, Albers GW, Owens DK. Cost-effectiveness of warfarin and aspirin for prophylaxis of stroke in patients with nonvalvular atrial fibrillation. JAMA. 1995;274(23):1839-1845. doi: 10.1001/jama.1995.03530230025025. PubMed
33. Fang MC, Go AS, Chang Y, et al. Long-term survival after ischemic stroke in patients with atrial fibrillation. Neurology. 2014;82(12):1033-1037. doi: 10.1212/WNL.0000000000000248. PubMed
34. Hong KS, Saver JL. Quantifying the value of stroke disability outcomes: WHO global burden of disease project disability weights for each level of the modified Rankin scale. Stroke. 2009;40(12):3828-3833. doi: 10.1161/STROKEAHA.109.561365. PubMed
35. Jalal H, Dowd B, Sainfort F, Kuntz KM. Linear regression metamodeling as a tool to summarize and present simulation model results. Med Decis Mak. 2013;33(7):880-890. doi: 10.1177/0272989X13492014. PubMed
36. Staerk L, Lip GYH, Olesen JB, et al. Stroke and recurrent haemorrhage associated with antithrombotic treatment after gastrointestinal bleeding in patients with atrial fibrillation: nationwide cohort study. BMJ. 2015;351:h5876. doi: 10.1136/bmj.h5876. PubMed
37. Kopec JA, Finès P, Manuel DG, et al. Validation of population-based disease simulation models: a review of concepts and methods. BMC Public Health. 2010;10(1):710. doi: 10.1186/1471-2458-10-710. PubMed
38. Smith EE, Shobha N, Dai D, et al. Risk score for in-hospital ischemic stroke mortality derived and validated within the Get With The Guidelines-Stroke Program. Circulation. 2010;122(15):1496-1504. doi: 10.1161/CIRCULATIONAHA.109.932822. PubMed
39. Smith EE, Shobha N, Dai D, et al. A risk score for in-hospital death in patients admitted with ischemic or hemorrhagic stroke. J Am Heart Assoc. 2013;2(1):e005207. doi: 10.1161/JAHA.112.005207. PubMed
40. Busl KM, Prabhakaran S. Predictors of mortality in nontraumatic subdural hematoma. J Neurosurg. 2013;119(5):1296-1301. doi: 10.3171/2013.4.JNS122236. PubMed
41. Murphy SL, Kochanek KD, Xu J, Heron M. Deaths: final data for 2012. Natl Vital Stat Rep. 2015;63(9):1-117. http://www.ncbi.nlm.nih.gov/pubmed/26759855. Accessed August 31, 2018. 
42. Dachs RJ, Burton JH, Joslin J. A user’s guide to the NINDS rt-PA stroke trial database. PLOS Med. 2008;5(5):e113. doi: 10.1371/journal.pmed.0050113. PubMed
43. Ashburner JM, Go AS, Reynolds K, et al. Comparison of frequency and outcome of major gastrointestinal hemorrhage in patients with atrial fibrillation on versus not receiving warfarin therapy (from the ATRIA and ATRIA-CVRN cohorts). Am J Cardiol. 2015;115(1):40-46. doi: 10.1016/j.amjcard.2014.10.006. PubMed
44. Weinstein MC, Siegel JE, Gold MR, Kamlet MS, Russell LB. Recommendations of the panel on cost-effectiveness in health and medicine. JAMA. 1996;276(15):1253-1258. doi: 10.1001/jama.1996.03540150055031. PubMed

Correspondence: Matthew A Pappas, MD; E-mail: pappasm@ccf.org; Telephone: 216-444-9565

© 2019 Society of Hospital Medicine

An Academic Research Coach: An Innovative Approach to Increasing Scholarly Productivity in Medicine


Historically, academic medicine faculty were predominantly physician-scientists.1 During the past decade, the number of clinician-educators and nontenured clinicians has grown.2 Many academically oriented clinical faculty at our institution would like to participate in and learn how to conduct quality scholarship. While institutional requirements vary, scholarly work is often required for promotion,3 and faculty may also desire to support the scholarly work of residents. Moreover, a core program component of the Accreditation Council for Graduate Medical Education standards requires faculty to “maintain an environment of inquiry and scholarship with an active research component.”4 Yet clinical faculty often find academic projects to be challenging. Like residents, clinical academic faculty frequently lack formal training in health services research or quality improvement science, have insufficient mentorship, and typically have limited uncommitted time and resources.5

One approach to this problem has been to pair junior clinicians with traditional physician scientists as mentors.6,7 This type of mentorship for clinical faculty is increasingly difficult to access because of growing pressure on physician-scientist faculty to conduct their own research, seek extramural funding, meet clinical expectations, and mentor fellows and faculty in their own disciplines.8 Moreover, senior research faculty may not be prepared or have the time to teach junior faculty how to deal with common stumbling blocks (eg, institutional review board [IRB] applications, statistically testable hypothesis development, and statistical analysis).8,9 Seminars or works-in-progress sessions are another strategy to bolster scholarly work, but the experience at our institution is that such sessions are often not relevant at the time of delivery and can be intimidating to clinical faculty who lack extensive knowledge about research methods and prior research experience.

Another approach to supporting the research efforts of academic clinicians is to fund a consulting statistician. However, without sufficient content expertise, statisticians may be frustrated in their efforts to assist clinicians who struggle to formulate a testable question or to work directly with data collected. Statisticians may be inexperienced in writing IRB applications or implementing protocols in a clinical or educational setting. Furthermore, statistical consultations are often limited in scope10 and, in our setting, rarely produce a durable improvement in the research skills of the faculty member or the enduring partnership required to complete a longer-term project. Because of these shortcomings, we have found that purely statistical support resources are often underutilized and ineffective.

Other models to facilitate scholarship have been employed, but few focus on facilitating the scholarship of clinical faculty. One strategy involved supporting hospitalists’ academic productivity by reducing their full-time equivalent (FTE) and providing mentorship.11 For many institutions, this approach is likely cost-prohibitive. Others have focused primarily on resident and fellow scholarship.5,6

In this report, we describe an educational innovation to educate and support the scholarly work of academic hospitalists and internists by using an academic research coach. We recruited a health researcher with extensive experience in research methods and strong interpersonal skills with the ability to explain and teach research concepts in an accessible manner. We sought an individual who would provide high-yield single consultations, join project teams to provide ongoing mentorship from conception to completion, and consequently, bolster scholarly productivity and learning among nonresearch clinicians in our Division. We anticipated that providing support for multiple aspects of a project would be more likely to help faculty overcome barriers to research and disseminate their project results as scholarly output.

 

 

METHODS

The coach initiative was implemented in the Division of General Internal Medicine at the University of Washington. The Division has over 200 members (60 of them hospitalists), including clinical instructors and acting instructors who have not yet been appointed to the regular faculty, regular faculty (clinician-educators and physician-scientists), and full-time clinical faculty. Division members staff clinical services at four area hospitals and 10 affiliated internal medicine and specialty clinics. All Division members were eligible clients, although the initial program targeted hospitalists at our three primary teaching hospitals. Fellows, residents, students, and faculty from within and outside the Division were welcome to participate in a project involving coaching as long as a Division faculty member was engaged in the project.

Program Description

The overall goal of the coach initiative was to support the scholarly work of primarily clinical Division members. Given that our focus was on clinical faculty with little training in research methodology, we did not expect the coach to secure grant funding for the position. Instead, we aimed to increase the quality and quantity of scholarship through publications, abstracts, and small grants. We defined scholarly work broadly: clinical research, quality improvement, medical education research, and other forms of scientific inquiry or synthesis. The coach was established as a 0.50 FTE position with a 12-month, annually renewable appointment. The role was deemed that of a coach rather than a mentor because the coach was available to all Division members and the work involved task-oriented consultations with check-ins to facilitate projects, rather than the deeper, more developmental relationship that typically exists with mentoring. Division leadership identified support for scholarly activity as a high priority and mentorship as an unmet need based on faculty feedback. Clinical revenue supported the position.

Necessary qualifications, determined prior to hiring, included a PhD in health services or related field (eg, epidemiology) or a master’s degree with five years of experience in project management, clinical research, and study design. The position also called for expertise in articulating research questions, selecting study designs, navigating the IRB approval process, collecting/managing data, analyzing statistics, and mentoring and teaching clinical faculty in their scholarly endeavors. A track record in generating academic output (manuscripts and abstracts at regional/national meetings) was required. We circulated a description of the position to Division faculty and to leadership in our School of Public Health.

Based on these criteria, an inaugural coach was hired (author C.M.M.). The coach had a PhD in epidemiology, 10 years of research experience, 16 publications, and had recently finished a National Institutes of Health (NIH) career development award. At the time of hiring, she was a Clinical Assistant Professor in the School of Dentistry, which provided additional FTE. She had no extramural funding but was applying for NIH-level grants and had received several small grants.

To ensure uptake of the coach’s services, we realized that it was necessary to delineate the scope of services available, clarify availability of the coach, and define expectations regarding authorship. We used an iterative process that took into consideration the coach’s expertise, services most needed by the Division’s clinicians, and discussions with Division leadership and faculty at faculty meetings across hospitals and clinics. A range of services and authorship expectations were defined. Consensus was reached that the coach should be invited to coauthor projects where design, analysis, and/or substantial intellectual content was provided and for which authorship criteria were met.12 Collegial reviews by the coach of already developed manuscripts and time-limited, low-intensity consultations that did not involve substantial intellectual contributions did not warrant authorship.12 On this basis, we created and distributed a flyer to publicize these guidelines and invite Division members to contact the coach (Figure 1).

The coach attended Division, section, and clinical group meetings to publicize the initiative. The coach also individually met with faculty throughout the Division, explained her role, described services available, and answered questions. The marketing effort was continuous and calibrated with more or less exposure depending on existing projects and the coach’s availability. In addition, the coach coordinated with the director of the Division’s faculty development program to cohost works-in-progress seminars, identify coach clients to present at these meetings, and provide brief presentations on a basic research skill at meetings. Faculty built rapport with the coach through these activities and became more comfortable reaching out for assistance. Because of the large size of the Division, it was decided to roll out the initiative in a stepwise fashion, starting with hospitalists before expanding to the rest of the Division.

Most faculty contacted the coach by e-mail to request a consultation, at which time the coach asked them to complete a preconsultation handout (Figure 2). Initial coaching appointments lasted one hour and were held in person. Coaching entailed an in-depth analysis of the project plan and advice on how to move the project forward. The coach provided tailored scholarly project advice and expertise in research methods. After initial consultations, she would review grant proposals, IRB applications, manuscripts, case report forms, abstracts, and other products. Her efforts typically focused on improving the methods and the scientific and technical writing. Assistance with statistical analysis was provided on a case-by-case basis to maintain broad availability. To address statistically complex questions, the coach had five hours of monthly access to a PhD biostatistician via an on-campus consulting service. Follow-up appointments were encouraged and provided as needed by e-mail, phone, or in person. The coach conducted regular outreach to keep projects moving forward. However, execution of the research was generally the responsibility of the faculty member.

Program Evaluation

To characterize the reach and scope of the program, the coach tracked the number of faculty supported, types of services provided, status of initiated projects, numbers of grants generated, and the dissemination of scholarly products including papers and abstracts. We used these metrics to create summary reports to identify successes and areas for improvement. Monthly meetings between the coach and Division leadership were used to fine-tune the approach.

We surveyed coach clients anonymously to assess their satisfaction with the coach initiative. Using Likert scale questions where 1 = completely disagree and 5 = completely agree, we asked (1) if they would recommend the coach to colleagues, (2) if their work was higher quality because of the coach, (3) if they were overall satisfied with the coach, (4) whether the Division should continue to support the coach, and (5) if the coach’s lack of clinical training negatively affected their experience. This work was considered a quality improvement initiative for which IRB approval was not required.

RESULTS

Over 18 months, the coach supported 49 Division members, including 30 hospitalists, across 63 projects. Projects included a wide range of scholarship: medical education research, qualitative research, clinical quality improvement projects, observational studies, and a randomized clinical trial. Many clients (n = 16) used the coach for more than one project. The scope of work included limited support projects (identifying research resources and brainstorming project feasibility) lasting one to two sessions (n = 25), projects with a limited scope (collegial reviews of manuscripts and assistance with IRB submissions) but requiring more than two consultations (n = 24), and ongoing in-depth support projects (contributions to design, data collection, analysis, and manuscript writing) requiring three or more consultations (n = 14). The majority of Division members supported (75%) did not have master’s-level training in a health services-related area; six had NIH or other national-level funding, and two had small grants funded by local sources before receiving support. The number of Division faculty on a given project ranged from one to four.

The coach directly supported 13 manuscripts with coach authorship, seven manuscripts without authorship, 11 abstracts, and four grant submissions (Appendix). The coach was a coauthor on all the abstracts and a coinvestigator on the grant applications. Of the 13 manuscripts the coach coauthored, 11 have been accepted by peer-reviewed journals and two are currently in the submission process. The types of articles published included one medical evaluation report, one qualitative study, one randomized clinical trial, three quality assessment/improvement reports, and five epidemiologic studies. The types of abstracts included one qualitative report, one systematic review, one randomized clinical trial, two quality improvement projects, two epidemiologic studies, and four medical education projects. Three of the four small grants submitted to local and national funders were funded.

The coach’s influence extended beyond the Division. Forty-eight university faculty, fellows, or students not affiliated with general internal medicine benefited from the coach’s services: 26 were authors on papers and/or abstracts coauthored by the coach, 17 were authors on manuscripts the coach reviewed without authorship, and five participated in consultations.

The coach found the experience rewarding. She enjoyed working on the methodologic aspects of projects and benefited from being included as coauthor on papers.

Twenty-nine of the 43 faculty (67%) still at the institution responded to the program assessment survey. Faculty strongly agreed that they would recommend the coach to colleagues (average ± standard deviation [SD]: 4.7 ± 0.5), that it improved the quality of their work (4.5 ± 0.9), that they were overall satisfied with the coaching (4.6 ± 0.7), and that the Division should continue to support the coach (4.9 ± 0.4). Faculty did not agree that the lack of clinical training of the coach was a barrier (2.0 ± 1.3).

DISCUSSION

The coach program was highly utilized, well regarded, and delivered substantial, tangible academic output. We anticipate the coach initiative will continue to be a valuable resource for our Division and could prove to be a valuable model for other institutions seeking to bolster the scholarly work of clinical academicians.

Several lessons emerged through the course of this project. First, we realized it is essential to select a coach who is both knowledgeable and approachable. We found that after meeting the coach, many faculty who otherwise would not have sought her help did so. An explicit, ongoing marketing strategy with regular contact with faculty at meetings was key to generating consultation requests.

Second, the lack of a clinical background did not seem to hinder the coach’s ability to coach clinicians. The coach acknowledged her lack of clinical experience and relied on clients to explain the clinical context of projects. We also learned that the coach’s substantial experience with the logistics of research was invaluable. For example, her familiarity with the IRB process meant that her pre-reviews of applications made navigating IRB approval short and relatively seamless. The coach also facilitated collaborations and leveraged existing resources at our institution. For example, for a qualitative research project, the coach helped identify a health services faculty member with this specific expertise, which led to a successful collaboration and publication. Although a more junior coach with less established qualifications may be helpful with research methods and the research process, our experience suggests that having a more highly trained and experienced researcher was extremely valuable. Finally, we learned that for a Division of our size, the 0.50 FTE allotted to the coach is a minimum requirement. The coach spent approximately four hours a week on marketing, attending faculty meetings, and conducting brief didactics; two hours per week on administration; and 14 hours per week on consultations. Faculty generally received support soon after their requests, but there were occasional wait times, which may have delayed some projects.

Academic leaders at our institution have noted the success of our coach initiative and have created a demand for coach services. We are exploring funding models that would allow for the expansion of coach services to other departments and divisions. We are in the initial stages of creating an Academic Scholarship Support Core under the supervision of the coach. Within this Core, we envision that various research support services will be triaged to staff with appropriate expertise; for example, a regulatory coordinator would review IRB applications while a master’s level statistician would conduct statistical analyses.

We have also transitioned to a new coach and have continued to experience success with the program. Our initial coach (author C.M.M.) obtained an NIH R01 and a foundation grant and took over a summer program that trains dental faculty in clinical research methods, leaving insufficient time for coaching. Our new coach also has a PhD in epidemiology and NIH R01 funding but has more available FTE. Both of our coaches are graduates of our School of Public Health, and institutions with such schools may have good access to the expertise needed. Nonclinical PhDs are often almost entirely reliant on grants, and some nongrant support is often attractive to these researchers. Additionally, PhDs who are junior or mid-career faculty and have the needed training are relatively affordable, particularly when the resource is made available to a large number of faculty. For example, our first coach cost $48,000 a year for 0.50 FTE.

A limitation to our assessment of the coach initiative was the lack of pre- and postintervention metrics of scholarly productivity. We cannot definitively say that the Division’s scholarly output has increased because of the coach. Nevertheless, we are confident that the coach’s coaching has enhanced the scholarly work of individual clinicians and provided value to the Division as a whole. The coach program has been a success in our Division. Other institutions facing the challenge of supporting the research efforts of academic clinicians may consider this model as a worthy investment.

Disclosures

The authors have nothing to disclose.

References

1. Marks AR. Physician-scientist, heal thyself. J Clin Invest. 2007;117(1):2. https://doi.org/10.1172/JCI31031.
2. Bunton SA, Corrice AM. Trends in tenure for clinical M.D. faculty in U.S. medical schools: a 25-year review. Association of American Medical Colleges: Analysis in Brief. 2010;9(9):1-2; https://www.aamc.org/download/139778/data/aibvol9_no9.pdf. Accessed March 7, 2019.
3. Bunton SA, Mallon WT. The continued evolution of faculty appointment and tenure policies at U.S. medical schools. Acad Med. 2007;82(3):281-289. https://doi.org/10.1097/ACM.0b013e3180307e87.
4. Accreditation Council for Graduate Medical Education. ACGME Common Program Requirements. 2017; http://www.acgme.org/What-We-Do/Accreditation/Common-Program-Requirements. Accessed March 7, 2019.
5. Penrose LL, Yeomans ER, Praderio C, Prien SD. An incremental approach to improving scholarly activity. J Grad Med Educ. 2012;4(4):496-499. https://doi.org/10.4300/JGME-D-11-00185.1.
6. Manring MM, Panzo JA, Mayerson JL. A framework for improving resident research participation and scholarly output. J Surg Educ. 2014;71(1):8-13. https://doi.org/10.1016/j.jsurg.2013.07.011.
7. Palacio A, Campbell DT, Moore M, Symes S, Tamariz L. Predictors of scholarly success among internal medicine residents. Am J Med. 2013;126(2):181-185. https://doi.org/10.1016/j.amjmed.2012.10.003.
8. Physician-Scientist Workforce Working Group. Physician-scientist workforce (PSW) report 2014. https://report.nih.gov/Workforce/PSW/challenges.aspx. Accessed December 27, 2018.
9. Straus SE, Johnson MO, Marquez C, Feldman MD. Characteristics of successful and failed mentoring relationships: a qualitative study across two academic health centers. Acad Med. 2013;88(1):82-89. https://doi.org/10.1097/ACM.0b013e31827647a0.
10. Altman DG, Goodman SN, Schroter S. How statistical expertise is used in medical research. JAMA. 2002;287(21):2817-2820. https://doi.org/10.1001/jama.287.21.2817.
11. Howell E, Kravet S, Kisuule F, Wright SM. An innovative approach to supporting hospitalist physicians towards academic success. J Hosp Med. 2008;3(4):314-318. https://doi.org/10.1002/jhm.327.
12. Kripalani S, Williams MV. Author responsibilities and disclosures at the Journal of Hospital Medicine. J Hosp Med. 2010;5(6):320-322. https://doi.org/10.1002/jhm.715.

Article PDF
Issue
Journal of Hospital Medicine 14(8)
Publications
Topics
Page Number
457-461
Sections
Files
Files
Article PDF
Article PDF
Related Articles

Historically, academic medicine faculty were predominantly physician-scientists.1 During the past decade, the number of clinician-educators and nontenured clinicians has grown.2 Many academically oriented clinical faculty at our institution would like to participate in and learn how to conduct quality scholarship. While institutional requirements vary, scholarly work is often required for promotion,3 and faculty may also desire to support the scholarly work of residents. Moreover, a core program component of the Accreditation Council of Graduate Medical Education standards requires faculty to “maintain an environment of inquiry and scholarship with an active research component.”4 Yet clinical faculty often find academic projects to be challenging. Similar to residents, clinical academic faculty frequently lack formal training in health services research or quality improvement science, have insufficient mentorship, and typically have limited uncommitted time and resources.5

One approach to this problem has been to pair junior clinicians with traditional physician scientists as mentors.6,7 This type of mentorship for clinical faculty is increasingly difficult to access because of growing pressure on physician-scientist faculty to conduct their own research, seek extramural funding, meet clinical expectations, and mentor fellows and faculty in their own disciplines.8 Moreover, senior research faculty may not be prepared or have the time to teach junior faculty how to deal with common stumbling blocks (eg, institutional review board [IRB] applications, statistically testable hypothesis development, and statistical analysis).8,9 Seminars or works-in-progress sessions are another strategy to bolster scholarly work, but the experience at our institution is that such sessions are often not relevant at the time of delivery and can be intimidating to clinical faculty who lack extensive knowledge about research methods and prior research experience.

Another approach to supporting the research efforts of academic clinicians is to fund a consulting statistician. However, without sufficient content expertise, statisticians may be frustrated in their efforts to assist clinicians who struggle to formulate a testable question or to work directly with data collected. Statisticians may be inexperienced in writing IRB applications or implementing protocols in a clinical or educational setting. Furthermore, statistical consultations are often limited in scope10 and, in our setting, rarely produce a durable improvement in the research skills of the faculty member or the enduring partnership required to complete a longer-term project. Because of these shortcomings, we have found that purely statistical support resources are often underutilized and ineffective.

Other models to facilitate scholarship have been employed, but few focus on facilitating scholarship of clinical faculty. One strategy involved supporting hospitalist’s academic productivity by reducing hospitalists’ full-time equivalent (FTE) and providing mentorship.11 For many, this approach is likely cost-prohibitive. Others have focused primarily on resident and fellow scholarships.5,6

In this report, we describe an educational innovation to educate and support the scholarly work of academic hospitalists and internists by using an academic research coach. We recruited a health researcher with extensive experience in research methods and strong interpersonal skills with the ability to explain and teach research concepts in an accessible manner. We sought an individual who would provide high-yield single consultations, join project teams to provide ongoing mentorship from conception to completion, and consequently, bolster scholarly productivity and learning among nonresearch clinicians in our Division. We anticipated that providing support for multiple aspects of a project would be more likely to help faculty overcome barriers to research and disseminate their project results as scholarly output.

 

 

METHODS

The coach initiative was implemented in the Division of General Internal Medicine at the University of Washington. The Division has over 200 members (60 hospitalists), including clinical instructors and acting instructors, who have not yet been appointed to the regular faculty (clinician-educators and physician scientists), and full-time clinical faculty. Division members staff clinical services at four area hospitals and 10 affiliated internal medicine and specialty clinics. Eligible clients were all Division members, although the focus of the initial program targeted hospitalists at our three primary teaching hospitals. Fellows, residents, students, and faculty from within and outside the Division were welcome to participate in a project involving coaching as long as a Division faculty member was engaged in the project.

Program Description

The overall goal of the coach initiative was to support the scholarly work of primarily clinical Division members. Given our focus was on clinical faculty with little training on research methodology, we did not expect the coach to secure grant funding for the position. Instead, we aimed to increase the quality and quantity of scholarship through publications, abstracts, and small grants. We defined scholarly work broadly: clinical research, quality improvement, medical education research, and other forms of scientific inquiry or synthesis. The coach was established as a 0.50 FTE position with a 12-month annually renewable appointment. The role was deemed that of a coach instead of a mentor because the coach was available to all Division members and involved task-oriented consultations with check-ins to facilitate projects, rather than a deeper more developmental relationship that typically exists with mentoring. The Division leadership identified support for scholarly activity as a high priority and mentorship as an unmet need based on faculty feedback. Clinical revenue supported the position.

Necessary qualifications, determined prior to hiring, included a PhD in health services or a related field (eg, epidemiology), or a master’s degree with five years of experience in project management, clinical research, and study design. The position also called for expertise in articulating research questions, selecting study designs, navigating the IRB approval process, collecting and managing data, performing statistical analyses, and mentoring and teaching clinical faculty in their scholarly endeavors. A track record of generating academic output (manuscripts and abstracts at regional/national meetings) was required. We circulated a description of the position to Division faculty and to leadership in our School of Public Health.

Based on these criteria, an inaugural coach was hired (author C.M.M.). The coach had a PhD in epidemiology, 10 years of research experience, 16 publications, and had recently finished a National Institutes of Health (NIH) career development award. At the time of hiring, she was a Clinical Assistant Professor in the School of Dentistry, which provided additional FTE. She had no extramural funding but was applying for NIH-level grants and had received several small grants.

To ensure uptake of the coach’s services, we realized that it was necessary to delineate the scope of services available, clarify availability of the coach, and define expectations regarding authorship. We used an iterative process that took into consideration the coach’s expertise, services most needed by the Division’s clinicians, and discussions with Division leadership and faculty at faculty meetings across hospitals and clinics. A range of services and authorship expectations were defined. Consensus was reached that the coach should be invited to coauthor projects where design, analysis, and/or substantial intellectual content was provided and for which authorship criteria were met.12 Collegial reviews by the coach of already developed manuscripts and time-limited, low-intensity consultations that did not involve substantial intellectual contributions did not warrant authorship.12 On this basis, we created and distributed a flyer to publicize these guidelines and invite Division members to contact the coach (Figure 1).

The coach attended Division, section, and clinical group meetings to publicize the initiative. The coach also individually met with faculty throughout the Division, explained her role, described services available, and answered questions. The marketing effort was continuous and calibrated with more or less exposure depending on existing projects and the coach’s availability. In addition, the coach coordinated with the director of the Division’s faculty development program to cohost works-in-progress seminars, identify coach clients to present at these meetings, and provide brief presentations on a basic research skill at meetings. Faculty built rapport with the coach through these activities and became more comfortable reaching out for assistance. Because of the large size of the Division, it was decided to roll out the initiative in a stepwise fashion, starting with hospitalists before expanding to the rest of the Division.

Most faculty contacted the coach by e-mail to request a consultation, at which time the coach asked them to complete a preconsultation handout (Figure 2). Initial coaching appointments lasted one hour and were held in person. Coaching entailed an in-depth analysis of the project plan and advice on how to move the project forward. The coach provided tailored scholarly project advice and expertise in research methods. After initial consultations, she would review grant proposals, IRB applications, manuscripts, case report forms, abstracts, and other products. Her efforts typically focused on improving the methods and the scientific and technical writing. Assistance with statistical analysis was provided on a case-by-case basis to maintain broad availability. To address statistically complex questions, the coach had five hours of monthly access to a PhD biostatistician via an on-campus consulting service. Follow-up appointments were encouraged and provided as needed by e-mail, phone, or in person. The coach conducted regular outreach to keep projects moving; however, execution of the research generally remained the responsibility of the faculty member.

Program Evaluation

To characterize the reach and scope of the program, the coach tracked the number of faculty supported, types of services provided, status of initiated projects, numbers of grants generated, and the dissemination of scholarly products including papers and abstracts. We used these metrics to create summary reports to identify successes and areas for improvement. Monthly meetings between the coach and Division leadership were used to fine-tune the approach.

We surveyed coach clients anonymously to assess their satisfaction with the coach initiative. Using Likert scale questions where 1 = completely disagree and 5 = completely agree, we asked (1) if they would recommend the coach to colleagues, (2) if their work was higher quality because of the coach, (3) if they were overall satisfied with the coach, (4) whether the Division should continue to support the coach, and (5) if the coach’s lack of clinical training negatively affected their experience. This work was considered a quality improvement initiative for which IRB approval was not required.

RESULTS

Over 18 months, the coach supported 49 Division members (including 30 hospitalists) across 63 projects. Projects included a wide range of scholarship: medical education research, qualitative research, clinical quality improvement projects, observational studies, and a randomized clinical trial. Many clients (n = 16) used the coach for more than one project. The scope of work included limited support projects (identifying research resources and brainstorming project feasibility) lasting one to two sessions (n = 25), projects with a limited scope (collegial reviews of manuscripts and assistance with IRB submissions) but requiring more than two consultations (n = 24), and ongoing in-depth support projects (contributions to design, data collection, analysis, and manuscript writing) that required three or more consultations (n = 14). The majority of supported Division members (75%) did not have master’s-level training in a health services-related area; six had NIH or other national-level funding, and two had small grants funded by local sources before receiving the coach’s support. The number of Division faculty on a given project ranged from one to four.

The coach directly supported 13 manuscripts with coach authorship, seven manuscripts without authorship, 11 abstracts, and four grant submissions (Appendix). The coach was a coauthor on all the abstracts and a coinvestigator on the grant applications. Of the 13 manuscripts the coach coauthored, 11 have been accepted by peer-reviewed journals and two are currently in the submission process. The types of articles published included one medical evaluation report, one qualitative study, one randomized clinical trial, three quality assessment/improvement reports, and five epidemiologic studies. The types of abstracts included one qualitative report, one systematic review, one randomized clinical trial, two quality improvement projects, two epidemiologic studies, and four medical education projects. Three of the four small grants submitted to local and national funders were funded.

The coach’s influence extended beyond the Division. Forty-eight university faculty, fellows, or students not affiliated with general internal medicine benefited from the coach’s services: 26 were authors on papers and/or abstracts coauthored by the coach, 17 were authors on manuscripts the coach reviewed without authorship, and five participated in consultations.

The coach found the experience rewarding. She enjoyed working on the methodologic aspects of projects and benefited from being included as coauthor on papers.

Twenty-nine of the 43 faculty (67%) still at the institution responded to the program assessment survey. Faculty strongly agreed that they would recommend the coach to colleagues (average ± standard deviation [SD]: 4.7 ± 0.5), that the coaching improved the quality of their work (4.5 ± 0.9), that they were satisfied with the coaching overall (4.6 ± 0.7), and that the Division should continue to support the coach (4.9 ± 0.4). Faculty did not agree that the coach’s lack of clinical training was a barrier (2.0 ± 1.3).

DISCUSSION

The coach program was highly utilized, well regarded, and delivered substantial, tangible academic output. We anticipate that the coach initiative will continue to be a valuable resource for our Division, and it could prove a useful model for other institutions seeking to bolster the scholarly work of clinical academicians.

Several lessons emerged through the course of this project. First, we realized that it is essential to select a coach who is both knowledgeable and approachable. After meeting the coach, many faculty who otherwise would not have done so sought her help. An explicit, ongoing marketing strategy with regular contact with faculty at meetings was key to generating consult requests.

Second, the lack of a clinical background did not seem to hinder the coach’s ability to coach clinicians. The coach acknowledged her lack of clinical experience and relied on clients to explain the clinical context of projects. We also learned that the coach’s substantial experience with the logistics of research was invaluable. For example, she had extensive experience with the IRB process, and her pre-reviews of IRB applications made navigating the approval process short and relatively seamless. The coach also facilitated collaborations and leveraged existing resources at our institution; for a qualitative research project, she helped identify a health services faculty member with this specific expertise, which led to a successful collaboration and publication. Although a more junior coach with less established qualifications may be able to help with research methods and the research process, our experience suggests that a highly trained and experienced researcher was extremely valuable. Finally, we learned that for a Division of our size, the 0.50 FTE allotted to the coach is a minimum requirement. The coach spent approximately four hours a week on marketing (attending faculty meetings and conducting brief didactics), two hours per week on administration, and 14 hours per week on consultations. Faculty generally received support soon after their requests, but there were occasional wait times, which may have delayed some projects.

Academic leaders at our institution have noted the success of our coach initiative, which has created demand for coach services. We are exploring funding models that would allow the expansion of coach services to other departments and divisions, and we are in the initial stages of creating an Academic Scholarship Support Core under the supervision of the coach. Within this Core, we envision that various research support services will be triaged to staff with the appropriate expertise; for example, a regulatory coordinator would review IRB applications while a master’s-level statistician would conduct statistical analyses.

We have also transitioned to a new coach and have continued to experience success with the program. Our initial coach (author C.M.M.) obtained an NIH R01 and a foundation grant and took over a summer program that trains dental faculty in clinical research methods, leaving insufficient time for coaching. Our new coach also has a PhD in epidemiology and NIH R01 funding but has more available FTE. Both of our coaches are graduates of our School of Public Health, and institutions with such schools may have ready access to the needed expertise. Nonclinical PhDs are often almost entirely reliant on grants, so some nongrant support is often attractive to these researchers. Additionally, PhDs who are junior or mid-career faculty with the needed training are relatively affordable, particularly when the resource is made available to a large number of faculty. For example, our first coach cost $48,000 a year for 0.50 FTE.

A limitation of our assessment of the coach initiative was the lack of pre- and postintervention metrics of scholarly productivity; we cannot definitively say that the Division’s scholarly output increased because of the coach. Nevertheless, we are confident that the coaching has enhanced the scholarly work of individual clinicians and provided value to the Division as a whole. The coach program has been a success in our Division, and other institutions facing the challenge of supporting the research efforts of academic clinicians may consider this model a worthy investment.

Disclosures

The authors have nothing to disclose.

References

1. Marks AR. Physician-scientist, heal thyself. J Clin Invest. 2007;117(1):2. https://doi.org/10.1172/JCI31031.
2. Bunton SA, Corrice AM. Trends in tenure for clinical M.D. faculty in U.S. medical schools: a 25-year review. Association of American Medical Colleges: Analysis in Brief. 2010;9(9):1-2; https://www.aamc.org/download/139778/data/aibvol9_no9.pdf. Accessed March 7, 2019.
3. Bunton SA, Mallon WT. The continued evolution of faculty appointment and tenure policies at U.S. medical schools. Acad Med. 2007;82(3):281-289. https://doi.org/10.1097/ACM.0b013e3180307e87.
4. Accreditation Council for Graduate Medical Education. ACGME Common Program Requirements. 2017; http://www.acgme.org/What-We-Do/Accreditation/Common-Program-Requirements. Accessed March 7, 2019.
5. Penrose LL, Yeomans ER, Praderio C, Prien SD. An incremental approach to improving scholarly activity. J Grad Med Educ. 2012;4(4):496-499. https://doi.org/10.4300/JGME-D-11-00185.1.
6. Manring MM, Panzo JA, Mayerson JL. A framework for improving resident research participation and scholarly output. J Surg Educ. 2014;71(1):8-13. https://doi.org/10.1016/j.jsurg.2013.07.011.
7. Palacio A, Campbell DT, Moore M, Symes S, Tamariz L. Predictors of scholarly success among internal medicine residents. Am J Med. 2013;126(2):181-185. https://doi.org/10.1016/j.amjmed.2012.10.003.
8. Physician-Scientist Workforce Working Group. Physician-scientist workforce (PSW) report 2014. https://report.nih.gov/Workforce/PSW/challenges.aspx. Accessed December 27, 2018.
9. Straus SE, Johnson MO, Marquez C, Feldman MD. Characteristics of successful and failed mentoring relationships: a qualitative study across two academic health centers. Acad Med. 2013;88(1):82-89. https://doi.org/10.1097/ACM.0b013e31827647a0.
10. Altman DG, Goodman SN, Schroter S. How statistical expertise is used in medical research. JAMA. 2002;287(21):2817-2820. https://doi.org/10.1001/jama.287.21.2817.
11. Howell E, Kravet S, Kisuule F, Wright SM. An innovative approach to supporting hospitalist physicians towards academic success. J Hosp Med. 2008;3(4):314-318. https://doi.org/10.1002/jhm.327.
12. Kripalani S, Williams MV. Author responsibilities and disclosures at the Journal of Hospital Medicine. J Hosp Med. 2010;5(6):320-322. https://doi.org/10.1002/jhm.715.

Journal of Hospital Medicine. 2019;14(8):457-461. © 2019 Society of Hospital Medicine

Correspondence: Christy M. McKinney, PhD, MPH; E-mail: christy.mckinney@seattlechildrens.org; Telephone: 206-884-0584.

Nephrotoxin-Related Acute Kidney Injury and Predicting High-Risk Medication Combinations in the Hospitalized Child

Article Type
Changed
Sun, 08/04/2019 - 22:53

Acute kidney injury (AKI) is increasingly common in hospitalized patients,1,2 with recent adult and pediatric multinational studies reporting AKI rates of 57% and 27%, respectively.3,4 The development of AKI is associated with significant adverse outcomes, including an increased risk of mortality.5-7 For those who survive, a history of AKI may contribute to a lifetime of impaired health with chronic kidney disease.8,9 This is particularly concerning for pediatric patients, as AKI may affect morbidity for many decades, influence the therapies available for these morbidities, and ultimately contribute to a shortened lifespan.10

AKI in the hospitalized patient is no longer accepted as an unfortunate and unavoidable consequence of illness or the indicated therapy. There is currently strong interest in this hospital-acquired condition, with global initiatives aimed at increased prevention and earlier detection and treatment of AKI.11,12 To this end, risk-stratification tools or prediction models could assist clinicians in decision making. Numerous studies have tested AKI prediction models, either in particular high-risk populations or based on associated comorbidities, biomarkers, and critical illness scores. These studies are predominantly in adult populations, and few have been externally validated.13 While associations between certain medications and AKI are well known, an AKI prediction model that is based on medication exposure and applicable to pediatric or adult populations is difficult to develop. However, there is growing recognition of the potential to build such a model using the electronic health record (EHR).14

In 2013, Seattle Children’s Hospital (SCH) implemented a nephrotoxin and AKI detection system within the EHR to assist in clinical decision making. This system instituted the automatic ordering of serum creatinines to screen for AKI whenever the provider ordered three or more medications suspected to be nephrotoxic. Other clinical factors, such as diagnoses or preexisting conditions, were not considered in the decision-tool algorithm. This original algorithm (Algorithm 1) was later modified, and the list of suspected nephrotoxins was expanded (Table 1), in order to align with a national pediatric AKI collaborative (Algorithm 2). However, it was unclear whether the algorithm modification would improve AKI detection.
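
To make the screening rule concrete, the logic of Algorithm 1 can be sketched as follows. This is an illustrative reconstruction rather than the hospital’s production logic: the function is hypothetical, and only a few entries from the 40-drug nephrotoxin list (Table 1) are shown.

```python
# Sketch of Algorithm 1's trigger: an automatic serum creatinine order
# fires when three or more suspected nephrotoxins are ordered concomitantly.
# The medication names below are a small subset of the published list.
SUSPECTED_NEPHROTOXINS = {
    "vancomycin", "piperacillin-tazobactam", "ibuprofen",
    "ketorolac", "aspirin", "enalapril",
}

def should_order_creatinine(active_meds, threshold=3):
    """Return True when concomitant nephrotoxin exposure meets the alert threshold."""
    return len(set(active_meds) & SUSPECTED_NEPHROTOXINS) >= threshold

print(should_order_creatinine(["vancomycin", "ibuprofen"]))               # False
print(should_order_creatinine(["vancomycin", "ibuprofen", "ketorolac"]))  # True
```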



The present study had two objectives. The first was to evaluate the impact of the modifications on the sensitivity and specificity of our system. The second objective, if either the sensitivity or specificity was determined to be suboptimal, was to develop an improved model for nephrotoxin-related AKI detection. Having either the sensitivity or the specificity under 50% would be equivalent to or worse than a random guess, which we would consider unacceptable.

METHODS

Context

SCH is a tertiary care academic teaching hospital affiliated with the University of Washington School of Medicine, Harborview Medical Center, and the Seattle Cancer Care Alliance. The hospital has 371 licensed beds and approximately 18 medical subspecialty services.

Study Population

This was a retrospective cohort study examining all patients ages 0-21 years admitted to SCH between December 1, 2013 and November 30, 2015. The detection system was modified in November 2014 to align with the national pediatric AKI collaborative, Nephrotoxic Injury Negated by Just-in-Time Action (NINJA). Both acute care and intensive care patients were included (data were not separated by location). Patients with end-stage kidney disease who were receiving dialysis were excluded from the analysis, as were patients evaluated in the emergency department without being admitted and patients admitted under observation status. Patients were also excluded if they did not have a baseline serum creatinine, as defined below.

Study Measures

AKI is defined at SCH using the Kidney Disease: Improving Global Outcomes (KDIGO) Stage 1 criteria as a guideline. The diagnosis of AKI is based on an increase in the serum creatinine of 0.3 mg/dL or more above baseline, or an increase to more than 1.5 times the baseline provided the incoming creatinine is 0.5 mg/dL or higher. For our definition, the increase in serum creatinine must have occurred within a one-week timeframe, and urine output is not a diagnostic criterion.15 Baseline serum creatinine is defined as the lowest serum creatinine in the previous six months. Forty medications were classified as nephrotoxins based on a previous analysis16 and adapted to our institutional formulary.
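
A minimal sketch of this creatinine-based criterion, assuming values in mg/dL; the function name is hypothetical, and the caller is responsible for supplying the six-month baseline and enforcing the one-week window.

```python
def meets_aki_definition(baseline_cr, current_cr):
    """Institutional AKI definition (KDIGO Stage 1 as a guideline): a rise of
    at least 0.3 mg/dL above baseline, or a rise to more than 1.5x baseline
    when the incoming creatinine is at least 0.5 mg/dL. The baseline should be
    the lowest creatinine of the prior six months, and the rise must have
    occurred within one week (both enforced by the caller)."""
    absolute_rise = (current_cr - baseline_cr) >= 0.3
    relative_rise = (current_cr > 1.5 * baseline_cr) and (current_cr >= 0.5)
    return absolute_rise or relative_rise

# Example: a baseline of 0.3 mg/dL rising to 0.5 mg/dL meets the relative criterion.
print(meets_aki_definition(0.3, 0.5))  # True
```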

Statistical Analysis

To evaluate the efficacy of our systems in detecting nephrotoxin-related AKI, the sensitivity and specificity of both the original algorithm (Algorithm 1) and the modified algorithm (Algorithm 2) were calculated on our complete data set. To test sensitivity, we identified the proportion of AKI patients who would trigger an alert under Algorithm 1 and then under Algorithm 2. Similarly, to test specificity, we identified the proportion of non-AKI patients who did not trigger an alert. The differences in sensitivity and specificity between the two algorithms were evaluated using two-sample tests of proportion.
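
As an illustration of this comparison (the study’s analyses were run in Stata and R; the Python sketch below uses invented placeholder counts, not study data), a two-sample test of proportions might look like this:

```python
# Hedged sketch: comparing the proportion of AKI encounters flagged by
# Algorithm 1 vs Algorithm 2 with a two-sample test of proportions.
from statsmodels.stats.proportion import proportions_ztest

alerts_triggered = [469, 433]  # AKI encounters flagged under each algorithm (placeholders)
aki_encounters = [1000, 1000]  # AKI encounters evaluated under each algorithm

z_stat, p_value = proportions_ztest(count=alerts_triggered, nobs=aki_encounters)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```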

The statistical method of combinatorial inference has been utilized in studies of cancer biology17 and in genomics.18 A variation of this approach was used in this study to identify the specific medication combinations most associated with AKI. First, all of the nephrotoxic medications and medication combinations prescribed during our study period were identified from a data set (ie, a training set) containing 75% of all encounters, selected at random without replacement. Using this training set, we determined the prevalence of each medication combination and the rate of AKI associated with each combination. The predicted overall AKI risk of an individual medication is the average of the AKI rates associated with each combination containing that specific medication; the prevalence of each medication combination was also incorporated into the predicted AKI risk.
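
A minimal sketch of that calculation is shown below; the replication code published by the authors (in the repository cited at the end of this section) is authoritative, so treat the pandas translation, the toy data, and the prevalence-weighted averaging here as assumptions.

```python
# Group training encounters by the exact set of nephrotoxins ordered, compute
# each combination's AKI rate and prevalence, then average the rates (weighted
# by prevalence) across all combinations containing a given medication.
import pandas as pd

train = pd.DataFrame({
    "meds": [frozenset({"vancomycin", "piperacillin-tazobactam"}),
             frozenset({"vancomycin", "piperacillin-tazobactam"}),
             frozenset({"vancomycin"}),
             frozenset({"ibuprofen"})],
    "aki":  [1, 0, 0, 0],  # 1 = encounter met the AKI definition
})

by_combo = train.groupby("meds")["aki"].agg(n="size", aki_rate="mean").reset_index()
by_combo["prevalence"] = by_combo["n"] / len(train)

def predicted_medication_risk(med):
    """Prevalence-weighted mean AKI rate over combinations containing `med`."""
    rows = by_combo[by_combo["meds"].apply(lambda combo: med in combo)]
    return (rows["aki_rate"] * rows["prevalence"]).sum() / rows["prevalence"].sum()

print(predicted_medication_risk("vancomycin"))  # 1/3 with these toy data
```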

To test our model’s predictive capability, the algorithm was applied to the remaining 25% of the total patient data (ie, the test set). The predicted AKI risk was compared with the actual AKI rate in the test data set, and the model’s predictive capability was summarized with a receiver operating characteristic (ROC) analysis. The goal was to achieve an area under the ROC curve (AUC) approaching one, as this would reflect 100% sensitivity and 100% specificity, whereas an AUC of 0.5 would represent a random guess (a 50% chance of being correct).

Lastly, we used the model’s ROC curve to determine an optimal threshold of predicted AKI risk at which to trigger an alert. This threshold was chosen to increase our surveillance system’s sensitivity while maintaining an acceptable specificity.
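
A sketch of the ROC analysis and threshold selection, using scikit-learn on synthetic stand-in data (the 74% sensitivity target mirrors the Results below; the simulated scores and the exact selection rule are assumptions):

```python
# Illustrative ROC/threshold selection. In the study, y_score would be each
# test-set encounter's predicted AKI risk and y_true its observed AKI status;
# both are simulated here so the example runs end to end.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(seed=42)
y_true = rng.binomial(1, 0.1, size=2000)
y_score = np.clip(rng.normal(0.06, 0.04, size=2000) + 0.08 * y_true, 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {auc(fpr, tpr):.3f}")

# roc_curve returns thresholds in decreasing order, so the first index at
# which sensitivity (tpr) reaches the target is the highest qualifying cutoff.
target_sensitivity = 0.74
idx = int(np.argmax(tpr >= target_sensitivity))
print(f"alert threshold = {thresholds[idx]:.3f}, "
      f"sensitivity = {tpr[idx]:.2f}, specificity = {1 - fpr[idx]:.2f}")
```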

An a priori threshold of P = .05 was used to determine statistical significance of all results. Analyses were conducted in Stata 12.1 (StataCorp LP, College Station, Texas) and R 3.3.2 (R Foundation for Statistical Computing, Vienna, Austria). A sample data set containing replication code for our model can be found in an online repository (https://dataverse.harvard.edu/dataverse/chuan). This study was approved by the Seattle Children’s Institutional Review Board.

RESULTS

Sensitivity and Specificity

A total of 14,779 patient encounters were eligible during the study period. The sensitivity of the system’s ability to identify nephrotoxin-related AKI decreased from 46.9% using Algorithm 1 to 43.3% using Algorithm 2, a change of 3.6% (P = .22). The specificity increased from 73.6% to 89.3%, a change of 15.7% (P < .001; Table 2).

Improvement of Our Nephrotoxin-Related AKI Detection System Using a Novel AKI Prediction Strategy

A total of 838 medication combinations were identified in our training set and the predicted AKI risk for every medication combination was determined. By comparing the predicted risk of AKI to the actual AKI occurrence, an ROC curve with an AUC of 0.756 (Figure) was generated. An increase in system sensitivity was prioritized when determining the optimal AKI risk at which the model would trigger an alert. Setting an alert threshold at a predicted AKI risk of >8%, our model performed with a sensitivity of 74% while decreasing the specificity to 70%.

Identification of High-Risk Nephrotoxic Medications and Medication Combinations

Approximately 200 medication combinations were associated with a >8% AKI risk, our new AKI prediction model’s alert threshold. Medication combinations consisting of up to 11 concomitantly prescribed medications were present in our data set; however, many of these combinations were infrequently prescribed. Further analysis, conducted to increase the clinical relevance of our findings, identified 10 medications or medication combinations that were both associated with a predicted AKI risk of >8% and prescribed, on average, more than twice a month (Table 3).

DISCUSSION

The nephrotoxin-related AKI detection system at SCH automatically places orders for serum creatinines on patients who have met criteria for concomitant nephrotoxin exposure. This has given us a robust database from which to develop our clinical decision-making tool. Both our original and updated systems were based on the absolute number of concomitant nephrotoxic medications prescribed.16 This is a reasonable approach given the complexity of building a surveillance system19 and resource limitations. However, a system based on observed rather than theoretical or in vitro data, adaptable to the institution, and designed for ongoing refinement would be more valuable.

The interest in AKI prediction tools continues to be high. Bedford et al. employed numerous variables and diagnostic codes to predict the development of AKI in adults during hospitalization. They produced a prediction model with a reasonable fit (AUC 0.72) for identifying patients at higher risk for AKI but were less successful in their attempts to predict progression to severe AKI.20 Hodgson et al. recently developed an adult AKI prediction score (AUC 0.65-0.72), also based on numerous clinical factors, that was able to positively impact inpatient mortality.21 To our knowledge, our model is unique in that it focuses on nephrotoxins, using a predicted AKI risk algorithm based on the observed AKI rates of previously ordered medications and medication combinations (two to 11 medications). A decision tool targeting medications gives the clinician guidance that can be used to make a specific intervention, rather than identifying a patient at risk because of a diagnosis code or other difficult-to-modify factors.

There are abundant case studies and reports using logistic regression models to identify specific medications associated with AKI. Our choice of methodology was based on our assessment that logistic regression models would be inadequate for the development of a real-time clinical decision-making tool for several reasons. Using logistic regression to explore every medication combination on our medication list would be challenging, as there are approximately 5.5 × 10¹⁰ potential medication combinations. Additionally, logistic regression ignores any potential interactions between the medications. This is an important point, as medication interactions can be synergistic, neutral, or antagonistic; consequently, the outcome generated from a set of combined variables may differ from one generated from the sum of each variable taken independently. Logistic regression also does not account for prescribing trends among providers, as it assumes that all medications or medication combinations are equally available at the same time. In practice, depending on numerous factors such as hospital culture (eg, the presence of clinical standard work pathways), local bacterial resistance patterns, or medication shortages, certain medication combinations may occur more frequently and others not at all. Finally, logistic regression cannot account for the likelihood of a medication combination actually occurring and may therefore flag a combination strongly associated with AKI that is rarely prescribed.

We theorized that AKI detection would improve with the Algorithm 2 modifications, including the expanded nephrotoxin list, which accompanied alignment with the national pediatric AKI collaborative, NINJA. The finding that our surveillance sensitivity did not improve with this system update supported our subsequent objective: to develop a novel nephrotoxin-related AKI decision tool or detection system using our EHR data to identify which specific medications and/or medication combinations were associated with a higher rate of AKI. However, it should be noted that two factors related to measurement bias introduce limitations to our sensitivity and specificity analyses. First, our system orders serum creatinines on patients once they have been exposed to nephrotoxins. Consequently, the proportion of patients with measured creatinines increases among nephrotoxin-exposed patients, whereas unexposed patients may have AKI that goes undetected because creatinines may not be ordered. There is therefore the potential for a relative increase in AKI detection among nephrotoxin-exposed patients compared with unexposed patients, which would affect the measured sensitivity and specificity of the alert. Second, the automated alerts require a baseline creatinine in order to trigger and are therefore unable to identify AKI among patients who do not have a baseline serum creatinine measurement.

Our new nephrotoxin-related AKI detection model performed best when an alert was triggered for those medications or medication combinations with a predicted AKI risk of >8%. Forty-six medication combinations consisting of exactly two medications were determined to have a predicted AKI risk of >8% and therefore would trigger an alert in our new model system. These combinations would not have triggered an alert under either of the previous system algorithms, as both were based on the presence of three or more concomitant nephrotoxic medications.

From the list of suspected nephrotoxins, we identified 11 unique medications in 10 different combinations with a predicted AKI risk of >8% that were prescribed frequently (at least twice a month on average; Table 3). Notably, six of the 10 medication combinations involved vancomycin, and piperacillin-tazobactam was also represented in several combinations. These findings support the concern that others have reported regarding these two medications, particularly when prescribed together.22,23



Interestingly, enalapril was identified as a higher-risk medication both alone and in combination with another medication. We do not suspect that enalapril carries a higher risk of increasing a patient’s serum creatinine than other angiotensin-converting enzyme (ACE) inhibitors. Rather, we suspect that in our hospitalized patients, this relatively short-acting ACE inhibitor is commonly used in several of our vulnerable populations, such as cardiac and bone marrow transplant patients.

The alert threshold of our model can be adjusted to increase either the sensitivity or the specificity of AKI detection. Our detection sensitivity increased more than 1.5-fold with the alert trigger threshold set at a predicted AKI risk of >8%. As a screening tool, the alert threshold could be set so that sensitivity would be even greater; however, balancing the potential for alert fatigue is important in determining the acceptance and, ultimately, the success of a working surveillance system.24

A patient’s overall risk of AKI is influenced by many factors, such as the presence of underlying chronic comorbidities and the nature or severity of the acute illness, as these may affect the patient’s intravascular volume status, systemic blood pressures, or drug metabolism. Our study is limited in that we are a children’s hospital, and our patients may have fewer comorbidities than are seen in the adult population. One could argue that this permits a perspective not clouded by the confounders of chronic disease and allows the effect of the prescribed medications to be more apparent. However, our study includes critically ill patients and patients who may have been hemodynamically unstable. This may explain why the NINJA algorithm did not improve the sensitivity of our AKI detection, as the NINJA collaborative excludes critically ill patients.

Dose and dosing frequency of the prescribed medications could not be taken into account, which could explain the finding that nonsteroidal anti-inflammatory drugs (NSAIDs) such as aspirin, ibuprofen, or ketorolac, when used alone, were associated with a low (<1%) rate of AKI despite being frequently prescribed. Additionally, because many providers are aware of the AKI risk of NSAIDs, these medications may have been used intermittently (as needed), in select, perhaps healthier, patients, or in patients who take these medications chronically and were admitted for reasons that did not alter their outpatient medication regimen.

Our study also reflects the prescribing habits of our institution and may not be directly applicable to nontertiary care hospitals or to centers that do not have large cystic fibrosis, bone marrow transplant, or solid organ transplant populations. Despite these limitations, we feel that several findings are relevant across centers and populations. Our data were derived from the systematic ordering of daily serum creatinines when a patient is at risk for nephrotoxin-related AKI. This is in step with the philosophy, advocated by others, that AKI identification can only occur if providers are aware of this risk and are vigilant.25 In this vigilance, we also recognize that not all risks are of the same magnitude and may not deserve the same attention when resources are limited. Identifying the medication combinations most associated with AKI at our institution has helped us narrow our focus and identify specific areas for potential education and intervention. The specific combinations identified may also be relevant to similar institutions serving similarly complex patients. Those with dissimilar populations could use this methodology to identify the medication combinations most relevant for their patient population and their prescribers’ habits. More studies of this type would benefit the medical community as a whole, as certain medication combinations may prove to be high risk regardless of the institution and the age or demographics of the populations served.

Acknowledgments

Dr. Karyn E. Yonekawa conceptualized and designed the study, directed the data analysis, interpreted the data, and drafted, revised, and gave final approval of the manuscript. Dr. Chuan Zhou contributed to the study design, acquired data, conducted the data analysis, critically reviewed the manuscript, and gave final approval. Ms. Wren L. Haaland contributed to the study design, acquired data, conducted the data analysis, critically reviewed the manuscript, and gave final approval. Dr. Davene R. Wright contributed to the study design and data analysis, critically reviewed and revised the manuscript, and gave final approval.

The authors would like to thank Holly Clifton and Suzanne Spencer for their assistance with data acquisition and Drs. Derya Caglar, Corrie McDaniel, and Thida Ong for their writing support.

All authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.

Disclosures

The authors have no conflicts of interest to report.

Files
References

1. Siew ED, Davenport A. The growth of acute kidney injury: a rising tide or just closer attention to detail? Kidney Int. 2015;87(1):46-61.  https://doi.org/10.1038/ki.2014.293.
2. Matuszkiewicz-Rowinska J, Zebrowski P, Koscielska M, Malyszko J, Mazur A. The growth of acute kidney injury: Eastern European perspective. Kidney Int. 2015;87(6):1264.
https://doi.org/10.1038/ki.2015.61.
3. Hoste EA, Bagshaw SM, Bellomo R, et al. Epidemiology of acute kidney injury in critically ill patients: the multinational AKI-EPI study. Intensive Care Med. 2015;41(8):1411-1423. https://doi.org/10.1007/s00134-015-3934-7.
4. Kaddourah A, Basu RK, Bagshaw SM, Goldstein SL, AWARE Investigators. Epidemiology of acute kidney injury in critically ill children and young adults. N Engl J Med. 2017;376(1):11-20. https://doi.org/10.1056/NEJMoa1611391.
5. Soler YA, Nieves-Plaza M, Prieto M, Garcia-De Jesus R, Suarez-Rivera M. Pediatric risk, injury, failure, loss, end-stage renal disease score identifies acute kidney injury and predicts mortality in critically ill children: a prospective study. Pediatr Crit Care Med. 2013;14(4):e189-e195.
https://doi.org/10.1097/PCC.0b013e3182745675.
6. Case J, Khan S, Khalid R, Khan A. Epidemiology of acute kidney injury in the intensive care unit. Crit Care Res Pract. 2013;2013:479730. https://doi.org/10.1155/2013/479730.
7. Rewa O, Bagshaw SM. Acute kidney injury-epidemiology, outcomes and economics. Nat Rev Nephrol. 2014;10(4):193-207. https://doi.org/10.1038/nrneph.2013.282.
8. Hsu RK, Hsu CY. The role of acute kidney injury in chronic kidney disease. Semin Nephrol. 2016;36(4):283-292. https://doi.org/10.1016/j.semnephrol.2016.05.005.
9. Menon S, Kirkendall ES, Nguyen H, Goldstein SL. Acute kidney injury associated with high nephrotoxic medication exposure leads to chronic kidney disease after 6 months. J Pediatr. 2014;165(3):522-527.https://doi.org/10.1016/j.jpeds.2014.04.058.
10. Neild GH. Life expectancy with chronic kidney disease: an educational review. Pediatr Nephrol. 2017;32(2):243-248. https://doi.org/10.1007/s00467-016-3383-8.
11. Kellum JA. Acute kidney injury: AKI: the myth of inevitability is finally shattered. Nat Rev Nephrol. 2017;13(3):140-141. https://doi.org/10.1038/nrneph.2017.11.
12. Mehta RL, Cerda J, Burdmann EA, et al. International Society of Nephrology’s 0by25 initiative for acute kidney injury (zero preventable deaths by 2025): a human rights case for nephrology. Lancet. 2015;385(9987):2616-2643. https://doi.org/10.106/S0140-6736(15)60126-X.13.
13. Hodgson LE, Sarnowski A, Roderick PJ, Dimitrov BD, Venn RM, Forni LG. Systematic review of prognostic prediction models for acute kidney injury (AKI) in general hospital populations. BMJ Open. 2017;7(9):e016591. https://doi.org/10.1136/bmjopen-2017-016591.
14. Sutherland SM. Electronic health record-enabled big-data approaches to nephrotoxin-associated acute kidney injury risk prediction. Pharmacotherapy. 2018;38(8):804-812. https://doi.org/10.1002/phar.2150.
15. KDIGO Work Group. KDIGO clinical practice guidelines for acute kidney injury. Kidney Int Suppl. 2012;2(1):S1-138. PubMed
16. Moffett BS, Goldstein SL. Acute kidney injury and increasing nephrotoxic-medication exposure in noncritically-ill children. Clin J Am Soc Nephrol. 2011;6(4):856-863. https://doi.org/10.2215/CJN.08110910.
17. Mukherjee S, Pelech S, Neve RM, et al. Sparse combinatorial inference with an application in cancer biology. Bioinformatics. 2009;25(2):265-271. https://doi.org/10.1093/bioinformatics/btn611.
18. Bailly-Bechet M, Braunstein A, Pagnani A, Weigt M, Zecchina R. Inference of sparse combinatorial-control networks from gene-expression data: a message passing approach. BMC Bioinformatics. 2010;11:355. https://doi.org/10.1186/1471-2105-11-355.
19. Kirkendall ES, Spires WL, Mottes TA, et al. Development and performance of electronic acute kidney injury triggers to identify pediatric patients at risk for nephrotoxic medication-associated harm. Appl Clin Inform. 2014;5(2):313-333. https://doi.org/10.4338/ACI-2013-12-RA-0102.
20. Bedford M, Stevens P, Coulton S, et al. Development of Risk Models for the Prediction of New or Worsening Acute Kidney Injury on or During Hospital Admission: A Cohort and Nested Study. Southampton, UK: NIHR Journals Library; 2016. PubMed
21. Hodgson LE, Roderick PJ, Venn RM, Yao GL, Dimitrov BD, Forni LG. The ICE-AKI study: impact analysis of a clinical prediction rule and electronic AKI alert in general medical patients. PLoS One. 2018;13(8):e0200584. https://doi.org/10.1371/journal.pone.0200584.
22. Hammond DA, Smith MN, Li C, Hayes SM, Lusardi K, Bookstaver PB. Systematic review and meta-analysis of acute kidney injury associated with concomitant vancomycin and piperacillin/tazobactam. Clin Infect Dis. 2017;64(5):666-674. https://doi.org/10.1093/cid/ciw811.
23. Downes KJ, Cowden C, Laskin BL, et al. Association of acute kidney injury with concomitant vancomycin and piperacillin/tazobactam treatment among hospitalized children. JAMA Pediatr. 2017;171(12):e173219.https://doi.org/10.1001/jamapediatrics.2017.3219.
24. Agency for Heathcare Research and Quality. Alert Fatigue Web site. https://psnet.ahrq.gov/primers/primer/28/alert-fatigue. Updated July 2016. Accessed April 14, 2017.
25. Downes KJ, Rao MB, Kahill L, Nguyen H, Clancy JP, Goldstein SL. Daily serum creatinine monitoring promotes earlier detection of acute kidney injury in children and adolescents with cystic fibrosis. J Cyst Fibros. 2014;13(4):435-441. https://doi.org/10.1016/j.jcf.2014.03.005.

Article PDF
Issue
Journal of Hospital Medicine 14(8)
Publications
Topics
Page Number
462-467
Sections
Files
Files
Article PDF
Article PDF

Acute kidney injury (AKI) is increasingly common in the hospitalized patient,1,2 with recent adult and pediatric multinational studies reporting AKI rates of 57% and 27%, respectively.3,4 The development of AKI is associated with significant adverse outcomes, including an increased risk of mortality.5-7 For those who survive, a history of AKI may contribute to a lifetime of impaired health with chronic kidney disease.8,9 This is particularly concerning for pediatric patients, as AKI may affect morbidity for many decades, influence the therapies available for those morbidities, and ultimately contribute to a shortened lifespan.10

AKI in the hospitalized patient is no longer accepted as an unfortunate and unavoidable consequence of illness or of the indicated therapy. There is currently strong interest in this hospital-acquired condition, with global initiatives aimed at increased prevention, early detection, and treatment of AKI.11,12 Toward this objective, risk stratification tools or prediction models could assist clinicians in decision making. Numerous studies have tested AKI prediction models, either in particular high-risk populations or based on associated comorbidities, biomarkers, and critical illness scores. These studies are predominantly in adult populations, and few have been externally validated.13 While associations between certain medications and AKI are well known, an AKI prediction model that is based on medication exposure and applicable to pediatric or adult populations has proven difficult to develop. However, there is growing recognition of the potential to build such a model using the electronic health record (EHR).14

In 2013, Seattle Children’s Hospital (SCH) implemented a nephrotoxin and AKI detection system to assist in clinical decision making within the EHR. This system instituted the automatic ordering of serum creatinines to screen for AKI when the provider ordered three or more medications that were suspected to be nephrotoxic. Other clinical factors such as the diagnoses or preexisting conditions were not considered in the decision-tool algorithm. This original algorithm (Algorithm 1) was later modified and the list of suspected nephrotoxins was expanded (Table 1) in order to align with a national pediatric AKI collaborative (Algorithm 2). However, it was unclear whether the algorithm modification would improve AKI detection.



The present study had two objectives. The first was to evaluate the impact of the modifications on the sensitivity and specificity of our system. The second objective, if either the sensitivity or specificity was determined to be suboptimal, was to develop an improved model for nephrotoxin-related AKI detection. Having either the sensitivity or the specificity under 50% would be equivalent to or worse than a random guess, which we would consider unacceptable.

METHODS

Context

SCH is a tertiary care academic teaching hospital affiliated with the University of Washington School of Medicine, Harborview Medical Center, and the Seattle Cancer Care Alliance. The hospital has 371 licensed beds and approximately 18 medical subspecialty services.


Study Population

This was a retrospective cohort study examining all patients ages 0-21 years admitted to SCH between December 1, 2013 and November 30, 2015. The detection system was modified in November 2014 to align with the national pediatric AKI collaborative, Nephrotoxic Injury Negated by Just-in-Time Action (NINJA). Both acute care and intensive care patients were included (data were not separated by location). Patients with end-stage kidney disease who were receiving dialysis were excluded from analysis, as were patients evaluated in the emergency department without being admitted and patients admitted under observation status. Patients were also excluded if they did not have a baseline serum creatinine as defined below.

Study Measures

AKI is defined at SCH using the Kidney Disease: Improving Global Outcomes (KDIGO) Stage 1 criteria as a guideline. The diagnosis of AKI is based on an increase in the serum creatinine of at least 0.3 mg/dL above baseline, or an increase to >1.5 times the baseline provided the incoming creatinine is 0.5 mg/dL or higher. For our definition, the increase in serum creatinine must have occurred within a one-week timeframe, and urine output is not a diagnostic criterion.15 Baseline serum creatinine is defined as the lowest serum creatinine in the previous six months. Forty medications were classified as nephrotoxins based on a previous analysis16 and adapted to our institutional formulary.
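
For illustration, the base R sketch below flags AKI from a single patient's creatinine history under one plausible reading of this definition. It is not the study's actual code: the function and variable names (flag_aki, scr, scr_date) are hypothetical, and the handling of the one-week window is an assumption.

```r
# Hypothetical sketch of the institutional AKI rule described above.
# scr: creatinine values (mg/dL) for one patient; scr_date: Date of each draw.
flag_aki <- function(scr, scr_date) {
  sapply(seq_along(scr), function(i) {
    prior <- scr_date < scr_date[i] & scr_date >= scr_date[i] - 182
    if (!any(prior)) return(NA)            # no baseline: encounter excluded
    baseline <- min(scr[prior])            # lowest creatinine, prior 6 months
    week_min <- suppressWarnings(min(scr[prior & scr_date >= scr_date[i] - 7]))
    abs_rise <- scr[i] - baseline >= 0.3                  # +0.3 mg/dL criterion
    rel_rise <- scr[i] >= 0.5 && scr[i] > 1.5 * baseline  # >1.5x criterion
    # the rise must have occurred within the one-week timeframe
    (abs_rise || rel_rise) && is.finite(week_min) && scr[i] > week_min
  })
}

flag_aki(c(0.3, 0.3, 0.7), as.Date(c("2015-01-01", "2015-03-01", "2015-03-04")))
#> NA FALSE TRUE  (0.7 is a >=0.3 mg/dL rise over the 6-month baseline of 0.3)
```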

Statistical Analysis

To evaluate the efficacy of our systems in detecting nephrotoxin-related AKI, the sensitivity and specificity of both our original algorithm (Algorithm 1) and the modified algorithm (Algorithm 2) were computed on our complete data set. To estimate sensitivity, we identified the proportion of AKI patients who would trigger an alert under Algorithm 1 and then under Algorithm 2. Similarly, to estimate specificity, we identified the proportion of non-AKI patients who did not trigger an alert under each surveillance system. The differences in sensitivity and specificity between the two algorithms were evaluated using two-sample tests of proportion.
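
In base R, a two-sample test of proportions is a single call to prop.test. The encounter counts below are placeholders (only the percentages are reported in the Results), so this reproduces the method rather than the published P values.

```r
# Hypothetical counts: suppose n_aki AKI encounters were evaluated under each
# algorithm; only the sensitivities (46.9% vs 43.3%) come from the study.
n_aki   <- 800
alerted <- round(c(alg1 = 0.469, alg2 = 0.433) * n_aki)
prop.test(alerted, c(n_aki, n_aki))   # two-sample test of equal proportions
```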

The statistical method of Combinatorial Inference has been utilized in studies of cancer biology17 and in genomics.18 A variation of this approach was used in this study to identify the specific medication combinations most associated with AKI. First, all of the nephrotoxic medications and medication combinations that were prescribed during our study period were identified from a data set (ie, a training set) containing 75% of all encounters selected at random without replacement. Using this training set, the prevalence of each medication combination and the rate of AKI associated with each combination were identified. The predicted overall AKI risk of an individual medication is the average of all the AKI rates associated with each combination containing that specific medication. Also incorporated into the determination of the predicted AKI risk was the prevalence of that medication combination.
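
A minimal base R sketch of this estimation step follows. The toy data frame enc, its column names, and the rows are invented for illustration, and weighting by combination prevalence is our reading of how prevalence was "incorporated"; treat this as one plausible implementation, not the authors' exact algorithm.

```r
# Toy encounter-level data (hypothetical names and rows, for illustration only):
enc <- data.frame(
  combo = c("vancomycin+piperacillin-tazobactam", "vancomycin+furosemide",
            "ibuprofen", "vancomycin+piperacillin-tazobactam", "enalapril",
            "vancomycin+furosemide"),
  aki = c(TRUE, FALSE, FALSE, TRUE, FALSE, TRUE)
)
# In the study these rates are computed on a 75% training split, eg:
# train <- enc[sample(nrow(enc), floor(0.75 * nrow(enc))), ]

# AKI rate and prevalence of every observed medication combination.
stats <- aggregate(aki ~ combo, data = enc,
                   FUN = function(x) c(rate = mean(x), n = length(x)))
stats <- do.call(data.frame, stats)       # flatten aggregate's matrix column
names(stats) <- c("combo", "aki_rate", "n")
stats$prevalence <- stats$n / nrow(enc)

# Predicted risk for one medication: the average of the AKI rates of every
# combination containing it, weighted here (our assumption) by prevalence.
med_risk <- function(med, stats) {
  hit <- grepl(med, stats$combo, fixed = TRUE)
  weighted.mean(stats$aki_rate[hit], w = stats$prevalence[hit])
}
med_risk("vancomycin", stats)
#> 0.75 (combination rates of 1.0 and 0.5, equally prevalent in the toy data)
```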

To test our model’s predictive capability, the algorithm was applied to the remaining 25% of the total patient data (ie, the test set). The predicted AKI risk was compared with the actual AKI rate in the test data set. Our model’s predictive capability was represented in a receiver operator characteristic (ROC) analysis. The goal was to achieve an area under the ROC curve (AUC) approaching one as this would reflect 100% sensitivity and 100% specificity, whereas an AUC of 0.5 would represent a random guess (50% chance of being correct).
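
The ROC analysis itself needs no special tooling; a dependency-free base R sketch is shown below, where score holds the predicted AKI risks on the held-out 25% and outcome the observed AKI indicator (both names are ours). The AUC uses the Mann-Whitney formulation, which equals the area under the empirical ROC curve.

```r
# AUC via the Mann-Whitney statistic: the probability that a randomly chosen
# AKI encounter receives a higher predicted risk than a non-AKI encounter.
auc <- function(score, outcome) {
  r  <- rank(score)                 # midranks handle tied scores
  n1 <- sum(outcome); n0 <- sum(!outcome)
  (sum(r[outcome]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

# Sensitivity/specificity at each candidate threshold (the ROC points).
roc_points <- function(score, outcome) {
  thr <- sort(unique(score), decreasing = TRUE)
  t(sapply(thr, function(th) c(threshold   = th,
                               sensitivity = mean(score[outcome] >= th),
                               specificity = mean(score[!outcome] < th))))
}

set.seed(2)
score   <- runif(100)          # hypothetical predicted risks
outcome <- runif(100) < score  # hypothetical AKI indicator
auc(score, outcome)            # area under the empirical ROC curve
```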

Lastly, we used our model's ROC curve to determine an optimal AKI risk threshold at which to trigger an alert. This predicted risk threshold reflected our goal of increasing the surveillance system's sensitivity while maintaining an acceptable specificity.
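
That selection can be done by scanning candidate cutoffs from the most restrictive downward and stopping at the first one that meets a target sensitivity; the sketch below mirrors this "sensitivity first, acceptable specificity" goal. The function name, the 74% target, and the score/outcome inputs are our assumptions, not the authors' code.

```r
# Pick the highest threshold whose sensitivity meets the target, then report
# the specificity that comes with it.
pick_threshold <- function(score, outcome, min_sens = 0.74) {
  for (th in sort(unique(score), decreasing = TRUE)) {
    sens <- mean(score[outcome] >= th)
    if (sens >= min_sens)
      return(c(threshold = th, sensitivity = sens,
               specificity = mean(score[!outcome] < th)))
  }
  NULL  # no threshold meets the target
}
```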

An a priori threshold of P = .05 was used to determine statistical significance of all results. Analyses were conducted in Stata 12.1 (StataCorp LP, College Station, Texas) and R 3.3.2 (R Foundation for Statistical Computing, Vienna, Austria). A sample data set containing replication code for our model can be found in an online repository (https://dataverse.harvard.edu/dataverse/chuan). This study was approved by the Seattle Children’s Institutional Review Board.


RESULTS

Sensitivity and Specificity

A total of 14,779 patient encounters were eligible during the study period. The sensitivity of the system's ability to identify nephrotoxin-related AKI decreased from 46.9% using Algorithm 1 to 43.3% using Algorithm 2, a change of 3.6 percentage points (P = .22). The specificity increased from 73.6% to 89.3%, a change of 15.7 percentage points (P < .001; Table 2).

Improvement of Our Nephrotoxin-Related AKI Detection System Using a Novel AKI Prediction Strategy

A total of 838 medication combinations were identified in our training set, and the predicted AKI risk for every combination was determined. Comparing the predicted risk of AKI with the actual AKI occurrence yielded an ROC curve with an AUC of 0.756 (Figure). An increase in system sensitivity was prioritized when determining the optimal AKI risk at which the model would trigger an alert. With the alert threshold set at a predicted AKI risk of >8%, the model achieved a sensitivity of 74% at a specificity of 70%.

Identification of High-Risk Nephrotoxic Medications and Medication Combinations

Approximately 200 medication combinations were associated with >8% AKI risk, our new AKI prediction model's alert threshold. Medication combinations consisting of up to 11 concomitantly prescribed medications were present in our data set; however, many of these combinations were infrequently prescribed. Further analysis, conducted to increase the clinical relevance of our findings, identified 10 medications or medication combinations that were both associated with a predicted AKI risk of >8% and prescribed, on average, more than twice a month (Table 3).

DISCUSSION

The nephrotoxin-related AKI detection system at SCH automatically places orders for serum creatinines on patients who have met criteria for concomitant nephrotoxin exposure. This has given us a robust database from which to develop our clinical decision-making tool. Both our original and updated systems were based on the absolute number of concomitant nephrotoxic medications prescribed.16 This is a reasonable approach given the complexity of building a surveillance system19 and resource limitations. However, a system based on observed rather than theoretical or in vitro data, adaptable to the institution and designed for ongoing refinement, would be more valuable.

Interest in AKI prediction tools remains high. Bedford et al. employed numerous variables and diagnostic codes to predict the development of AKI in adults during hospitalization. They produced a prediction model with a reasonable fit (AUC 0.72) for identifying patients at higher risk for AKI but were less successful in predicting progression to severe AKI.20 Hodgson et al. recently developed an adult AKI prediction score (AUC 0.65-0.72), also based on numerous clinical factors, that was associated with improved inpatient mortality.21 To our knowledge, our model is unique in that it focuses on nephrotoxins, using a predicted AKI risk algorithm based on the observed AKI rates of previously ordered medications and medication combinations (two to 11 medications). A decision tool targeting medications gives the clinician guidance that can be acted on with a specific intervention, rather than flagging a patient at risk because of a diagnosis code or other difficult-to-modify factors.

There are abundant case studies and reports using logistic regression models to identify specific medications associated with AKI. Our choice of methodology was based on our assessment that logistic regression would be inadequate for developing a real-time clinical decision-making tool, for several reasons. Using logistic regression to explore every medication combination on our medication list would be challenging, as there are approximately 5.5 × 10¹⁰ potential combinations. Additionally, logistic regression ignores potential interactions between medications. This is an important point, as medication interactions can be synergistic, neutral, or antagonistic; consequently, the outcome generated from a set of combined variables may differ from the sum of each variable taken independently. Logistic regression also does not account for prescribing trends among providers, as it assumes that all medications or medication combinations are equally likely to be ordered at the same time. In practice, depending on numerous factors such as hospital culture (eg, the presence of clinical standard work pathways), local bacterial resistance patterns, or medication shortages, certain medication combinations occur frequently while others do not occur at all. Finally, logistic regression does not account for how often a given combination is actually prescribed; it may therefore identify a combination strongly associated with AKI that is rarely, if ever, ordered.
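
The scale of that search space is easy to verify with base R's choose(); the exact total depends on how combinations are enumerated, so the figures in the comments are illustrative rather than a re-derivation of the estimate above.

```r
choose(40, 3)          # 9,880 distinct three-drug combinations alone
sum(choose(40, 2:11))  # ~3.5e9 subsets of 2-11 drugs, the sizes observed here
sum(choose(40, 2:40))  # ~1.1e12 if all subset sizes are allowed
```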

We theorized that AKI detection would improve with the Algorithm 2 modifications, including the expanded nephrotoxin list, which accompanied alignment with the national pediatric AKI collaborative, NINJA. The finding that our surveillance sensitivity did not improve with this system update supported our subsequent objective: to develop a novel nephrotoxin-related AKI decision tool using our EHR data to identify which specific medications and/or medication combinations were associated with a higher rate of AKI. It should be noted, however, that two sources of measurement bias limit our sensitivity and specificity analyses. First, our system orders serum creatinines on patients once they have been exposed to nephrotoxins. Consequently, the proportion of patients with measured creatinines is higher among nephrotoxin-exposed patients, and unexposed patients may have AKI that goes undetected because creatinines were never ordered. This creates the potential for a relative increase in AKI detection among nephrotoxin-exposed patients compared with unexposed patients, which would affect the measured sensitivity and specificity of the alert. Second, the automated alerts require a baseline creatinine in order to trigger and are therefore unable to identify AKI among patients who lack a baseline serum creatinine measurement.

Our new nephrotoxin-related AKI detection model performed best when an alert was triggered for medications or medication combinations with a predicted AKI risk of >8%. Forty-six combinations consisting of exactly two medications met this threshold and therefore would trigger an alert in our new model. None of these combinations would have triggered an alert under either of the previous system algorithms, as both required the presence of three or more concomitant nephrotoxic medications.

From the list of suspected nephrotoxins, we identified 11 unique medications in 10 different combinations with a predicted AKI risk of >8% that were prescribed frequently (at least twice a month on average; Table 3). Notably, six of the 10 combinations involved vancomycin, and piperacillin-tazobactam was also represented in several. These findings support the concern that others have reported regarding these two medications, particularly when prescribed together.22,23



Interestingly, enalapril was identified as a higher-risk medication both alone and in combination with another medication. We do not suspect that enalapril carries a higher risk of increasing a patient's serum creatinine than other angiotensin-converting enzyme (ACE) inhibitors. Rather, we suspect that in our hospital this relatively short-acting ACE inhibitor is commonly used in several vulnerable populations, such as cardiac and bone marrow transplant patients.

The alert threshold of our model can be adjusted to increase either the sensitivity or the specificity of AKI detection. Our detection sensitivity increased by >1.5-fold with the alert trigger threshold set at a predicted AKI risk of >8%. As a screening tool, our alert limits could be set such that our sensitivity would be greater; however, balancing the potential for alert fatigue is important in determining the acceptance and, ultimately, the success of a working surveillance system.24

A patient's overall risk of AKI is influenced by many factors, such as the presence of underlying chronic comorbidities and the nature or severity of the acute illness, as these may affect the patient's intravascular volume status, systemic blood pressure, or drug metabolism. Our study is limited in that we are a children's hospital, and our patients may have fewer comorbidities than are seen in adult populations. One could argue that this permits a perspective not clouded by the confounders of chronic disease and allows the effect of the prescribed medications to be more apparent. However, our study includes critically ill patients and patients who may have been hemodynamically unstable. This may explain why the NINJA algorithm did not improve the sensitivity of our AKI detection, as the NINJA collaborative excludes critically ill patients.

Dose and dosing frequency of the prescribed medications could not be taken into account, which could explain the finding that nonsteroidal anti-inflammatory drugs (NSAIDs) such as aspirin, ibuprofen, or ketorolac, when used alone, were associated with a low (<1%) rate of AKI despite being frequently prescribed. Additionally, because many providers are aware of the AKI risk of NSAIDs, these medications may have been used intermittently (as needed), in select and perhaps healthier patients, or in patients who take them chronically and were admitted for reasons that did not alter their outpatient medication regimen.

Our study also reflects the prescribing habits of our institution and may not be directly applicable to nontertiary care hospitals or to centers without large cystic fibrosis, bone marrow transplant, or solid organ transplant populations. Despite these limitations, we believe several findings are relevant across centers and populations. Our data were derived from the systematic ordering of daily serum creatinines whenever a patient is at risk for nephrotoxin-related AKI. This is in step with the philosophy, advocated by others, that AKI identification can only occur if providers are aware of the risk and remain vigilant.25 In this vigilance, we also recognize that not all risks are of the same magnitude and that not all deserve the same attention when resources are limited. Identifying the medication combinations most associated with AKI at our institution has helped us narrow our focus and target specific areas for education and intervention. The specific combinations identified may also be relevant to similar institutions serving similarly complex patients. Centers with dissimilar populations could use this methodology to identify the medication combinations most relevant to their own patients and prescribers' habits. More studies of this type would benefit the medical community as a whole, as certain medication combinations may prove high risk regardless of the institution or the age and demographics of the population served.


Acknowledgments

Dr. Karyn E. Yonekawa conceptualized and designed the study, directed the data analysis, interpreted the data, and drafted, revised, and gave final approval of the manuscript. Dr. Chuan Zhou contributed to the study design, acquired data, conducted the data analysis, critically reviewed the manuscript, and gave final approval. Ms. Wren L. Haaland contributed to the study design, acquired data, conducted the data analysis, critically reviewed the manuscript, and gave final approval. Dr. Davene R. Wright contributed to the study design and data analysis, critically reviewed and revised the manuscript, and gave final approval.

The authors would like to thank Holly Clifton and Suzanne Spencer for their assistance with data acquisition and Drs. Derya Caglar, Corrie McDaniel, and Thida Ong for their writing support.

All authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.

Disclosures

The authors have no conflicts of interest to report.


References

1. Siew ED, Davenport A. The growth of acute kidney injury: a rising tide or just closer attention to detail? Kidney Int. 2015;87(1):46-61. https://doi.org/10.1038/ki.2014.293.
2. Matuszkiewicz-Rowinska J, Zebrowski P, Koscielska M, Malyszko J, Mazur A. The growth of acute kidney injury: Eastern European perspective. Kidney Int. 2015;87(6):1264. https://doi.org/10.1038/ki.2015.61.
3. Hoste EA, Bagshaw SM, Bellomo R, et al. Epidemiology of acute kidney injury in critically ill patients: the multinational AKI-EPI study. Intensive Care Med. 2015;41(8):1411-1423. https://doi.org/10.1007/s00134-015-3934-7.
4. Kaddourah A, Basu RK, Bagshaw SM, Goldstein SL, AWARE Investigators. Epidemiology of acute kidney injury in critically ill children and young adults. N Engl J Med. 2017;376(1):11-20. https://doi.org/10.1056/NEJMoa1611391.
5. Soler YA, Nieves-Plaza M, Prieto M, Garcia-De Jesus R, Suarez-Rivera M. Pediatric risk, injury, failure, loss, end-stage renal disease score identifies acute kidney injury and predicts mortality in critically ill children: a prospective study. Pediatr Crit Care Med. 2013;14(4):e189-e195. https://doi.org/10.1097/PCC.0b013e3182745675.
6. Case J, Khan S, Khalid R, Khan A. Epidemiology of acute kidney injury in the intensive care unit. Crit Care Res Pract. 2013;2013:479730. https://doi.org/10.1155/2013/479730.
7. Rewa O, Bagshaw SM. Acute kidney injury-epidemiology, outcomes and economics. Nat Rev Nephrol. 2014;10(4):193-207. https://doi.org/10.1038/nrneph.2013.282.
8. Hsu RK, Hsu CY. The role of acute kidney injury in chronic kidney disease. Semin Nephrol. 2016;36(4):283-292. https://doi.org/10.1016/j.semnephrol.2016.05.005.
9. Menon S, Kirkendall ES, Nguyen H, Goldstein SL. Acute kidney injury associated with high nephrotoxic medication exposure leads to chronic kidney disease after 6 months. J Pediatr. 2014;165(3):522-527. https://doi.org/10.1016/j.jpeds.2014.04.058.
10. Neild GH. Life expectancy with chronic kidney disease: an educational review. Pediatr Nephrol. 2017;32(2):243-248. https://doi.org/10.1007/s00467-016-3383-8.
11. Kellum JA. Acute kidney injury: AKI: the myth of inevitability is finally shattered. Nat Rev Nephrol. 2017;13(3):140-141. https://doi.org/10.1038/nrneph.2017.11.
12. Mehta RL, Cerda J, Burdmann EA, et al. International Society of Nephrology’s 0by25 initiative for acute kidney injury (zero preventable deaths by 2025): a human rights case for nephrology. Lancet. 2015;385(9987):2616-2643. https://doi.org/10.1016/S0140-6736(15)60126-X.
13. Hodgson LE, Sarnowski A, Roderick PJ, Dimitrov BD, Venn RM, Forni LG. Systematic review of prognostic prediction models for acute kidney injury (AKI) in general hospital populations. BMJ Open. 2017;7(9):e016591. https://doi.org/10.1136/bmjopen-2017-016591.
14. Sutherland SM. Electronic health record-enabled big-data approaches to nephrotoxin-associated acute kidney injury risk prediction. Pharmacotherapy. 2018;38(8):804-812. https://doi.org/10.1002/phar.2150.
15. KDIGO Work Group. KDIGO clinical practice guidelines for acute kidney injury. Kidney Int Suppl. 2012;2(1):S1-138.
16. Moffett BS, Goldstein SL. Acute kidney injury and increasing nephrotoxic-medication exposure in noncritically-ill children. Clin J Am Soc Nephrol. 2011;6(4):856-863. https://doi.org/10.2215/CJN.08110910.
17. Mukherjee S, Pelech S, Neve RM, et al. Sparse combinatorial inference with an application in cancer biology. Bioinformatics. 2009;25(2):265-271. https://doi.org/10.1093/bioinformatics/btn611.
18. Bailly-Bechet M, Braunstein A, Pagnani A, Weigt M, Zecchina R. Inference of sparse combinatorial-control networks from gene-expression data: a message passing approach. BMC Bioinformatics. 2010;11:355. https://doi.org/10.1186/1471-2105-11-355.
19. Kirkendall ES, Spires WL, Mottes TA, et al. Development and performance of electronic acute kidney injury triggers to identify pediatric patients at risk for nephrotoxic medication-associated harm. Appl Clin Inform. 2014;5(2):313-333. https://doi.org/10.4338/ACI-2013-12-RA-0102.
20. Bedford M, Stevens P, Coulton S, et al. Development of Risk Models for the Prediction of New or Worsening Acute Kidney Injury on or During Hospital Admission: A Cohort and Nested Study. Southampton, UK: NIHR Journals Library; 2016.
21. Hodgson LE, Roderick PJ, Venn RM, Yao GL, Dimitrov BD, Forni LG. The ICE-AKI study: impact analysis of a clinical prediction rule and electronic AKI alert in general medical patients. PLoS One. 2018;13(8):e0200584. https://doi.org/10.1371/journal.pone.0200584.
22. Hammond DA, Smith MN, Li C, Hayes SM, Lusardi K, Bookstaver PB. Systematic review and meta-analysis of acute kidney injury associated with concomitant vancomycin and piperacillin/tazobactam. Clin Infect Dis. 2017;64(5):666-674. https://doi.org/10.1093/cid/ciw811.
23. Downes KJ, Cowden C, Laskin BL, et al. Association of acute kidney injury with concomitant vancomycin and piperacillin/tazobactam treatment among hospitalized children. JAMA Pediatr. 2017;171(12):e173219. https://doi.org/10.1001/jamapediatrics.2017.3219.
24. Agency for Healthcare Research and Quality. Alert Fatigue. https://psnet.ahrq.gov/primers/primer/28/alert-fatigue. Updated July 2016. Accessed April 14, 2017.
25. Downes KJ, Rao MB, Kahill L, Nguyen H, Clancy JP, Goldstein SL. Daily serum creatinine monitoring promotes earlier detection of acute kidney injury in children and adolescents with cystic fibrosis. J Cyst Fibros. 2014;13(4):435-441. https://doi.org/10.1016/j.jcf.2014.03.005.



© 2019 Society of Hospital Medicine

Correspondence: Karyn E. Yonekawa, MD; E-mail: karyn.yonekawa@seattlechildrens.org; Telephone: 206-987-2524.