Grant M. Mussman, MD
Division of General and Community Pediatrics, Hospital Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio
Anderson Center for Health Systems Excellence, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio
Email: grant.mussman@cchmc.org

Journal of Hospital Medicine – Nov. 2017

Sustainability in the AAP Bronchiolitis Quality Improvement Project

 

BACKGROUND AND OBJECTIVES: Adherence to American Academy of Pediatrics (AAP) bronchiolitis clinical practice guideline recommendations improved significantly through the AAP’s multi-institutional collaborative, the Bronchiolitis Quality Improvement Project (BQIP). We assessed the sustainability of those improvements at participating institutions for 1 year following completion of the collaborative.

METHODS: Twenty-one multidisciplinary hospital-based teams provided monthly data for key inpatient bronchiolitis measures during baseline and intervention bronchiolitis seasons. Nine sites provided data in the season following completion of the collaborative. Encounters included children younger than 24 months who were hospitalized for bronchiolitis without comorbid chronic illness, prematurity, or intensive care. Changes between baseline-, intervention-, and sustainability-season data were assessed using generalized linear mixed-effects models with site-specific random effects. Differences in hospital characteristics, baseline performance, and initial improvement between sites that did and did not participate in the sustainability season were compared.

RESULTS: A total of 2,275 discharges were reviewed, comprising 995 baseline, 877 intervention, and 403 sustainability-season encounters. Improvements in all key bronchiolitis quality measures achieved during the intervention season were maintained during the sustainability season, and orders for intermittent pulse oximetry increased from 40.6% (95% confidence interval [CI], 22.8-61.1) to 79.2% (95% CI, 58.0-91.3). Sites that did and did not participate in the sustainability season had similar characteristics.

DISCUSSION: BQIP participating sites maintained improvements in key bronchiolitis quality measures for 1 year following the project’s completion. This approach, which provided an evidence-based best-practice toolkit while building the quality-improvement capacity of local interdisciplinary teams, may support performance gains that persist beyond the active phase of the collaborative.
 

Also in JHM this month

The effect of an inpatient smoking cessation treatment program on hospital readmissions and length of stay. AUTHORS: Eline M. van den Broek-Altenburg, MS, MA; Adam J. Atherly, PhD

Treatment trends and outcomes in healthcare-associated pneumonia. AUTHORS: Sarah Haessler, MD; Tara Lagu, MD, MPH; Peter K. Lindenauer, MD, MSc; Daniel J. Skiest, MD; Aruna Priya, MA, MSc; Penelope S. Pekow, PhD; Marya D. Zilberberg, MD, MPH; Thomas L. Higgins, MD, MBA; Michael B. Rothberg, MD, MPH

What’s the purpose of rounds? A qualitative study examining the perceptions of faculty and students. AUTHORS: Oliver Hulland; Jeanne Farnan, MD, MHPE; Raphael Rabinowitz; Lisa Kearns, MD, MS; Michele Long, MD; Bradley Monash, MD; Priti Bhansali, MD; H. Barrett Fromme, MD, MHPE

Association between anemia and fatigue in hospitalized patients: does the measure of anemia matter? AUTHORS: Micah T. Prochaska, MD, MS; Richard Newcomb, BA; Graham Block, BA; Brian Park, BA; David O. Meltzer, MD, PhD

Helping seniors plan for posthospital discharge needs before a hospitalization occurs: Results from the randomized control trial of planyourlifespan.org. AUTHORS: Lee A. Lindquist, MD, MPH, MBA; Vanessa Ramirez-Zohfeld, MPH; Priya D. Sunkara, MA; Chris Forcucci, RN, BSN; Dianne S. Campbell, BS; Phyllis Mitzen, MA; Jody D. Ciolino, PhD; Gayle Kricke, MSW; Anne Seltzer, LSW; Ana V. Ramirez, BA; Kenzie A. Cameron, PhD, MPH


Acute viral bronchiolitis is the most common cause of hospitalization for children less than 1 year of age.1 Overuse of ineffective therapies has persisted despite the existence of the evidence-based American Academy of Pediatrics (AAP) clinical practice guideline (CPG), which recommends primarily supportive care.2-8 Adherence to the AAP CPG recommendations for management of bronchiolitis improved significantly through the AAP’s Bronchiolitis Quality Improvement Project (BQIP), a 12-month, multiinstitutional collaborative of community and free-standing children’s hospitals.9 This follow-up study investigates whether these improvements were sustained after completion of the formal 12-month project.

Published multiinstitutional bronchiolitis quality improvement (QI) work is limited to 1 study5 that describes the results of a single intervention season at academic medical centers. Multiyear bronchiolitis QI projects are limited to single-center studies, and results have been mixed.5,6,8,10-13 One study11 observed continued improvement in bronchodilator use in subsequent seasons, whereas a second study10 observed a return to baseline bronchodilator use in the following season. Mittal6 observed inconsistent improvements in key bronchiolitis measures during postintervention seasons.

Our specific aim was to assess the sustainability of improvements in bronchiolitis management at participating institutions 1 year following completion of the AAP BQIP collaborative.9 Because no studies demonstrate the most effective way to support long-term improvement through a QI collaborative, we hypothesized that the initial collaborative activities, which were designed to build the capacity of local interdisciplinary teams while providing standardized evidence-based care pathways, would lead to performance in the subsequent season at levels similar to or better than those observed during the active phase of the collaborative, without additional project interventions.

METHODS

Study Design and Setting

This was a follow-up study of the AAP Quality Improvement Innovation Networks project entitled “A Quality Collaborative for Improving Hospital Compliance with the AAP Bronchiolitis Guideline” (BQIP).9 The AAP Institutional Review Board approved this project.

Twenty-one multidisciplinary, hospital-based teams participated in the BQIP collaborative and provided monthly data during the January through March bronchiolitis season. Teams submitted 2013 baseline data and 2014 intervention data. Nine sites provided 2015 sustainability data following the completion of the collaborative.

Participants

Hospital encounters among patients aged 1 month to 2 years with a primary diagnosis of acute viral bronchiolitis were eligible for inclusion. Encounters were excluded for prematurity (<35 weeks gestational age); congenital heart disease; bronchopulmonary dysplasia; genetic, congenital, or neuromuscular abnormalities; and pediatric intensive-care admission.

Data Collection

Hospital characteristics were collected, including hospital type (academic, community), bed size, location (urban, rural), hospital distributions of race/ethnicity and public payer, cases of bronchiolitis per year, presence of an electronic medical record and a pediatric respiratory therapist, and self-rated QI knowledge of the multidisciplinary team (very knowledgeable, knowledgeable, and somewhat knowledgeable). A trained member at each site collected data through structured chart review in baseline, intervention, and sustainability bronchiolitis seasons for January, February, and March. Site members reviewed the first 20 charts per month that met the inclusion criteria or all charts if there were fewer than 20 eligible encounters. Sites input data about key quality measures into the AAP’s Quality Improvement Data Aggregator, a web-based data repository.

Intervention

The BQIP project was designed as a virtual collaborative consisting of monthly education webinars about QI methods and bronchiolitis management, opportunities for collaboration via teleconference and e-mail listserv, and individual site coaching by e-mail or telephone.9 A change package was shared with sites that included examples of evidence-based pathways, order sets, a respiratory scoring tool, communication tools for parents and referring physicians, and slide sets for individual site education efforts. Following completion of the collaborative, written resources remained available to participants, although virtual collaboration ceased and no additional project interventions to promote sustainability were introduced.

Bronchiolitis Process and Outcome Measures

Process measures following admission included the following: severity assessment using a respiratory score, respiratory score use to assess response to bronchodilators, bronchodilator use, bronchodilator doses, steroid doses per patient encounter, chest radiographs per encounter, and presence of an order to transition to intermittent pulse oximetry monitoring. Outcome measures included length of stay and readmissions within 72 hours.

Analysis

Changes among baseline-, intervention-, and sustainability-season data were assessed using generalized linear mixed-effects models with site-specific random effects. Negative binomial models were used for count variables to allow for overdispersion. Length of stay was log-transformed to achieve an approximately normal distribution. We also analyzed each site individually to assess whether sustained improvements reflected broad sustainability across all sites or an aggregate in which some sites continued to improve while others worsened.

To address any bias introduced by the voluntary and incomplete participation of sites in the sustainability season, we planned a priori to conduct 3 additional analyses. First, we compared the characteristics of sites that did participate in the sustainability season with those that did not participate by using Chi-squared tests for differences in proportions and t tests for differences in means. Second, we determined whether the baseline-season process and outcome measures were different between sites that did and did not participate using descriptive statistics. Third, we assessed whether improvements between the baseline and intervention seasons were different between sites that did and did not participate using a linear mixed-effects model for normally distributed outcomes and generalized linear mixed-effects model with site-specific random effects for nonnormally distributed outcomes. All study outcomes were summarized in terms of model-adjusted means along with the corresponding 95% confidence intervals. All P values are 2-sided, and P < 0.05 was used to define statistical significance. Data analyses were conducted using SAS software (SAS Institute Inc., Cary, North Carolina) version 9.4.
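As a rough illustration of the modeling strategy described above, the sketch below fits a linear mixed-effects model with a site-specific random intercept to log-transformed length of stay. This is not the authors' analysis code (the study used SAS 9.4); the data are synthetic, and the variable names and effect sizes are hypothetical, chosen only to mirror the described structure of seasons nested within sites.

```python
# Hypothetical sketch of the described analysis: a linear mixed-effects model
# with a site-specific random intercept, fit to log-transformed length of stay.
# All data below are synthetic; the study itself used SAS 9.4.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for site in range(9):                        # 9 sustainability-season sites
    site_effect = rng.normal(0, 0.15)        # between-site variation
    for season in ("baseline", "intervention", "sustainability"):
        shift = 0.0 if season == "baseline" else -0.1  # assumed LOS reduction
        log_los = rng.normal(0.9 + site_effect + shift, 0.4, size=60)
        rows += [{"site": site, "season": season, "los": np.exp(x)}
                 for x in log_los]
df = pd.DataFrame(rows)

# Log-transform length of stay to approximate a normal distribution
df["log_los"] = np.log(df["los"])

# Fixed effect of season (baseline is the alphabetical reference level),
# random intercept per site via the groups argument
model = smf.mixedlm("log_los ~ C(season)", df, groups=df["site"])
result = model.fit()
print(result.fe_params)  # season coefficients on the log scale
```

Exponentiating a season coefficient gives the multiplicative change in length of stay relative to baseline; count measures such as bronchodilator doses per encounter would instead call for a negative binomial mixed model to allow for overdispersion.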

RESULTS

A total of 2275 patient encounters were reviewed, comprising 995 encounters from the baseline season, 877 from the intervention season, and 403 from the sustainability season. Improvements were observed across key bronchiolitis quality measures from the baseline to intervention season,9 although not every site improved in every metric. All improvements achieved by the combined groups during the intervention season were sustained during the sustainability season (Table 1). No measures demonstrated statistically significant reductions between the intervention and sustainability seasons, and the use of intermittent pulse oximetry continued to increase. Length of stay and 72-hour readmissions were not statistically different between seasons (P = 0.54 and P = 0.98, respectively).

Mean use of a respiratory score, which was 6.6% (95% confidence interval [CI], 1.8-21.5) in the baseline season, increased to 73.9% (95% CI, 56.9-85.9) during the intervention season and 70.7% (95% CI, 53.8-83.5) in the sustainability season. The number of bronchodilator doses per encounter decreased from 3.1 (95% CI, 2.1-4.4) in the baseline season to 1.0 (95% CI, 0.7-1.4) in the intervention season and 0.8 (95% CI, 0.5-1.3) in the sustainability season. Orders for intermittent pulse oximetry increased significantly from a baseline of 40.6% (95% CI, 22.8-61.1) to 68.6% (95% CI, 47.4-84.1) in the intervention season and 79.2% (95% CI, 58.0-91.3) in the sustainability season. In general, this same pattern was present, ie, individual sites did not demonstrate significant improvement or worsening across the measures (Appendix 1a). The Figure illustrates individual site and overall project performance over the study period using bronchodilator use as a representative example.

Characteristics of sites that did and did not participate in the sustainability season were not significantly different (Table 2). The majority of sites were medium-sized centers that cared for an average of 100 to 300 inpatient cases of bronchiolitis per year and were located in an urban environment.

Differences in baseline bronchiolitis quality measures between sites that did and did not participate in the sustainability season are displayed in Table 3. Sustainability sites had significantly lower baseline use of a respiratory score, both to assess severity of illness at any point after hospitalization and to assess responsiveness following bronchodilator treatments (P < 0.001). At baseline they also had fewer orders for intermittent pulse oximetry use (P = 0.01) and fewer doses of bronchodilators per encounter (P = 0.04). Sites were not significantly different in their baseline use of bronchodilators, oral steroid doses, or chest radiographs. Sites that participated in the sustainability season demonstrated larger-magnitude improvement between baseline and intervention seasons for respiratory score use (P < 0.001 for any use and P = 0.02 to assess bronchodilator responsiveness; Appendix 1b).

DISCUSSION

To our knowledge, this is the first report of sustained improvements in care achieved through a multiinstitutional QI collaborative of community and academic hospitals focused on bronchiolitis care. We found that, overall, sites participating in a national bronchiolitis QI project sustained improvements in key bronchiolitis quality measures for 1 year following the project’s completion. For the aggregate group, no measures worsened, and one measure, orders for intermittent pulse oximetry monitoring, continued to increase during the sustainability season. Furthermore, the sustained improvements were primarily the result of consistent performance at each individual site, as opposed to averages wherein some sites worsened while others improved (Appendix 1a). These findings suggest that a collaborative approach that provides an evidence-based best-practice toolkit while building the QI capacity of local interdisciplinary teams can support performance gains that persist beyond the project’s active phase.

There are a number of possible reasons why improvements were sustained following the collaborative. The BQIP requirement for institutional leadership buy-in may have motivated accountability to local leaders in subsequent bronchiolitis seasons at each site. We suspect that culture change, such as flattened hierarchies through multidisciplinary teams,14 which empowered nurse and respiratory therapy staff, may have facilitated consistent use of tools created locally. The synergy of interdisciplinary teams composed of physician, nurse, and respiratory therapy champions may have created accountability to perpetuate the previous year’s efforts.15 In addition, sites adopted elements of the evidence-based toolkit, such as pathways,16,17 forcing-function tools,13,18 and order sets that limited management decision options and made bronchodilator use contingent on respiratory scores,9,19 which may have driven desired behaviors.

Moreover, the 2014 AAP CPG for the management of bronchiolitis,20 released prior to the sustainability bronchiolitis season, may have underscored the key concepts of the collaborative. Similarly, national exposure of best practices for bronchiolitis management, including the 3 widespread Choosing Wisely recommendations related to bronchiolitis,21 might have been a compelling reason for sites to maintain their improvement efforts and contribute to secular trends toward decreasing interventions in bronchiolitis management nationally.3 Lastly, the mechanisms developed for local data collection may have created opportunities at each site to conduct ongoing evaluation of performance on key bronchiolitis quality measures through data-driven feedback systems.22 Our study highlights the need for additional research to understand why improvements are or are not sustained.

Even with substantial, sustained improvements in this initiative, further reduction in unnecessary care may be possible. Findings from previous studies suggest that even multifaceted QI interventions, including provider education, guidelines, and use of respiratory scores, may only modestly reduce bronchodilator, steroid, and chest radiograph use.8,13 To achieve continued improvements in bronchiolitis care, additional active efforts may be needed to develop new interventions that target root causes for areas of overuse at individual sites.

Future multiinstitutional collaboratives might benefit their participants if they include a focus on helping sites develop skills to ensure that local improvement activities continue after the collaborative phases are completed. Proactively scheduling intermittent check-ins with collaborative members to discuss experiences with both sustainability and ongoing improvement may be valuable and likely needs to be incorporated into the initial collaborative planning.

As these sustainability data represent a subset of 9 of the original 21 BQIP sites, there is concern for potential selection bias related to factors that could have motivated sites to participate in the sustainability season’s data collection and simultaneously influenced their performance. These concerns were mitigated to some extent through 3 specific analyses: finding limited differences in hospital characteristics, baseline performance in key bronchiolitis measures, and performance change from baseline to intervention seasons between sites that did and did not participate in the sustainability season.

Notably, sites that participated in the sustainability phase actually had lower baseline respiratory score use and fewer orders for intermittent pulse oximetry at baseline. Theoretically, if participation in the collaborative highlighted this disparity for these sites, it could have been a motivating factor for their continued participation and sustained performance across these measures. Similarly, sites that recognized their higher baseline performance through participation in the collaborative might have felt less motivation to participate in ongoing data collection during the sustainability season. Whether they might have also sustained, declined, or continued improving is not known. Additionally, the magnitude of improvement in the collaborative period might have also motivated ongoing participation during the sustainability phase. For example, although all sites improved in score use during the collaborative, sites participating in the sustainability season demonstrated significantly more improvement in these measures. Sites with a higher magnitude of improvement in collaborative measures might have more enthusiasm about the project, more commitment to the project activities, or feel a sense of obligation to respond to requests for additional data collection.

This work has several limitations. Selection bias may limit generalizability of the results, as sites that did not participate in the sustainability season may have had different results than those that did participate. It is unknown whether sites that regressed toward their baseline were deterred from participating in the sustainability season. The analyses that we were able to perform, however, suggest that the 2 groups were similar in their characteristics as well as in their baseline and improvement performance.

We have limited knowledge of the local improvement work that sites conducted between the completion of the collaborative and the sustainability season. Site-specific factors may have influenced improvement sustainability. For example, qualitative research with the original group found that team engagement had a quantitative association with better performance, but only for the bronchodilator use measure.23 Sites were responsible for their own data collection, and despite attempts to centralize and standardize the process, data collection inconsistencies may have occurred. For instance, it is unknown how closely orders for intermittent pulse oximetry correlate with intermittent use at the bedside. Lastly, the absence of a control group limits examination of the causal relationships of interventions and the influence of secular trends.

CONCLUSIONS

Improvements gained during the BQIP collaborative were sustained at 1 year following its completion. These findings are encouraging, as national QI collaborative efforts are increasingly common. Our findings suggest that opportunities exist to reduce unnecessary care in the management of bronchiolitis even further. Such opportunities highlight the importance of integrating strategies to both measure sustainability and plan for ongoing independent local activities after completion of the collaborative. Future efforts should focus on supporting local sites to continue individual practice improvement as they transition from collaborative to independent quality initiatives.

Acknowledgments

The authors thank the 21 hospitals that participated in the BQIP collaborative, and in particular the 9 hospital teams that contributed sustainability data for their ongoing dedication. There was no external funding for this manuscript.

Disclosure

The authors report no financial conflicts of interest.

References

1. Healthcare Cost and Utilization Project (HCUP) KID Trends Supplemental File. Agency for Healthcare Research and Quality website. http://hcupnet.ahrq.gov/HCUPnet.jsp?Id=2C331B13FB40957D&Form=DispTab&JS=Y&Action=Accept. 2012. Accessed July 21, 2016.
2. Ralston S, Parikh K, Goodman D. Benchmarking overuse of medical interventions for bronchiolitis. JAMA Pediatr. 2015;169:805-806. PubMed
3. Parikh K, Hall M, Teach SJ. Bronchiolitis management before and after the AAP guidelines. Pediatrics. 2014;133:e1-e7. PubMed
4. Johnson LW, Robles J, Hudgins A, Osburn S, Martin D, Thompson A. Management of bronchiolitis in the emergency department: impact of evidence-based guidelines? Pediatrics. 2013;131 Suppl 1:S103-S109.
5. Kotagal UR, Robbins JM, Kini NM, Schoettker PJ, Atherton HD, Kirschbaum MS. Impact of a bronchiolitis guideline: a multisite demonstration project. Chest. 2002;121:1789-1797.
6. Mittal V, Darnell C, Walsh B, et al. Inpatient bronchiolitis guideline implementation and resource utilization. Pediatrics. 2014;133:e730-e737.
7. Mittal V, Hall M, Morse R, et al. Impact of inpatient bronchiolitis clinical practice guideline implementation on testing and treatment. J Pediatr. 2014;165:570.e3-576.e3.
8. Ralston S, Garber M, Narang S, et al. Decreasing unnecessary utilization in acute bronchiolitis care: results from the value in inpatient pediatrics network. J Hosp Med. 2013;8:25-30.
9. Ralston SL, Garber MD, Rice-Conboy E, et al. A multicenter collaborative to reduce unnecessary care in inpatient bronchiolitis. Pediatrics. 2016;137.
10. Perlstein PH, Kotagal UR, Schoettker PJ, et al. Sustaining the implementation of an evidence-based guideline for bronchiolitis. Arch Pediatr Adolesc Med. 2000;154:1001-1007.
11. Walker C, Danby S, Turner S. Impact of a bronchiolitis clinical care pathway on treatment and hospital stay. Eur J Pediatr. 2012;171:827-832.
12. Cheney J, Barber S, Altamirano L, et al. A clinical pathway for bronchiolitis is effective in reducing readmission rates. J Pediatr. 2005;147:622-626.
13. Ralston S, Comick A, Nichols E, Parker D, Lanter P. Effectiveness of quality improvement in hospitalization for bronchiolitis: a systematic review. Pediatrics. 2014;134:571-581.
14. Schwartz RW, Tumblin TF. The power of servant leadership to transform health care organizations for the 21st-century economy. Arch Surg. 2002;137:1419-1427; discussion 27.
15. Schalock RL, Verdugo M, Lee T. A systematic approach to an organization’s sustainability. Eval Program Plann. 2016;56:56-63.
16. Wilson SD, Dahl BB, Wells RD. An evidence-based clinical pathway for bronchiolitis safely reduces antibiotic overuse. Am J Med Qual. 2002;17:195-199.
17. Muething S, Schoettker PJ, Gerhardt WE, Atherton HD, Britto MT, Kotagal UR. Decreasing overuse of therapies in the treatment of bronchiolitis by incorporating evidence at the point of care. J Pediatr. 2004;144:703-710.
18. Streiff MB, Carolan HT, Hobson DB, et al. Lessons from the Johns Hopkins multi-disciplinary venous thromboembolism (VTE) prevention collaborative. BMJ. 2012;344:e3935.
19. Todd J, Bertoch D, Dolan S. Use of a large national database for comparative evaluation of the effect of a bronchiolitis/viral pneumonia clinical care guideline on patient outcome and resource utilization. Arch Pediatr Adolesc Med. 2002;156:1086-1090.
20. Ralston SL, Lieberthal AS, Meissner HC, et al. Clinical practice guideline: the diagnosis, management, and prevention of bronchiolitis. Pediatrics. 2014;134:e1474-e1502.
21. Quinonez RA, Garber MD, Schroeder AR, et al. Choosing wisely in pediatric hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8:479-485.
22. Stone S, Lee HC, Sharek PJ. Perceived factors associated with sustained improvement following participation in a multicenter quality improvement collaborative. Jt Comm J Qual Patient Saf. 2016;42:309-315.
23. Ralston SL, Atwood EC, Garber MD, Holmes AV. What works to reduce unnecessary care for bronchiolitis? A qualitative analysis of a national collaborative. Acad Pediatr. 2017;17(2):198-204.

Journal of Hospital Medicine 12(11):905-910. Published online first September 6, 2017.
Acute viral bronchiolitis is the most common cause of hospitalization for children less than 1 year of age.1 Overuse of ineffective therapies has persisted despite the evidence-based American Academy of Pediatrics (AAP) clinical practice guideline (CPG), which recommends primarily supportive care.2-8 Adherence to the AAP CPG recommendations for management of bronchiolitis improved significantly through the AAP’s Bronchiolitis Quality Improvement Project (BQIP), a 12-month, multiinstitutional collaborative of community and free-standing children’s hospitals.9 This follow-up study investigates whether these improvements were sustained after completion of the formal 12-month project.

Published multiinstitutional bronchiolitis quality improvement (QI) work is limited to 1 study5 that describes the results of a single intervention season at academic medical centers. Multiyear bronchiolitis QI projects are limited to single-center studies, and results have been mixed.5,6,8,10-13 One study11 observed continued improvement in bronchodilator use in subsequent seasons, whereas a second study10 observed a return to baseline bronchodilator use in the following season. Mittal6 observed inconsistent improvements in key bronchiolitis measures during postintervention seasons.

Our specific aim was to assess the sustainability of improvements in bronchiolitis management at participating institutions 1 year following completion of the AAP BQIP collaborative.9 Because no studies demonstrate the most effective way to support long-term improvement through a QI collaborative, we hypothesized that the initial collaborative activities, which were designed to build the capacity of local interdisciplinary teams while providing standardized evidence-based care pathways, would lead to performance in the subsequent season at levels similar to or better than those observed during the active phase of the collaborative, without additional project interventions.

METHODS

Study Design and Setting

This was a follow-up study of the AAP Quality Improvement Innovation Networks project entitled “A Quality Collaborative for Improving Hospital Compliance with the AAP Bronchiolitis Guideline” (BQIP).9 The AAP Institutional Review Board approved this project.

Twenty-one multidisciplinary, hospital-based teams participated in the BQIP collaborative and provided monthly data during the January through March bronchiolitis season. Teams submitted 2013 baseline data and 2014 intervention data. Nine sites provided 2015 sustainability data following the completion of the collaborative.

Participants

Hospital encounters for patients aged 1 month to 2 years with a primary diagnosis of acute viral bronchiolitis were eligible for inclusion. Encounters were excluded for prematurity (<35 weeks gestational age), congenital heart disease, bronchopulmonary dysplasia, genetic, congenital, or neuromuscular abnormalities, and pediatric intensive-care admission.

Data Collection

Hospital characteristics were collected, including hospital type (academic, community), bed size, location (urban, rural), hospital distributions of race/ethnicity and public payer, cases of bronchiolitis per year, presence of an electronic medical record and a pediatric respiratory therapist, and self-rated QI knowledge of the multidisciplinary team (very knowledgeable, knowledgeable, and somewhat knowledgeable). A trained member at each site collected data through structured chart review in baseline, intervention, and sustainability bronchiolitis seasons for January, February, and March. Site members reviewed the first 20 charts per month that met the inclusion criteria or all charts if there were fewer than 20 eligible encounters. Sites input data about key quality measures into the AAP’s Quality Improvement Data Aggregator, a web-based data repository.
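The monthly chart-sampling rule described above can be sketched in a few lines; this is an illustrative reading of the protocol, and the function name and list representation are assumptions, not part of the AAP data-collection tooling:

```python
def charts_to_review(eligible_charts, cap=20):
    """Per the protocol, a site reviews the first `cap` charts that met
    inclusion criteria in a given month, or all charts if fewer than
    `cap` encounters were eligible."""
    return eligible_charts[:cap]
```

With 35 eligible encounters in a month, only the first 20 would be abstracted; with 12, all 12 would be.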

Intervention

The BQIP project was designed as a virtual collaborative consisting of monthly education webinars about QI methods and bronchiolitis management, opportunities for collaboration via teleconference and e-mail listserv, and individual site-coaching by e-mail or telephone.9 A change package was shared with sites that included examples of evidence-based pathways, order sets, a respiratory scoring tool, communication tools for parents and referring physicians, and slide sets for individual site education efforts. Following completion of the collaborative, written resources remained available to participants, although virtual collaboration ceased and no additional project interventions to promote sustainability were introduced.

Bronchiolitis Process and Outcome Measures

Process measures following admission included the following: severity assessment using a respiratory score, respiratory score use to assess response to bronchodilators, bronchodilator use, bronchodilator doses, steroid doses per patient encounter, chest radiographs per encounter, and presence of an order to transition to intermittent pulse oximetry monitoring. Outcome measures included length of stay and readmissions within 72 hours.
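One abstracted encounter carries the process and outcome measures listed above; a minimal sketch of such a record follows, where the field names are illustrative and do not reflect the actual Quality Improvement Data Aggregator schema:

```python
from dataclasses import dataclass

@dataclass
class BronchiolitisEncounter:
    """One chart-reviewed encounter; fields mirror the study's key
    measures. Names are hypothetical, not the QIDA schema."""
    respiratory_score_used: bool              # severity assessed with a score
    score_used_for_bronchodilator_response: bool
    bronchodilator_given: bool
    bronchodilator_doses: int                 # doses per encounter
    steroid_doses: int                        # doses per encounter
    chest_radiographs: int                    # radiographs per encounter
    intermittent_pulse_ox_ordered: bool       # order to transition to intermittent monitoring
    length_of_stay_hours: float               # outcome measure
    readmitted_within_72h: bool               # outcome measure
```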

Analysis

Changes among baseline-, intervention-, and sustainability-season data were assessed using generalized linear mixed-effects models with random effect for study sites. Negative binomial models were used for count variables to allow for overdispersion. Length of stay was log-transformed to achieve a normal distribution. We also analyzed each site individually to assess whether sustained improvements were the result of broad sustainability across all sites or whether they represented an aggregation of some sites that continued to improve while other sites actually worsened.

To address any bias introduced by the voluntary and incomplete participation of sites in the sustainability season, we planned a priori to conduct 3 additional analyses. First, we compared the characteristics of sites that did participate in the sustainability season with those that did not participate by using chi-squared tests for differences in proportions and t tests for differences in means. Second, we determined whether the baseline-season process and outcome measures were different between sites that did and did not participate using descriptive statistics. Third, we assessed whether improvements between the baseline and intervention seasons were different between sites that did and did not participate using a linear mixed-effects model for normally distributed outcomes and a generalized linear mixed-effects model with site-specific random effects for nonnormally distributed outcomes. All study outcomes were summarized in terms of model-adjusted means along with the corresponding 95% confidence intervals. All P values are 2-sided, and P < 0.05 was used to define statistical significance. Data analyses were conducted using SAS software (SAS Institute Inc., Cary, North Carolina) version 9.4.
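The chi-squared comparisons of proportions can be illustrated with the Pearson statistic for a 2x2 table. This stdlib sketch computes only the statistic (the study itself used SAS, and p-values require the chi-squared distribution); the example cell counts are hypothetical, not the study's data:

```python
def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 table
    [[a, b], [c, d]], without continuity correction. For example,
    rows could be hospital type and columns sustainability-season
    participation (yes/no)."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator
```

The resulting statistic would be compared against the chi-squared critical value of 3.84 (df = 1) for significance at P < 0.05.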

RESULTS

A total of 2,275 patient encounters were reviewed, comprising 995 encounters from the baseline season, 877 from the intervention season, and 403 from the sustainability season. Improvements were observed across key bronchiolitis quality measures from the baseline to intervention season,9 although not every site improved in every metric. All improvements achieved by the combined groups during the intervention season were sustained during the sustainability season (Table 1). No measures demonstrated statistically significant reductions between the intervention and sustainability seasons, and the use of intermittent pulse oximetry continued to increase. Length of stay and 72-hour readmissions were not statistically different between seasons (P = 0.54 and P = 0.98, respectively).

Mean use of a respiratory score, which was 6.6% (95% confidence interval [CI], 1.8-21.5) in the baseline season, increased to 73.9% (95% CI, 56.9-85.9) during the intervention season and 70.7% (95% CI, 53.8-83.5) in the sustainability season. The number of bronchodilator doses per encounter decreased from 3.1 (95% CI, 2.1-4.4) in the baseline season to 1.0 (95% CI, 0.7-1.4) in the intervention season and 0.8 (95% CI, 0.5-1.3) in the sustainability season. Orders for intermittent pulse oximetry increased significantly from a baseline of 40.6% (95% CI, 22.8-61.1) to 68.6% (95% CI, 47.4-84.1) in the intervention season and 79.2% (95% CI, 58.0-91.3) in the sustainability season. This same pattern generally held at the level of individual sites, ie, individual sites did not demonstrate significant improvement or worsening across the measures (Appendix 1a). The Figure illustrates individual site and overall project performance over the study period using bronchodilator use as a representative example.
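The asymmetric confidence intervals around the percentages above (e.g., 6.6%; 95% CI, 1.8-21.5) are characteristic of estimates formed on the logit scale and back-transformed. A sketch of that back-transformation, with illustrative logit-scale inputs rather than the paper's actual model output:

```python
import math

def inverse_logit(x):
    """Map a logit-scale value back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def percent_with_ci(logit_estimate, logit_se, z=1.96):
    """Back-transform a logit-scale estimate and its Wald 95% CI to
    percentages. The interval is symmetric on the logit scale but
    asymmetric around the back-transformed point estimate."""
    point = inverse_logit(logit_estimate)
    lower = inverse_logit(logit_estimate - z * logit_se)
    upper = inverse_logit(logit_estimate + z * logit_se)
    return 100 * point, 100 * lower, 100 * upper
```

For a low baseline proportion, the upper arm of the back-transformed interval is wider than the lower arm, as seen in the respiratory-score baseline estimate.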

Characteristics of sites that did and did not participate in the sustainability season were not significantly different (Table 2). The majority of sites were medium-sized centers that cared for an average of 100 to 300 inpatient cases of bronchiolitis per year and were located in an urban environment.

Differences in baseline bronchiolitis quality measures between sites that did and did not participate in the sustainability season are displayed in Table 3. Sustainability sites had significantly lower baseline use of a respiratory score, both to assess severity of illness at any point after hospitalization and to assess responsiveness following bronchodilator treatments (P < 0.001). At baseline they also had fewer orders for intermittent pulse oximetry use (P = 0.01) and fewer doses of bronchodilators per encounter (P = 0.04). Sites were not significantly different in their baseline use of bronchodilators, oral steroid doses, or chest radiographs. Sites that participated in the sustainability season demonstrated larger magnitude improvement between baseline and intervention seasons for respiratory score use (P < 0.001 for any use and P = 0.02 to assess bronchodilator responsiveness; Appendix 1b).

DISCUSSION

To our knowledge, this is the first report of sustained improvements in care achieved through a multiinstitutional QI collaborative of community and academic hospitals focused on bronchiolitis care. We found that, overall, sites participating in a national bronchiolitis QI project sustained improvements in key bronchiolitis quality measures for 1 year following the project’s completion. For the aggregate group no measures worsened, and one measure, orders for intermittent pulse oximetry monitoring, continued to increase during the sustainability season. Furthermore, the sustained improvements were primarily the result of consistent sustained performance at each individual site, as opposed to averages wherein some sites worsened while others improved (Appendix 1a). These findings suggest that a collaborative approach, which provides an evidence-based best-practice toolkit while building the QI capacity of local interdisciplinary teams, can support performance gains that persist beyond the project’s active phase.

There are a number of possible reasons why improvements were sustained following the collaborative. The BQIP requirement for institutional leadership buy-in may have motivated accountability to local leaders in subsequent bronchiolitis seasons at each site. We suspect that culture change such as flattened hierarchies through multidisciplinary teams,14 which empowered nurse and respiratory therapy staff, may have facilitated consistent use of tools created locally. The synergy of interdisciplinary teams composed of physician, nurse, and respiratory therapy champions may have created accountability to perpetuate the previous year’s efforts.15 In addition, the sites adopted elements of the evidence-based toolkit, such as pathways,16,17 forcing function tools,13,18 and order sets that limited management options and made bronchodilator use contingent on respiratory scores,9,19 which may have driven desired behaviors.

Moreover, the 2014 AAP CPG for the management of bronchiolitis,20 released prior to the sustainability bronchiolitis season, may have underscored the key concepts of the collaborative. Similarly, national exposure of best practices for bronchiolitis management, including the 3 widespread Choosing Wisely recommendations related to bronchiolitis,21 might have been a compelling reason for sites to maintain their improvement efforts and contribute to secular trends toward decreasing interventions in bronchiolitis management nationally.3 Lastly, the mechanisms developed for local data collection may have created opportunities at each site to conduct ongoing evaluation of performance on key bronchiolitis quality measures through data-driven feedback systems.22 Our study highlights the need for additional research in order to understand why improvements are or are not sustained.

Even with substantial, sustained improvements in this initiative, further reduction in unnecessary care may be possible. Findings from previous studies suggest that even multifaceted QI interventions, including provider education, guidelines, and use of respiratory scores, may only modestly reduce bronchodilator, steroid, and chest radiograph use.8,13 To achieve continued improvements in bronchiolitis care, additional active efforts may be needed to develop new interventions that target root causes for areas of overuse at individual sites.

Future multiinstitutional collaboratives might benefit their participants if they include a focus on helping sites develop skills to ensure that local improvement activities continue after the collaborative phases are completed. Proactively scheduling intermittent check-ins with collaborative members to discuss experiences with both sustainability and ongoing improvement may be valuable and likely needs to be incorporated into the initial collaborative planning.

As these sustainability data represent a subset of 9 of the original 21 BQIP sites, there is concern for potential selection bias related to factors that could have motivated sites to participate in the sustainability season’s data collection and simultaneously influenced their performance. These concerns were mitigated to some extent through 3 specific analyses: finding limited differences in hospital characteristics, baseline performance in key bronchiolitis measures, and performance change from baseline to intervention seasons between sites that did and did not participate in the sustainability season.

Notably, sites that participated in the sustainability phase had lower baseline respiratory score use and fewer baseline orders for intermittent pulse oximetry. Theoretically, if participation in the collaborative highlighted this disparity for these sites, it could have been a motivating factor for their continued participation and sustained performance across these measures. Similarly, sites that recognized their higher baseline performance through participation in the collaborative might have felt less motivation to participate in ongoing data collection during the sustainability season. Whether they might have also sustained, declined, or continued improving is not known. Additionally, the magnitude of improvement in the collaborative period might have also motivated ongoing participation during the sustainability phase. For example, although all sites improved in score use during the collaborative, sites participating in the sustainability season demonstrated significantly more improvement in these measures. Sites with a higher magnitude of improvement in collaborative measures might have more enthusiasm about the project, more commitment to the project activities, or feel a sense of obligation to respond to requests for additional data collection.

This work has several limitations. Selection bias may limit generalizability of the results, as sites that did not participate in the sustainability season may have had different results than those that did participate. It is unknown whether sites that regressed toward their baseline were deterred from participating in the sustainability season. The analyses that we were able to perform, however, suggest that the 2 groups were similar in their characteristics as well as in their baseline and improvement performance.

We have limited knowledge of the local improvement work that sites conducted between the completion of the collaborative and the sustainability season. Site-specific factors may have influenced improvement sustainability. For example, qualitative research with the original group found that team engagement had a quantitative association with better performance, but only for the bronchodilator use measure.23 Sites were responsible for their own data collection, and despite attempts to centralize and standardize the process, data collection inconsistencies may have occurred. For instance, it is unknown how closely orders for intermittent pulse oximetry correlate with intermittent use at the bedside. Lastly, the absence of a control group limits examination of the causal relationships of interventions and the influence of secular trends.

CONCLUSIONS

Improvements gained during the BQIP collaborative were sustained at 1 year following completion of the collaborative. These findings are encouraging, as national QI collaborative efforts are increasingly common. Our findings suggest that opportunities exist to even further reduce unnecessary care in the management of bronchiolitis. Such opportunities highlight the importance of integrating strategies to both measure sustainability and plan for ongoing independent local activities after completion of the collaborative. Future efforts should focus on supporting local sites to continue individual practice-improvement as they transition from collaborative to independent quality initiatives.

Acknowledgments

The authors thank the 21 hospitals that participated in the BQIP collaborative, and in particular the 9 hospital teams that contributed sustainability data for their ongoing dedication. There was no external funding for this manuscript.

Disclosure

The authors report no financial conflicts of interest.

Acute viral bronchiolitis is the most common cause of hospitalization for children less than 1 year of age.1 Overuse of ineffective therapies has persisted despite the existence of the evidence-based American Academy of Pediatrics (AAP) clinical practice guideline (CPG), which recommends primarily supportive care.2-8 Adherence to the AAP CPG recommendations for management of bronchiolitis improved significantly through the AAP’s Bronchiolitis Quality Improvement Project (BQIP), a 12-month, multiinstitutional collaborative of community and free-standing children’s hospitals.9 This subsequent study investigates if these improvements were sustained after completion of the formal 12-month project.

Published multiinstitutional bronchiolitis quality improvement (QI) work is limited to 1 study5 that describes the results of a single intervention season at academic medical centers. Multiyear bronchiolitis QI projects are limited to single-center studies, and results have been mixed.5,6,8,10-13 One study11 observed continued improvement in bronchodilator use in subsequent seasons, whereas a second study10 observed a return to baseline bronchodilator use in the following season. Mittal6 observed inconsistent improvements in key bronchiolitis measures during postintervention seasons.

Our specific aim was to assess the sustainability of improvements in bronchiolitis management at participating institutions 1 year following completion of the AAP BQIP collaborative.9 Because no studies demonstrate the most effective way to support long-term improvement through a QI collaborative, we hypothesized that the initial collaborative activities, which were designed to build the capacity of local interdisciplinary teams while providing standardized evidence-based care pathways, would lead to performance in the subsequent season at levels similar to or better than those observed during the active phase of the collaborative, without additional project interventions.

METHODS

Study Design and Setting

This was a follow-up study of the AAP Quality Improvement Innovation Networks project entitled “A Quality Collaborative for Improving Hospital Compliance with the AAP Bronchiolitis Guideline” (BQIP).9 The AAP Institutional Review Board approved this project.

Twenty-one multidisciplinary, hospital-based teams participated in the BQIP collaborative and provided monthly data during the January through March bronchiolitis season. Teams submitted 2013 baseline data and 2014 intervention data. Nine sites provided 2015 sustainability data following the completion of the collaborative.

Participants

Hospital encounters with a primary diagnosis of acute viral bronchiolitis were eligible for inclusion among patients from 1 month to 2 years of age. Encounters were excluded for prematurity (<35 weeks gestational age), congenital heart disease, bronchopulmonary dysplasia, genetic, congenital or neuromuscular abnormalities, and pediatric intensive-care admission.

Data Collection

Hospital characteristics were collected, including hospital type (academic, community), bed size, location (urban, rural), hospital distributions of race/ethnicity and public payer, cases of bronchiolitis per year, presence of an electronic medical record and a pediatric respiratory therapist, and self-rated QI knowledge of the multidisciplinary team (very knowledgeable, knowledgeable, and somewhat knowledgeable). A trained member at each site collected data through structured chart review in baseline, intervention, and sustainability bronchiolitis seasons for January, February, and March. Site members reviewed the first 20 charts per month that met the inclusion criteria or all charts if there were fewer than 20 eligible encounters. Sites input data about key quality measures into the AAP’s Quality Improvement Data Aggregator, a web-based data repository.

Intervention

The BQIP project was designed as a virtual collaborative consisting of monthly education webinars about QI methods and bronchiolitis management, opportunities for collaboration via teleconference and e-mail listserv, and individual site-coaching by e-mail or telephone.9 A change package was shared with sites that included examples of evidence-based pathways, ordersets, a respiratory scoring tool, communication tools for parents and referring physicians, and slide sets for individual site education efforts. Following completion of the collaborative, written resources remained available to participants, although virtual collaboration ceased and no additional project interventions to promote sustainability were introduced.

Bronchiolitis Process and Outcome Measures

Process measures following admission included the following: severity assessment using a respiratory score, respiratory score use to assess response to bronchodilators, bronchodilator use, bronchodilator doses, steroid doses per patient encounter, chest radiographs per encounter, and presence of an order to transition to intermittent pulse oximetry monitoring. Outcome measures included length of stay and readmissions within 72 hours.

 

 

Analysis

Changes among baseline-, intervention-, and sustainability-season data were assessed using generalized linear mixed-effects models with random effect for study sites. Negative binomial models were used for count variables to allow for overdispersion. Length of stay was log-transformed to achieve a normal distribution. We also analyzed each site individually to assess whether sustained improvements were the result of broad sustainability across all sites or whether they represented an aggregation of some sites that continued to improve while other sites actually worsened.

To address any bias introduced by the voluntary and incomplete participation of sites in the sustainability season, we planned a priori to conduct 3 additional analyses. First, we compared the characteristics of sites that did participate in the sustainability season with those that did not participate by using Chi-squared tests for differences in proportions and t tests for differences in means. Second, we determined whether the baseline-season process and outcome measures were different between sites that did and did not participate using descriptive statistics. Third, we assessed whether improvements between the baseline and intervention seasons were different between sites that did and did not participate using a linear mixed-effects model for normally distributed outcomes and generalized linear mixed-effects model with site-specific random effects for nonnormally distributed outcomes. All study outcomes were summarized in terms of model-adjusted means along with the corresponding 95% confidence intervals. All P values are 2-sided, and P < 0.05 was used to define statistical significance. Data analyses were conducted using SAS software (SAS Institute Inc., Cary, North Carolina) version 9.4.

RESULTS

A total of 2275 patient encounters were reviewed, comprising 995 encounters from the baseline season, 877 from the intervention season, and 403 from the sustainability season. Improvements were observed across key bronchiolitis quality measures from the baseline to intervention season,9 although not every site improved in every metric. All improvements achieved by the combined groups during the intervention season were sustained during the sustainability season (Table 1). No measures demonstrated statistically significant reductions between the intervention and sustainability seasons, and the use of intermittent pulse oximetry continued to increase. Length of stay and 72-hour readmissions were not statistically different between seasons (P = 0.54 and P = 0.98, respectively).

Mean use of a respiratory score, which was 6.6% (95% confidence interval [CI], 1.8-21.5) in the baseline season, increased to 73.9% (95% CI, 56.9-85.9) during the intervention season and 70.7% (95 % CI, 53.8-83.5) in the sustainability season. The number of bronchodilator doses per encounter decreased from 3.1 (95% CI, 2.1-4.4) in the baseline season to 1.0 (95% CI, 0.7-1.4) in the intervention season and 0.8 (95% CI, 0.5-1.3) in the sustainability season. Orders for intermittent pulse oximetry increased significantly from a baseline of 40.6% (95% CI, 22.8-61.1) to 68.6% (95% CI, 47.4-84.1) in the intervention season and 79.2% (95% CI, 58.0-91.3) in the sustainability season. In general, this same pattern was present, ie, individual sites did not demonstrate significant improvement or worsening across the measures (Appendix 1a). The Figure illustrates individual site and overall project performance over the study period using bronchodilator use as a representative example.

Characteristics of sites that did and did not participate in the sustainability season were not significantly different (Table 2). The majority of sites were medium-sized centers that cared for an average of 100 to 300 inpatient cases of bronchiolitis per year and were located in an urban environment.

Differences in baseline bronchiolitis quality measures between sites that did and did not participate in the sustainability season are displayed in Table 3. Sustainability sites had significantly lower baseline use of a respiratory score, both to assess severity of illness at any point after hospitalization as well as to assess responsiveness following bronchodilator treatments (P < 0.001). At baseline they also had fewer orders for intermittent pulse oximetry use (P = 0.01) and fewer doses of bronchodilators per encounter (P = 0.04). Sites were not significantly different in their baseline use of bronchodilators, oral steroid doses, or chest radiographs. Sites that participated in the sustainability season demonstated larger magnitude improvement between baseline and intervention seasons for respiratory score use (P < 0.001 for any use and P = 0.02 to assess bronchodilator responsiveness; Appendix 1b).

DISCUSSION

To our knowledge, this is the first report of sustained improvements in care achieved through a multiinstitutional QI collaborative of community and academic hospitals focused on bronchiolitis care. We found that overall sites participating in a national bronchiolitis QI project sustained improvements in key bronchiolitis quality measures for 1 year following the project’s completion. For the aggregate group no measures worsened, and one measure, orders for intermittent pulse oximetry monitoring, continued to increase during the sustainability season. Furthermore, the sustained improvements were primarily the result of consistent sustained performance of each individual site, as opposed to averages wherein some sites worsened while others improved (Appendix 1a). These findings suggest that designing a collaborative approach, which provides an evidence-based best-practice toolkit while building the QI capacity of local interdisciplinary teams, can support performance gains that persist beyond the project’s active phase.

 

 

There are a number of possible reasons why improvements were sustained following the collaborative. The BQIP requirement for institutional leadership buy-in may have motivated accountability to local leaders in subsequent bronchiolitis seasons at each site. We suspect that culture change such as flattened hierarchies through multidisciplinary teams,14 which empowered nurse and respiratory therapy staff, may have facilitated consistent use of tools created locally. The synergy of interdisciplinary teams composed of physician, nurse, and respiratory therapy champions may have created accountability to perpetuate the previous year’s efforts.15 In addition, the sites adopted elements of the evidence-based toolkit, such as pathways,16,17 forcing function tools13,18 and order sets that limited management decision options and bronchodilator use contingent on respiratory scores,9,19 which may have driven desired behaviors.

Moreover, the 2014 AAP CPG for the management of bronchiolitis,20 released prior to the sustainability bronchiolitis season, may have underscored the key concepts of the collaborative. Similarly, national exposure of best practices for bronchiolitis management, including the 3 widespread Choosing Wisely recommendations related to bronchiolitis,21 might have been a compelling reason for sites to maintain their improvement efforts and contribute to secular trends toward decreasing interventions in bronchiolitis management nationally.3 Lastly, the mechanisms developed for local data collection may have created opportunities at each site to conduct ongoing evaluation of performance on key bronchiolitis quality measures through data-driven feedback systems.22 Our study highlights the need for additional research in order to understand why improvements are or are not sustained.

Even with substantial, sustained improvements in this initiative, further reduction in unnecessary care may be possible. Findings from previous studies suggest that even multifaceted QI interventions, including provider education, guidelines, and use of respiratory scores, may only modestly reduce bronchodilator, steroid, and chest radiograph use.8,13 To achieve continued improvements in bronchiolitis care, additional active efforts may be needed to develop new interventions that target root causes of overuse at individual sites.

Future multi-institutional collaboratives might benefit their participants by including a focus on helping sites develop the skills to ensure that local improvement activities continue after the collaborative phases are completed. Proactively scheduling intermittent check-ins with collaborative members to discuss experiences with both sustainability and ongoing improvement may be valuable and likely needs to be incorporated into initial collaborative planning.

Because these sustainability data represent a subset of 9 of the original 21 BQIP sites, there is concern for potential selection bias related to factors that could have motivated sites to participate in the sustainability season’s data collection and simultaneously influenced their performance. These concerns were mitigated to some extent by 3 specific analyses, which found limited differences between sites that did and did not participate in the sustainability season in hospital characteristics, baseline performance on key bronchiolitis measures, and performance change from the baseline to the intervention season.

Notably, sites that participated in the sustainability phase had lower respiratory score use and fewer orders for intermittent pulse oximetry at baseline. If participation in the collaborative highlighted this disparity for these sites, it could have motivated their continued participation and sustained performance on these measures. Similarly, sites that recognized their higher baseline performance through participation in the collaborative might have felt less motivation to participate in ongoing data collection during the sustainability season; whether they would have also sustained, declined, or continued improving is not known. Additionally, the magnitude of improvement during the collaborative might have motivated ongoing participation during the sustainability phase. For example, although all sites improved in score use during the collaborative, sites participating in the sustainability season demonstrated significantly more improvement in these measures. Sites with a greater magnitude of improvement in collaborative measures might have had more enthusiasm for the project, more commitment to project activities, or a sense of obligation to respond to requests for additional data collection.

This work has several limitations. Selection bias may limit generalizability of the results, as sites that did not participate in the sustainability season may have had different results than those that did participate. It is unknown whether sites that regressed toward their baseline were deterred from participating in the sustainability season. The analyses that we were able to perform, however, suggest that the 2 groups were similar in their characteristics as well as in their baseline and improvement performance.

We have limited knowledge of the local improvement work that sites conducted between the completion of the collaborative and the sustainability season, and site-specific factors may have influenced improvement sustainability. For example, qualitative research with the original group found that team engagement was associated with better performance, but only for the bronchodilator use measure.23 Sites were responsible for their own data collection, and despite attempts to centralize and standardize the process, data collection inconsistencies may have occurred. For instance, it is unknown how closely orders for intermittent pulse oximetry correlate with intermittent use at the bedside. Lastly, the absence of a control group limits examination of the causal relationships of interventions and the influence of secular trends.

CONCLUSIONS

Improvements gained during the BQIP collaborative were sustained 1 year after completion of the collaborative. These findings are encouraging, as national QI collaborative efforts are increasingly common. Our findings also suggest that opportunities exist to further reduce unnecessary care in the management of bronchiolitis. Such opportunities highlight the importance of integrating strategies both to measure sustainability and to plan for ongoing independent local activities after a collaborative’s completion. Future efforts should focus on supporting local sites to continue individual practice improvement as they transition from collaborative to independent quality initiatives.

Acknowledgments

The authors thank the 21 hospitals that participated in the BQIP collaborative, and in particular the 9 hospital teams that contributed sustainability data, for their ongoing dedication. There was no external funding for this manuscript.

Disclosure

The authors report no financial conflicts of interest.

References

1. Healthcare Cost and Utilization Project (HCUP) KID Trends Supplemental File. Agency for Healthcare Research and Quality website. http://hcupnet.ahrq.gov/HCUPnet.jsp?Id=2C331B13FB40957D&Form=DispTab&JS=Y&Action=Accept. 2012. Accessed July 21, 2016.
2. Ralston S, Parikh K, Goodman D. Benchmarking overuse of medical interventions for bronchiolitis. JAMA Pediatr. 2015;169:805-806. PubMed
3. Parikh K, Hall M, Teach SJ. Bronchiolitis management before and after the AAP guidelines. Pediatrics. 2014;133:e1-e7. PubMed
4. Johnson LW, Robles J, Hudgins A, Osburn S, Martin D, Thompson A. Management of bronchiolitis in the emergency department: impact of evidence-based guidelines? Pediatrics. 2013;131 Suppl 1:S103-S109. PubMed
5. Kotagal UR, Robbins JM, Kini NM, Schoettker PJ, Atherton HD, Kirschbaum MS. Impact of a bronchiolitis guideline: a multisite demonstration project. Chest. 2002;121:1789-1797. PubMed
6. Mittal V, Darnell C, Walsh B, et al. Inpatient bronchiolitis guideline implementation and resource utilization. Pediatrics. 2014;133:e730-e737. PubMed
7. Mittal V, Hall M, Morse R, et al. Impact of inpatient bronchiolitis clinical practice guideline implementation on testing and treatment. J Pediatr. 2014;165:570.e3-576.e3. PubMed
8. Ralston S, Garber M, Narang S, et al. Decreasing unnecessary utilization in acute bronchiolitis care: results from the value in inpatient pediatrics network. J Hosp Med. 2013;8:25-30. PubMed
9. Ralston SL, Garber MD, Rice-Conboy E, et al. A multicenter collaborative to reduce unnecessary care in inpatient bronchiolitis. Pediatrics. 2016;137. PubMed
10. Perlstein PH, Kotagal UR, Schoettker PJ, et al. Sustaining the implementation of an evidence-based guideline for bronchiolitis. Arch Pediatr Adolesc Med. 2000;154:1001-1007. PubMed
11. Walker C, Danby S, Turner S. Impact of a bronchiolitis clinical care pathway on treatment and hospital stay. Eur J Pediatr. 2012;171:827-832. PubMed
12. Cheney J, Barber S, Altamirano L, et al. A clinical pathway for bronchiolitis is effective in reducing readmission rates. J Pediatr. 2005;147:622-626. PubMed
13. Ralston S, Comick A, Nichols E, Parker D, Lanter P. Effectiveness of quality improvement in hospitalization for bronchiolitis: a systematic review. Pediatrics. 2014;134:571-581. PubMed
14. Schwartz RW, Tumblin TF. The power of servant leadership to transform health care organizations for the 21st-century economy. Arch Surg. 2002;137:1419-1427; discussion 27. PubMed
15. Schalock RL, Verdugo M, Lee T. A systematic approach to an organization’s sustainability. Eval Program Plann. 2016;56:56-63. PubMed
16. Wilson SD, Dahl BB, Wells RD. An evidence-based clinical pathway for bronchiolitis safely reduces antibiotic overuse. Am J Med Qual. 2002;17:195-199. PubMed
17. Muething S, Schoettker PJ, Gerhardt WE, Atherton HD, Britto MT, Kotagal UR. Decreasing overuse of therapies in the treatment of bronchiolitis by incorporating evidence at the point of care. J Pediatr. 2004;144:703-710. PubMed
18. Streiff MB, Carolan HT, Hobson DB, et al. Lessons from the Johns Hopkins multi-disciplinary venous thromboembolism (VTE) prevention collaborative. BMJ. 2012;344:e3935. PubMed
19. Todd J, Bertoch D, Dolan S. Use of a large national database for comparative evaluation of the effect of a bronchiolitis/viral pneumonia clinical care guideline on patient outcome and resource utilization. Arch Pediatr Adolesc Med. 2002;156:1086-1090. PubMed
20. Ralston SL, Lieberthal AS, Meissner HC, et al. Clinical practice guideline: the diagnosis, management, and prevention of bronchiolitis. Pediatrics. 2014;134:e1474-e1502. PubMed
21. Quinonez RA, Garber MD, Schroeder AR, et al. Choosing wisely in pediatric hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8:479-485. PubMed
22. Stone S, Lee HC, Sharek PJ. Perceived factors associated with sustained improvement following participation in a multicenter quality improvement collaborative. Jt Comm J Qual Patient Saf. 2016;42:309-315. PubMed
23. Ralston SL, Atwood EC, Garber MD, Holmes AV. What works to reduce unnecessary care for bronchiolitis? A qualitative analysis of a national collaborative. Acad Pediatr. 2017;17(2):198-204. PubMed

Issue
Journal of Hospital Medicine 12(11)

Page Number
905-910. Published online first September 6, 2017.

Article Source
© 2017 Society of Hospital Medicine

Correspondence Location
Kristin A. Shadman, MD, Department of Pediatrics, University of Wisconsin, H4/468 CSC, 600 Highland Ave, Madison, WI 53972; Telephone: 608-265-8561; E-mail: kshadman@pediatrics.wisc.edu

Hospitalist Versus Traditional Systems

Pediatric hospitalist systems versus traditional models of care: Effect on quality and cost outcomes

In the United States, general medical inpatient care is increasingly provided by hospital‐based physicians, also called hospitalists.1 The field of pediatrics is no exception, and by 2005 there were an estimated 1000 pediatric hospitalists in the workforce.2 Current numbers are likely to be greater than 2500, as the need for pediatric hospitalists has grown considerably.

At the same time, the quality of care delivered by the United States health system has come under increased scrutiny. In 2001, the Institute of Medicine, in its report on the quality of healthcare in America, concluded that “between the care we have and what we could have lies not just a gap but a chasm.”3 Meanwhile, the cost of healthcare delivery continues to increase. The pressure to deliver cost-effective, high-quality care is among the more important forces driving the proliferation of hospitalists.4

Over the last decade, data supporting the role of hospitalists in improving quality of care for adult patients have continued to accumulate.5-8 A 2007 retrospective cohort study by Lindenauer et al.7 of nearly 77,000 adult patients found small reductions in length of stay without adverse effects on mortality or readmission rates, and a 2009 systematic review by Peterson6 of 33 studies concluded that, in general, inpatient care of general medical patients by hospitalist physicians leads to decreased hospital cost and length of stay. A 2002 study by Meltzer et al.8 further suggested that improvements in costs and short-term mortality are related to the disease-specific experience of hospitalists.

Similar data for pediatric hospitalists have been slower to emerge. A systematic review of the literature by Landrigan et al., which included studies through 2004, concluded that “[r]esearch suggests that pediatric hospitalists decrease costs and length of stay. … The quality of care in pediatric hospitalist systems is unclear, because rigorous metrics to evaluate quality are lacking.”9 Since the publication of that review, multiple studies have sought to evaluate the quality of pediatric hospitalist systems. This review was undertaken to synthesize this new information and to determine the effect of pediatric hospitalist systems on quality of care.

METHODS

A review of the available English-language literature in the Medline database was undertaken in November 2010 to answer the question, “What are the differences in quality of care and outcomes of inpatient medical care provided by hospitalists versus non-hospitalists in the pediatric population?” Care metrics of interest were categorized according to the Society of Hospital Medicine’s recommendations for measuring hospital performance.10

Search terms used (with additional medical subject headings [MeSH] terms in parentheses) were hospital medicine (hospitalist), pediatrics (child health, child welfare), cost (cost and cost analysis), quality (quality indicators, healthcare), outcomes (outcome assessment, healthcare; outcomes and process assessment, healthcare); volume, patient satisfaction, length of stay, productivity (efficiency), provider satisfaction (attitude of health personnel, job satisfaction), mortality, and readmission rate (patient readmission). The citing articles search tool was used to identify other articles that could potentially meet criteria. Finally, references cited in the selected articles, as well as in excluded literature reviews, were searched for additional articles.
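As an illustration only, the term/MeSH pairs listed above can be assembled into a Medline-style Boolean query. The pairing of free-text terms with their MeSH headings is taken from the list above, but the OR-within-pair, AND-across-concepts structure is an assumption for the sketch, not the authors’ documented search strategy:

```python
# Sketch: combine free-text search terms with their MeSH synonyms
# (the parenthesized headings in the text) into a Medline-style query.
# The Boolean structure (OR within a concept, AND across concepts) is
# an illustrative assumption, not the review's documented strategy.

TERM_PAIRS = [
    ("hospital medicine", ["hospitalist"]),
    ("pediatrics", ["child health", "child welfare"]),
]

def build_query(term_pairs):
    """OR each free-text term with its MeSH synonyms; AND the concepts."""
    clauses = []
    for term, mesh_terms in term_pairs:
        alternatives = [f'"{term}"'] + [f'"{m}"[MeSH]' for m in mesh_terms]
        clauses.append("(" + " OR ".join(alternatives) + ")")
    return " AND ".join(clauses)

print(build_query(TERM_PAIRS))
# ("hospital medicine" OR "hospitalist"[MeSH]) AND
#   ("pediatrics" OR "child health"[MeSH] OR "child welfare"[MeSH])
```

The same function could be applied to the remaining concept groups (cost, quality, outcomes, and so on) before submitting the query to a Medline interface.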

Articles were deemed eligible if they were published in a peer‐reviewed journal, if they had a comparative experimental design for hospitalists versus non‐hospitalists, and if they dealt exclusively with pediatric hospitalists. Noncomparative studies were excluded, as were studies that pertained to settings besides that of an inpatient pediatrics ward, such as pediatric intensive care units or emergency rooms. The search algorithm is diagrammed in Figure 1.

Figure 1
Search strategy. Abbreviations: ICU, intensive care unit.

The selected articles were reviewed for the relevant outcome measures. The quality of each article was assessed using the Oxford Centre for Evidence-Based Medicine levels of evidence,11 a widely accepted standard for critical analysis of studies. Levels of evidence are assigned to studies, from 1a (systematic reviews of randomized controlled trials) to 5 (expert opinion only). Well-conducted prospective cohort studies receive a rating of 2c; those with wide confidence intervals due to small sample size receive a minus (−) modifier. This system does not specifically address survey studies, which were therefore not assigned a level of evidence.

RESULTS

The screening process yielded 92 possibly relevant articles, which were then reviewed individually by title and abstract (by G.M.M.). A total of 81 articles were excluded: 48 studies that were either noncomparative or descriptive in nature; 10 reviews that did not contain primary data; 9 studies not restricted to the pediatric population; 7 studies without outcomes related to quality (eg, billing performance); and 7 studies of hospitalists in settings other than general pediatric wards (eg, pediatric intensive care units). Ten studies were thus identified. The citing articles search tool identified 1 additional article that met criteria, yielding 11 total articles included in the review.

Five of the identified studies, published prior to 2005, were previously reviewed by Landrigan et al.9 Since then, 6 additional studies of a similar nature have been published and were included here. Articles that met criteria but appeared in the earlier review are included in Table 1; new articles appear in Table 2. The results of all 11 articles were included in this discussion.

Table 1. Previously Reviewed Reports Comparing Outcomes for Hospitalists vs Non-Hospitalists

NOTE: Levels of evidence are assigned to studies, from 1a (systematic reviews of randomized controlled trials) to 5 (expert opinion only). Well-conducted prospective cohort studies receive a rating of 2c; those with wide confidence intervals due to small sample size receive a minus (−) modifier. Abbreviations: LOS, length of stay.

Bellet and Whitaker13 (2000). Cincinnati Children's Hospital Medical Center, Cincinnati, OH. Retrospective cohort study of 1440 general pediatric patients. Outcomes measured: LOS, costs (2c); readmission rate, subspecialty consultations, mortality (2c, low power). Results for hospitalists: LOS shorter (2.4 vs 2.7 days); costs lower ($2720 vs $3002); readmissions higher for hospitalists (1% vs 3%); no differences in consultations; no mortality in study.

Ogershok et al.16 (2001). West Virginia University Children's Hospitals, Morgantown, WV. Retrospective cohort study of 2177 general pediatric patients. Outcomes measured: LOS, cost (2c); readmission rate, patient satisfaction, mortality (2c, low power). Results for hospitalists: no difference in LOS; costs lower ($1238 vs $1421); lab and radiology tests ordered less often; no difference in mortality or readmission rates; no difference in satisfaction scores.

Wells et al.15 (2001). Valley Children's Hospital, Madera, CA. Prospective cohort study of 182 general pediatric patients. Outcomes measured: LOS, cost, patient satisfaction, follow-up rate (2c, low power). Results for hospitalists: LOS shorter (45.2 vs 66.8 hr; P = 0.01); no LOS or cost benefit for patients with bronchiolitis, gastroenteritis, or pneumonia; costs lower for patients with asthma ($2701 vs $4854; P = 0.005); no difference in outpatient follow-up rate.

Landrigan et al.14 (2002). Boston Children's Hospital, Boston, MA. Retrospective cohort study of 17,873 general pediatric patients. Outcomes measured: LOS, cost (2c); readmission rate, follow-up rate, mortality (2c, low power). Results for hospitalists: LOS shorter (2.2 vs 2.5 days); costs lower ($1139 vs $1356); no difference in follow-up rate; no mortality in study.

Dwight et al.12 (2004). Hospital for Sick Children, Toronto, Ontario, Canada. Retrospective cohort study of 3807 general pediatric patients. Outcomes measured: LOS (2c); subspecialty consultations, readmission rate, mortality (2c, low power). Results for hospitalists: LOS shorter (from 2.9 to 2.5 days; P = 0.04); no difference in readmission rates; no difference in mortality.
Table 2. Previously Unreviewed Reports Comparing Outcomes for Hospitalists vs Non-Hospitalists

NOTE: Levels of evidence are assigned to studies, from 1a (systematic reviews of randomized controlled trials) to 5 (expert opinion only). Well-conducted prospective cohort studies receive a rating of 2c; those with wide confidence intervals due to small sample size receive a minus (−) modifier. Abbreviations: DRGs, diagnosis-related groups; GI, gastrointestinal; Heme/Onc, hematology/oncology; LOS, length of stay; PHIS, Pediatric Health Information System; UTI, urinary tract infection.

Boyd et al.21 (2006). St Joseph's Hospital and Medical Center, Phoenix, AZ. Retrospective cohort study of 1009 patients with the 11 most common DRGs (3 groups). Outcomes measured: cost, LOS, and readmission rate (2c, low power). Results for hospitalists: LOS longer (2.6 ± 2.0 vs 3.1 ± 2.6 vs 2.9 ± 2.3 days, mean ± SD); costs higher ($1781 ± $1449 [faculty] vs $1954 ± $1212 [hospitalist group 1] vs $1964 ± $1495 [hospitalist group 2]); no difference in readmission rates.

Conway et al.22 (2006). National provider survey. Descriptive study of 213 hospitalist and 352 community pediatrician survey responses. Outcomes measured: self-reported evidence-based medicine use (descriptive study, no assignable level). Results for hospitalists: more likely to follow evidence-based guidelines for the following: VCUG and RUS after first UTI, albuterol and ipratropium in first 24 hr for asthma; less likely to use the following unproven therapies: levalbuterol and inhaled or oral steroids for bronchiolitis, stool culture or rotavirus testing for gastroenteritis, or ipratropium after 24 hr for asthma.

Srivastava et al.17 (2007). University of Utah Health Sciences Center, Salt Lake City, UT. Retrospective cohort study of 1970 patients with asthma, dehydration, or viral illness. Outcomes measured: LOS, cost (2c, no confidence intervals reported). Results for hospitalists: LOS shorter for asthma (0.23 days, 13%) and for dehydration (0.19 days, 11%); no LOS difference for patients with viral illness; costs lower for asthma ($105.51, 9.3%) and for dehydration ($86.22, 7.8%).

Simon et al.19 (2007). Children's Hospital of Denver, Denver, CO. Retrospective cohort study of 759 patients undergoing spinal fusion before and after availability of hospitalist consultation. Outcomes measured: LOS (4, unaccounted confounding factors). Results for hospitalists: LOS shorter, from 6.5 (6.2–6.7) days to 4.8 (4.5–5.1) days.

Bekmezian et al.18 (2008). UCLA Hospital and Medical Center, Los Angeles, CA. Retrospective cohort study of 925 subspecialty patients on GI and Heme/Onc services vs a hospitalist service. Outcomes measured: LOS, cost, readmission rate, mortality (2c, low power). Results for hospitalists: LOS shorter (38%, P < 0.01); cost lower (29%, P < 0.05); readmissions lower (36 for faculty vs none for hospitalists, P = 0.02); no difference in mortality.

Conway and Keren20 (2009). Multicenter study of 25 children's hospitals. Retrospective cohort study of 20,892 patients with UTI admissions identified in the PHIS database. Outcomes measured: LOS, cost, evidence-based medicine use (2c). Results for hospitalists: no difference in LOS; no difference in cost; no difference in performance of the evidence-based imaging guideline (VCUG and RUS for first UTI).

Effect on Length of Stay, Cost, and Resource Utilization

Ten articles addressed length of stay as an outcome measure, and 8 also included cost. Five have been previously reported9 (see Table 1). Of these, Dwight et al.,12 Bellet and Whitaker,13 and Landrigan et al.14 found decreased length of stay (LOS) and cost for all patients; Wells et al.15 found significantly decreased LOS and cost for asthma patients but not for all diagnoses taken together; and Ogershok et al.16 found lower hospital costs but no difference in length of stay. Five of the 6 new studies, listed in Table 2, reported on length of stay and cost. Three showed some benefit for length of stay: Srivastava et al.17 reported improvements in length of stay and cost for asthma and dehydration, but not for all diagnoses together; Bekmezian et al.18 reported improved length of stay and cost for pediatric hospitalists caring for patients on a hematology and gastroenterology service; and Simon et al.19 attributed a generalized decrease in length of stay on a surgical service to the implementation of hospitalist comanagement of its most complex patients, though hospitalists comanaged only 12% of the patients in the study. A multicenter study in 2009 by Conway and Keren20 reported no significant difference in length of stay for general pediatric patients with urinary tract infections.

Of the 4 total studies that showed significant advantage in length of stay for hospitalist groups, improvement ranged from 11% to 38%. All attempted to adjust for diagnosis and severity using diagnosis‐related groups (DRGs) or other methods. Dwight et al.,12 Bellet and Whitaker,13 and Bekmezian et al.18 used retrospective or historical comparison alone, while Landrigan et al.14 had both concurrent and historical comparison groups.

In contrast to the other studies, Boyd et al.21 in 2006 found significant advantages, in both length of stay and cost, for a faculty/resident service in comparison to a hospitalist service. This nonrandomized, retrospective cohort study included 1009 pediatric patients, with the 11 most common DRGs, admitted during the same time period to either a traditional faculty/resident team or 1 of 2 private practice hospitalist groups at an academic medical center. The 8 general pediatric faculty practice attendings were dedicated to inpatient care while on service, and rotated bimonthly. The authors found that the faculty group patients had significantly shorter lengths of stay and total direct patient costs.

Cost-comparison results were reported by 7 of the studies. Bellet and Whitaker,13 Landrigan et al.,14 Ogershok et al.,16 and Bekmezian et al.18 reported reductions in cost for all patients ranging from 9% to 29%, while Wells et al.15 and Srivastava et al.17 found cost reductions only for patients with certain diagnoses. Srivastava et al.17 analyzed 1970 patients admitted with primary diagnoses of asthma, dehydration, or viral illness over a 5-year period from 1993 to 1997. Cost per patient was reduced by 9.3% for asthma and 7.8% for dehydration, but when these groups were combined with the viral illness group, the difference was not statistically significant. Wells et al.15 studied 182 admissions over a 1-year period and found a significant cost reduction of 44% (P < 0.005) for patients with asthma but not for those with bronchiolitis, gastroenteritis, or pneumonia. In 2009, Conway and Keren20 studied a multicenter cohort of 20,892 children hospitalized for urinary tract infection and found no significant difference in hospitalization costs between hospitalist services and more traditional models.

Other Quality Measures

Though financial outcomes (length of stay, cost, and resource utilization) were the primary area of emphasis for most of the selected articles, other parameters with more of a focus on quality were examined as well. The studies by Dwight et al.,12 Bellet and Whitaker,13 Landrigan et al.,14 Ogershok et al.,16 Bekmezian et al.,18 and Boyd et al.21 examined mortality and readmission rate. None of these studies reported differences in mortality rate, though none were powered to do so. When studying readmission rate, Bellet and Whitaker13 reported a statistically significant lower rate of readmission for a traditionally staffed service versus the hospitalist service (1% vs 3%; P = 0.006). In contrast, Bekmezian et al.18 found a lower readmission rate for the hospitalist service (4.4% vs 0%; P = 0.02). The studies by Dwight et al.,12 Landrigan et al.,14 Ogershok et al.,16 and Boyd et al.21 did not detect differences in readmission rates.

Two studies measured patient satisfaction.15, 16 Ogershok et al.16 utilized hospital‐generated patient satisfaction surveys, completed at discharge, for comparison and found no differences between the hospitalist and non‐hospitalist ward services. Wells et al.15 utilized a standardized patient satisfaction assessment tool, given at discharge, followed by a telephone interview after 1 month. At discharge, parents rated hospitalist physicians higher in courtesy (P < 0.05) and friendliness (P < 0.005), though this difference was not detected in the telephone interviews 1 month later. However, at that time, parents did indicate that they received better explanations about their child's illness if their child was seen by their primary care physician rather than a hospitalist.

In 2006, a study by Conway et al.22 reported on the use of evidence-based therapies and tests by hospitalists as compared with community pediatricians. The survey identified evidence-based therapies and tests for asthma, bronchiolitis, gastroenteritis, and first-time urinary tract infection (UTI). A total of 213 hospitalists and 228 community pediatricians met the inclusion criteria by returning the completed survey. After multivariate regression analysis, hospitalists were found to be more likely to use 4 of 5 evidence-based therapies and recommended tests, and less likely to use 6 of 7 therapies and tests of unproven benefit. In 2009, Conway and Keren20 again studied the use of evidence-based therapies, this time using more objective measures: the Pediatric Health Information System (PHIS) was examined for a cohort of 20,892 patients. After multivariable regression analysis, there was no statistical difference in the performance of evidence-based imaging following a first UTI between hospitals staffed primarily by community pediatricians and those with pediatric hospitalist systems. It should be noted, however, that the evidence base for UTI-related imaging has been debated in the literature over the past decade.

DISCUSSION

Of the 11 studies selected for this review, 10 measured length of stay as an outcome; results were mixed but mostly favored hospitalists. Three of these studies, by Dwight et al.,12 Bellet and Whitaker,13 and Landrigan et al.,14 demonstrated 11% to 14% improvements for hospitalist services compared to community pediatricians. Boyd et al.,21 however, found exactly the opposite result, and 2 studies, by Conway and Keren20 and Ogershok et al.,16 found no difference in length of stay. Two more studies found benefits restricted to certain conditions: Wells et al.15 found 32% shorter lengths of stay for asthma, but not for other conditions; Srivastava et al.17 found a 13% reduction in length of stay for asthma and 11% for dehydration, but none for viral illnesses or when all conditions were combined. Bekmezian et al.18 found shorter lengths of stay on a hospitalist service for hematology and gastroenterology patients, and Simon et al.19 attributed a general trend of decreasing lengths of stay on a surgical service to the implementation of hospitalist comanagement for a small percentage of patients.

The most common quality measures studied were patient satisfaction, readmission rates, and mortality. Patient satisfaction was reported by 2 studies:15, 16 Ogershok et al.16 found no differences between services, while Wells et al.15 found higher ratings for hospitalists at discharge that did not persist 1 month later. Readmission rates were reported by 6 studies. Bellet and Whitaker13 found a higher readmission rate for pediatric hospitalists, whereas Bekmezian et al.18 found a lower rate, albeit on a subspecialty service. The study with the greatest power for this analysis, by Landrigan et al.14 with nearly 18,000 patients, found no difference, and neither did another 3 studies. Unsurprisingly, no study detected differences in mortality; it would be extremely difficult to adequately power a study to do so in the general pediatric setting, where mortality is rare.

The effect of the relative experience of hospitalist physicians is uncertain. Boyd et al.21 speculated that 1 possible cause of the decreased lengths of stay and costs associated with their faculty group, compared to hospitalists, was the faculty group's greater experience; unfortunately, the small number of physicians in the study precluded a statistically significant comparison. In contrast, the hospitalists in the report by Dwight et al.12 had decreased lengths of stay despite being less experienced. In the adult literature, the study by Meltzer et al.8 suggests that improved outcomes from hospitalist systems may not become apparent until 1 or more years after implementation, but none of the pediatric studies included in our review specifically address this issue. This leaves open the possibility that the hospitalist systems evaluated in some studies had insufficient time to develop increased efficiencies.

There were several limitations to this review. First, due to the heterogeneity and methodological variation among the included studies, we were unable to perform a meta‐analysis. Second, the overall quality of evidence is limited by the lack of randomized controlled trials. Third, a lack of agreement on appropriate quality markers has limited the study of quality of care; published reports continue to focus on financial measures, such as length of stay, despite the recommendation in the previous review by Landrigan et al.9 that such studies would be of limited value. Finally, the current variability of hospitalist models, and the lack of study of factors that might influence outcomes, makes comparisons difficult.

Despite these limitations, several interesting trends emerge from these studies. One is that the more recent studies highlight how a simple classification of hospitalist system versus traditional system fails to capture the complexity and nuance of care delivery. The 2006 study by Boyd et al.21 is especially notable because it showed the opposite effect of previous studies, namely, an increase in length of stay and costs for hospitalists at St Joseph's Medical Center in Phoenix, Arizona. In this study, the traditional faculty group was employed by the hospital, and the hospitalist group was a private practice model. The authors suggest that their faculty physicians were therefore operating like hospitalists, in that almost all of their time was focused on inpatient care while they were on service. They also had a limited number of general pediatricians attending in the inpatient setting, who were more experienced than the private practice groups. The authors also theorize that their faculty may have had a closer working relationship with the residents because of additional service responsibilities and the faculty group's onsite location. Further study of the care models utilized by faculty and hospitalist practices at St Joseph's and other hospitals may reveal important insights about improving the quality and efficiency of inpatient pediatric care in general.

Though there is a clear trend in the adult literature indicating that the use of hospitalists results in superior quality of care, there is less evidence for pediatric systems. The aforementioned review by Landrigan et al.9 in 2006 concluded that emerging research suggests pediatric hospitalist systems decrease cost and length of stay, but that the quality of care in pediatric hospitalist systems is unclear because rigorous metrics to evaluate quality are lacking. Data from the 6 additional studies presented here lend limited support to the first conclusion, and the presence of only 1 negative study is not sufficient to undermine it.

While data on quality markers such as readmission rate or mortality remain elusive, the 2 studies by Conway et al.20, 22 attempted to evaluate quality by comparing the use of evidence‐based therapies by hospitalists and community pediatricians. Though the objective PHIS data for UTI in 2009 did not confirm the conclusion suggested by the 2006 provider survey, the attempt to find measurable outcomes such as the use of evidence‐based therapies is a start; more metrics, including rigorous patient outcome metrics, are needed to define the quality of our care systems. Before the effect of hospitalist systems on quality can be fully understood, more work will need to be done defining metrics for comparison.

Unfortunately, more than 5 years after the previous review by Landrigan et al.9 called for an increased focus on inpatient quality and on understanding how to improve it, the sophistication of our measurement of pediatric inpatient quality, and our understanding of the mechanisms underlying improvement, remains in its infancy. We propose a solution at multiple levels.

First, the investment in research comparing system‐level interventions (eg, discharge process A vs discharge process B) must be increased. This investment grew significantly with the more than $1 billion in Recovery Act funding for comparative effectiveness research.23 However, the future investment in comparative effectiveness research, often called patient‐centered outcomes research, and the proportion of that investment focused on delivery system interventions, are unclear. We propose that investment in comparing delivery system interventions is essential to improving not only hospital medicine systems but, more importantly, the healthcare system broadly. In addition, research investment needs to focus on reliably implementing proven interventions in systems of care, and on evaluating both the effects on patient outcomes and cost and the contextual factors associated with successful implementation.24 A hospital medicine example would be the comparison of the implementation of a guideline for a common disease across a set of hospitals. One could use a prospective observational design, comparing a high‐intensity versus a low‐intensity intervention and assessing the baseline characteristics of the hospital systems, to understand their association with successful implementation and, ultimately, patient outcomes. One could also use a cluster randomized design.

Second, the development and implementation of pediatric quality of care measures, including in the inpatient setting, need to accelerate rapidly. The Children's Health Insurance Program (CHIP), with its focus on an initial core set of quality measures that expands over time through an investment in measure development and validation, is an opportunity for pediatric hospital medicine. Inpatient measures should be a focus of measure development and implementation. We must move beyond a limited set of inpatient measures to a broader set focused on issues such as patient safety, hospital‐acquired infections, outcomes for common illnesses, and transitions of care. We also need better measures for important pediatric populations, such as children with complex medical conditions.25

Third, our understanding of the mechanisms leading to improvement in hospital medicine systems needs to be developed. Studies of hospital medicine systems should move past simple binary comparisons of hospitalist versus traditional systems to understand the effect on patient outcomes and cost of factors such as years of experience, volume of patients seen overall and with a specific condition, staffing model, training, quality improvement knowledge and application, and health information systems. These factors may have additive or multiplicative effects on the performance of inpatient systems once they are put into place, but these hypotheses need to be tested.

Fourth, individual hospitalists and their groups must focus on quality measurement and on improving the quality and value delivered. At Cincinnati, we have a portfolio of quality and value projects derived from our strategic objectives, illustrated in Figure 2. The projects have leaders and teams to drive improvement and measure results, and increasingly we are able to publish these results in peer‐reviewed journals. On a quarterly basis, we review the portfolio via a dashboard and/or run and control charts, and we establish new projects and set new goals on at least an annual basis. It is important to note that at the beginning of the 2010‐2011 fiscal year, almost all initiatives identified as priorities were yellow or red. Our group is now planning new initiatives and goals for next year. This is one method applicable to our setting, but a focus on quality, value, and measuring results needs to be part of every hospital medicine program. As payer focus on value increases, this will be essential to demonstrate how a hospitalist group improves outcomes and adds value.

Figure 2
Quality dashboard for the hospital medicine unit at Cincinnati Children's Hospital. At the beginning of the fiscal year, almost all initiatives identified as priorities were yellow or red. The group is now planning new initiatives and goals for next year. Abbreviations: ED, emergency department; FY, fiscal year; HM, hospital medicine; IV, intravenous; PICU, pediatric intensive care unit.

CONCLUSION

This review suggests that the use of hospitalists can improve the quality of inpatient care in the pediatric population, but this is not a universal finding and, most importantly, the mechanisms of improvement are poorly understood. We propose 4 components to address these issues so that a systematic review 5 years from now would be much more robust. These are: 1) increased investment in research comparing system‐level interventions and reliable implementation; 2) further development and implementation of pediatric quality of care measures in the inpatient setting; 3) understanding the mechanisms and factors leading to improvement in hospital medicine systems; and 4) an increased focus on quality measurement, and improvement in quality and value delivered by all individual hospitalists and their groups.

References
  1. Wachter RM, Goldman L. The emerging role of "hospitalists" in the American health care system. N Engl J Med. 1996;335(7):514-517.
  2. Lye PS, Rauch DA, Ottolini MC, et al. Pediatric hospitalists: report of a leadership conference. Pediatrics. 2006;117(4):1122-1130.
  3. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
  4. Wachter RM, Goldman L. The hospitalist movement 5 years later. JAMA. 2002;287(4):487-494.
  5. Coffman J, Rundall TG. The impact of hospitalists on the cost and quality of inpatient care in the United States: a research synthesis. Med Care Res Rev. 2005;62(4):379-406.
  6. Peterson MC. A systematic review of outcomes and quality measures in adult patients cared for by hospitalists vs nonhospitalists. Mayo Clin Proc. 2009;84(3):248-254.
  7. Lindenauer PK, Rothberg MB, Pekow PS, Kenwood C, Benjamin EM, Auerbach AD. Outcomes of care by hospitalists, general internists, and family physicians. N Engl J Med. 2007;357(25):2589-2600.
  8. Meltzer D, Manning WG, Morrison J, et al. Effects of physician experience on costs and outcomes on an academic general medicine service: results of a trial of hospitalists. Ann Intern Med. 2002;137(11):866-875.
  9. Landrigan CP, Conway PH, Edwards S, Srivastava R. Pediatric hospitalists: a systematic review of the literature. Pediatrics. 2006;117(5):1736-1744.
  10. Society of Hospital Medicine. Measuring hospitalist performance: metrics, reports, and dashboards. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Publications; April 2007.
  11. Oxford Centre for Evidence‐Based Medicine levels of evidence. Updated March 2009. Available at: http://www.cebm.net/index.aspx?o=1025. Accessed March 14, 2011.
  12. Dwight P, MacArthur C, Friedman JN, Parkin PC. Evaluation of a staff‐only hospitalist system in a tertiary care, academic children's hospital. Pediatrics. 2004;114(6):1545-1549.
  13. Bellet PS, Whitaker RC. Evaluation of a pediatric hospitalist service: impact on length of stay and hospital charges. Pediatrics. 2000;105(3 pt 1):478-484.
  14. Landrigan CP, Srivastava R, Muret‐Wagstaff S, et al. Impact of a health maintenance organization hospitalist system in academic pediatrics. Pediatrics. 2002;110(4):720-728.
  15. Wells RD, Dahl B, Wilson SD. Pediatric hospitalists: quality care for the underserved? Am J Med Qual. 2001;16(5):174-180.
  16. Ogershok PR, Li X, Palmer HC, Moore RS, Weisse ME, Ferrari ND. Restructuring an academic pediatric inpatient service using concepts developed by hospitalists. Clin Pediatr (Phila). 2001;40(12):653-662.
  17. Srivastava R, Landrigan CP, Ross‐Degnan D, et al. Impact of a hospitalist system on length of stay and cost for children with common conditions. Pediatrics. 2007;120(2):267-274.
  18. Bekmezian A, Chung PJ, Yazdani S. Staff‐only pediatric hospitalist care of patients with medically complex subspecialty conditions in a major teaching hospital. Arch Pediatr Adolesc Med. 2008;162(10):975-980.
  19. Simon TD, Eilert R, Dickinson LM, Kempe A, Benefield E, Berman S. Pediatric hospitalist comanagement of spinal fusion surgery patients. J Hosp Med. 2007;2(1):23-30.
  20. Conway PH, Keren R. Factors associated with variability in outcomes for children hospitalized with urinary tract infection. J Pediatr. 2009;154(6):789-796.
  21. Boyd J, Samaddar K, Parra‐Roide L, Allen EP, White B. Comparison of outcome measures for a traditional pediatric faculty service and nonfaculty hospitalist services in a community teaching hospital. Pediatrics. 2006;118(4):1327-1331.
  22. Conway PH, Edwards S, Stucky ER, Chiang VW, Ottolini MC, Landrigan CP. Variations in management of common inpatient pediatric illnesses: hospitalists and community pediatricians. Pediatrics. 2006;118(2):441-447.
  23. Conway PH, Clancy C. Comparative‐effectiveness research—implications of the federal coordinating council's report. N Engl J Med. 2009;361(4):328-330.
  24. Conway PH, Clancy C. Charting a path from comparative effectiveness funding to improved patient‐centered health care. JAMA. 2010;303(10):985-986.
  25. Cohen E, Kuo DZ, Agrawal R, et al. Children with medical complexity: an emerging population for clinical and research initiatives. Pediatrics. 2011;127(3):529-538.
Journal of Hospital Medicine - 7(4), 350-357

In the United States, general medical inpatient care is increasingly provided by hospital‐based physicians, also called hospitalists.1 The field of pediatrics is no exception, and by 2005 there were an estimated 1000 pediatric hospitalists in the workforce.2 Current numbers are likely to be greater than 2500, as the need for pediatric hospitalists has grown considerably.

At the same time, the quality of care delivered by the United States health system has come under increased scrutiny. In 2001, the Institute of Medicine, in its report on the quality of healthcare in America, concluded that "between the health care we have and the care we could have lies not just a gap, but a chasm."3 Meanwhile, the cost of healthcare delivery continues to increase. The pressure to deliver cost‐effective, high quality care is among the more important forces driving the proliferation of hospitalists.4

Over the last decade, data supporting the role of hospitalists in improving quality of care for adult patients have continued to accumulate.5-8 A 2007 retrospective cohort study by Lindenauer et al.7 included nearly 77,000 adult patients and found small reductions in length of stay without adverse effects on mortality or readmission rates, and a 2009 systematic review by Peterson,6 which included 33 studies, concluded that inpatient care of general medical patients by hospitalist physicians generally leads to decreased hospital cost and length of stay. A 2002 study by Meltzer et al.8 is also notable, suggesting that improvements in costs and short‐term mortality are related to the disease‐specific experience of hospitalists.

Similar data for pediatric hospitalists have been slower to emerge. A systematic review of the literature by Landrigan et al., which included studies through 2004, concluded that "[r]esearch suggests that pediatric hospitalists decrease costs and length of stay… The quality of care in pediatric hospitalist systems is unclear, because rigorous metrics to evaluate quality are lacking."9 Since the publication of that review, multiple studies have sought to evaluate the quality of pediatric hospitalist systems. This review was undertaken to synthesize this new information and to determine the effect of pediatric hospitalist systems on quality of care.

METHODS

A review of the available English language literature in the Medline database was undertaken in November 2010 to answer the question, "What are the differences in quality of care and outcomes of inpatient medical care provided by hospitalists versus non‐hospitalists in the pediatric population?" Care metrics of interest were categorized according to the Society of Hospital Medicine's recommendations for measuring hospital performance.10

Search terms used (with additional medical subject headings [MeSH] terms in parentheses) were "hospital medicine" (hospitalist), "pediatrics" (child health, child welfare), "cost" (cost and cost analysis), "quality" (quality indicators, healthcare), "outcomes" (outcome assessment, healthcare; outcomes and process assessment, healthcare), "volume," "patient satisfaction," "length of stay," "productivity" (efficiency), "provider satisfaction" (attitude of health personnel, job satisfaction), "mortality," and "readmission rate" (patient readmission). The citing articles search tool was used to identify other articles that potentially could meet criteria. Finally, references cited in the selected articles, as well as in excluded literature reviews, were searched for additional articles.

Articles were deemed eligible if they were published in a peer‐reviewed journal, if they had a comparative experimental design for hospitalists versus non‐hospitalists, and if they dealt exclusively with pediatric hospitalists. Noncomparative studies were excluded, as were studies that pertained to settings besides that of an inpatient pediatrics ward, such as pediatric intensive care units or emergency rooms. The search algorithm is diagrammed in Figure 1.

Figure 1
Search strategy. Abbreviations: ICU, intensive care unit.

The selected articles were reviewed for the relevant outcome measures. The quality of each article was assessed using the Oxford Centre for Evidence‐Based Medicine levels of evidence,11 a widely accepted standard for critical analysis of studies. Levels of evidence are assigned to studies, from 1a (systematic reviews of randomized controlled trials) to 5 (expert opinion only). Well‐conducted prospective cohort studies receive a rating of 2c; those with wide confidence intervals due to small sample size receive a minus (−) modifier. This system does not specifically address survey studies, which were therefore not assigned a level of evidence.

RESULTS

The screening process yielded 92 possibly relevant articles, which were then reviewed individually (by G.M.M.) by title and abstract. A total of 81 articles were excluded, including 48 studies that were either noncomparative or descriptive in nature. Ten of the identified articles were reviews and did not contain primary data. Nine studies were not restricted to the pediatric population. Also excluded were 7 studies that did not have outcomes related to quality (eg, billing performance), and 7 studies of hospitalists in settings besides general pediatric wards (eg, pediatric intensive care units). Ten studies were thus identified. The citing articles search tool was used to identify an additional article that met criteria, yielding 11 total articles that were included in the review.

Five of the identified studies published prior to 2005 were previously reviewed by Landrigan et al.9 Since then, 6 additional studies of similar nature have been published and were included here. Articles that met criteria but appeared in an earlier review are included in Table 1; new articles appear in Table 2. The results of all 11 articles were included for this discussion.

Previously Reviewed Reports Comparing Outcomes for Hospitalists vs Non‐Hospitalists
Source Site Study Design Outcomes Measured (Oxford Level of Evidence) Results for Hospitalists
  • NOTE: Levels of evidence are assigned to studies, from 1a (systematic reviews of randomized controlled trials) to 5 (expert opinion only). Well‐conducted prospective cohort studies receive a rating of 2c; those with wide confidence intervals due to small sample size receive a minus (−) modifier.

  • Abbreviations: LOS, length of stay.

Bellet and Whitaker13 (2000) Cincinnati Children's Hospital Medical Center, Cincinnati, OH 1440 general pediatric patients LOS, costs (2c) LOS shorter (2.4 vs 2.7 days)
Retrospective cohort study Readmission rate, subspecialty consultations, mortality (2c, low power) Costs lower ($2720 vs $3002)
Readmissions higher for hospitalists (1% vs 3%)
No differences in consultations
No mortality in study
Ogershok et al.16 (2001) West Virginia University Children's Hospitals, Morgantown, WV 2177 general pediatric patients LOS, cost (2c) No difference in LOS
Retrospective cohort study Readmission rate, patient satisfaction, mortality (2c, low power) Costs lower ($1238 vs $1421)
Lab and radiology tests ordered less often
No difference in mortality or readmission rates
No difference in satisfaction scores
Wells et al.15 (2001) Valley Children's Hospital, Madera, CA 182 general pediatric patients LOS, cost, patient satisfaction, follow‐up rate (2c, low power) LOS shorter (45.2 vs 66.8 hr; P = 0.01)
Prospective cohort study No LOS or cost benefit for patients with bronchiolitis, gastroenteritis, or pneumonia
Costs lower ($2701 vs $4854; P = 0.005) for patients with asthma
No difference in outpatient follow‐up rate
Landrigan et al.14 (2002) Boston Children's Hospital, Boston, MA 17,873 general pediatric patients LOS, cost (2c) LOS shorter (2.2 vs 2.5 days)
Retrospective cohort study Readmission rate, follow‐up rate, mortality (2c, low power) Costs lower ($1139 vs $1356)
No difference in follow‐up rate
No mortality in study
Dwight et al.12 (2004) Hospital for Sick Children, Toronto, Ontario, Canada 3807 general pediatric patients LOS (2c) LOS shorter (from 2.9 to 2.5 days; P = 0.04)
Retrospective cohort study Subspecialty consultations, readmission rate, mortality (2c, low power) No difference in readmission rates
No difference in mortality
Previously Unreviewed Reports Comparing Outcomes for Hospitalists vs Non‐Hospitalists
Source Site Study Design Outcomes Measured (Oxford Level of Evidence) Results for Hospitalists
  • NOTE: Levels of evidence are assigned to studies, from 1a (systematic reviews of randomized controlled trials) to 5 (expert opinion only). Well‐conducted prospective cohort studies receive a rating of 2c; those with wide confidence intervals due to small sample size receive a minus (−) modifier.

  • Abbreviations: DRGs, diagnosis‐related groups; GI, gastrointestinal; Heme/Onc, hematology/oncology; LOS, length of stay; PHIS, Pediatric Health Information System; UTI, urinary tract infection.

Boyd et al.21 (2006) St Joseph's Hospital and Medical Center, Phoenix, AZ 1009 patients with 11 most common DRGs (3 groups) Cost, LOS, and readmission rate (2c, low power) LOS longer (2.6 ± 2.0 vs 3.1 ± 2.6 vs 2.9 ± 2.3 days, mean ± SD)
Retrospective cohort study Costs higher ($1781 ± $1449 [faculty] vs $1954 ± $1212 [hospitalist group 1] vs $1964 ± $1495 [hospitalist group 2])
No difference in readmission rates
Conway et al.22 (2006) National provider survey 213 hospitalists and 352 community pediatrician survey responses Self‐reported evidence‐based medicine use (descriptive study, no assignable level) Hospitalists more likely to follow evidence‐based guidelines for the following: voiding cystourethrogram (VCUG) and renal ultrasound (RUS) after first UTI, albuterol and ipratropium in first 24 hr for asthma
Descriptive study Hospitalists less likely to use the following unproven therapies: levalbuterol and inhaled or oral steroids for bronchiolitis, stool culture or rotavirus testing for gastroenteritis, or ipratropium after 24 hr for asthma
Srivastava et al.17 (2007) University of Utah Health Sciences Center, Salt Lake City, UT 1970 patients with asthma, dehydration, or viral illness LOS, cost (2c, no confidence intervals reported) LOS shorter for asthma (0.23 days, 13%) and for dehydration (0.19 days, 11%)
Retrospective cohort study No LOS difference for patients with viral illness
Costs lower for asthma ($105.51, 9.3%) and for dehydration ($86.22, 7.8%)
Simon et al.19 (2007) Children's Hospital of Denver, Denver, CO 759 patients undergoing spinal fusion before and after availability of hospitalist consultation LOS (4, unaccounted confounding factors) LOS shorter, from 6.5 (6.2-6.7) days to 4.8 (4.5-5.1) days
Retrospective cohort study
Bekmezian et al.18 (2008) UCLA Hospital and Medical Center, Los Angeles, CA 925 subspecialty patients on GI and Heme/Onc services vs hospitalist service LOS, cost, readmission rate, mortality (2c, low power) LOS shorter (38%, P < 0.01)
Retrospective cohort study Cost lower (29%, P < 0.05)
Readmissions lower (36 for faculty vs none for hospitalists, P = 0.02)
No difference in mortality
Conway and Keren20 (2009) Multicenter, 25 children's hospitals 20,892 patients identified with UTI admissions in PHIS database LOS, cost, evidence‐based medicine use (2c) No difference in LOS
Retrospective cohort study No difference in cost
No difference in performance of EBM guideline (VCUG and RUS for first UTI)

Effect on Length of Stay, Cost, and Resource Utilization

Ten articles addressed length of stay as an outcome measure, and 8 also included cost. Five have been previously reported9 (see Table 1). Of these, Dwight et al.,12 Bellet and Whitaker,13 and Landrigan et al.14 found decreased length of stay (LOS) and cost for all patients. Wells et al.15 found significantly decreased LOS and cost for asthma patients but not for all diagnoses taken together, and Ogershok et al.16 found lower hospital costs but not length of stay. Five of the 6 new studies, listed in Table 2, reported on length of stay and cost. Three showed some benefit for length of stay: Srivastava et al.17 reported improvement in length of stay and cost for asthma and dehydration, but not for all diagnoses together; Bekmezian et al.18 reported improved length of stay and cost for pediatric hospitalists caring for patients on a hematology and gastroenterology service; and Simon et al.19 attributed a generalized decrease in length of stay on a surgical service to the implementation of hospitalist comanagement of their most complex patients, though hospitalists comanaged only 12% of the patients in the study. A multicenter study in 2009 by Conway and Keren20 reported no significant difference in length of stay for general pediatric patients with urinary tract infections.

Of the 4 total studies that showed significant advantage in length of stay for hospitalist groups, improvement ranged from 11% to 38%. All attempted to adjust for diagnosis and severity using diagnosis‐related groups (DRGs) or other methods. Dwight et al.,12 Bellet and Whitaker,13 and Bekmezian et al.18 used retrospective or historical comparison alone, while Landrigan et al.14 had both concurrent and historical comparison groups.

In contrast to the other studies, Boyd et al.21 in 2006 found significant advantages, in both length of stay and cost, for a faculty/resident service in comparison to a hospitalist service. This nonrandomized, retrospective cohort study included 1009 pediatric patients, with the 11 most common DRGs, admitted during the same time period to either a traditional faculty/resident team or 1 of 2 private practice hospitalist groups at an academic medical center. The 8 general pediatric faculty practice attendings were dedicated to inpatient care while on service, and rotated bimonthly. The authors found that the faculty group patients had significantly shorter lengths of stay and total direct patient costs.

Cost‐comparison results were reported by 7 of the studies. Bellet and Whitaker,13 Landrigan et al.,14 Ogershok et al.,16 and Bekmezian et al.18 reported reductions in cost for all patients varying from 9% to 29%, while Wells et al.15 and Srivastava et al.17 found reductions in cost only for patients with certain diagnoses. Srivastava et al.17 analyzed 1970 patients admitted with primary diagnoses of asthma, dehydration, or viral illness over a 5‐year period from 1993 to 1997. Cost per patient was reduced by 9.3% for asthma and 7.8% for dehydration, but when these were combined with the viral illness group, the difference was not statistically significant. Wells et al.15 studied 182 admissions over a 1‐year period and found a significant reduction in cost of 44% (P < 0.005) for patients with asthma, but not for bronchiolitis, gastroenteritis, or pneumonia. In 2009, Conway and Keren20 studied a multicenter cohort of 20,892 children hospitalized for urinary tract infection and found no significant difference in hospitalization costs between hospitalist services and more traditional models.

Other Quality Measures

Though financial outcomes (length of stay, cost, and resource utilization) were the primary area of emphasis for most of the selected articles, other parameters with more of a focus on quality were examined as well. The studies by Dwight et al.,12 Bellet and Whitaker,13 Landrigan et al.,14 Ogershok et al.,16 Bekmezian et al.,18 and Boyd et al.21 examined mortality and readmission rate. None of these studies reported differences in mortality rate, though none were powered to do so. When studying readmission rate, Bellet and Whitaker13 reported a statistically significant lower rate of readmission for a traditionally staffed service versus the hospitalist service (1% vs 3%; P = 0.006). In contrast, Bekmezian et al.18 found a lower readmission rate for the hospitalist service (4.4% vs 0%; P = 0.02). The studies by Dwight et al.,12 Landrigan et al.,14 Ogershok et al.,16 and Boyd et al.21 did not detect differences in readmission rates.

Two studies measured patient satisfaction.15, 16 Ogershok et al.16 utilized hospital‐generated patient satisfaction surveys, completed at discharge, for comparison and found no differences between the hospitalist and non‐hospitalist ward services. Wells et al.15 utilized a standardized patient satisfaction assessment tool, given at discharge, followed by a telephone interview after 1 month. At discharge, parents rated hospitalist physicians higher in courtesy (P < 0.05) and friendliness (P < 0.005), though this difference was not detected in the telephone interviews 1 month later. However, at that time, parents did indicate that they received better explanations about their child's illness if their child was seen by their primary care physician rather than a hospitalist.

In 2006, a study by Conway et al.22 reported on the use of evidence‐based therapies and tests by hospitalists as compared to community pediatricians. The survey identified evidence‐based therapies and tests for asthma, bronchiolitis, gastroenteritis, and first‐time urinary tract infection (UTI) diagnosis. A total of 213 hospitalists and 228 community pediatricians met the inclusion criteria by returning the completed survey. After multivariate regression analysis, hospitalists were found to be more likely to use 4 of 5 evidence‐based therapies and recommended tests, and less likely to use 6 of 7 therapies and tests of unproven benefit. In 2009, Conway and Keren20 again studied the use of evidence‐based therapies, this time using more objective measures. In this report, the Pediatric Health Information System (PHIS) was examined for a cohort of 20,892 patients. After multivariable regression analysis, there was no statistically significant difference in the performance of evidence‐based imaging following a first UTI between hospitals staffed primarily by community pediatricians and those with pediatric hospitalist systems. However, it should be noted that the evidence base for UTI‐related imaging has been debated in the literature over the past decade.

DISCUSSION

Of the 11 studies selected for this review, 10 measured length of stay as an outcome; the results were mixed, with the majority favoring hospitalists. Three of these studies, those by Dwight et al.,12 Bellet and Whitaker,13 and Landrigan et al.,14 demonstrated 11% to 14% improvement for hospitalist services compared to community pediatricians. Boyd et al.,21 however, found the opposite result, and 2 studies, by Conway and Keren20 and Ogershok et al.,16 found no difference in length of stay. Two more studies found benefits restricted to certain conditions: Wells et al.15 found 32% shorter lengths of stay for asthma, but not for other conditions; Srivastava et al.17 found a 13% reduction in length of stay for asthma and 11% for dehydration, but none for viral illnesses or when all conditions were combined. Bekmezian et al.18 found shorter lengths of stay on a hospitalist service for hematology and gastroenterology patients, and Simon et al.19 attributed a general trend of decreasing lengths of stay on a surgical service to the implementation of hospitalist comanagement for a small percentage of patients.

The most common quality measures studied were patient satisfaction, readmission rates, and mortality. Patient satisfaction was reported by 2 studies: Ogershok et al.16 found no differences between hospitalists and community pediatricians, and Wells et al.15 found differences at discharge that were not sustained at 1 month. Readmission rates were reported by 6 studies. Bellet and Whitaker13 found a higher readmission rate for pediatric hospitalists, while Bekmezian et al.18 found a lower rate, though on a subspecialty service. The study with the greatest power for this analysis, that of Landrigan et al.14 with nearly 18,000 patients, found no difference, and neither did 3 other studies. Unsurprisingly, no study detected differences in mortality; it would be extremely difficult to adequately power a study to do so in the general pediatric setting, where mortality is rare.

The effect of the relative experience of hospitalist physicians is uncertain. Boyd et al.21 speculated that 1 possible cause of the decreased lengths of stay and costs associated with their faculty group may have been the group's greater experience, but they were unable to demonstrate statistical significance given the small number of physicians in the study. In contrast, the hospitalists in the report by Dwight et al.12 achieved decreased lengths of stay despite being less experienced. In the adult literature, the study by Meltzer et al.8 suggests that improved outcomes from hospitalist systems may not become apparent until 1 or more years after implementation, but none of the pediatric studies included in our review specifically address this issue. This leaves open the possibility that the hospitalist systems evaluated in some studies had insufficient time in which to develop increased efficiencies.

This review has several limitations. First, due to the heterogeneity and methodological variation among the included studies, we were unable to perform a meta‐analysis. Second, the overall quality of evidence is limited by the lack of randomized controlled trials. Third, a lack of agreement on appropriate quality markers has limited the study of quality of care; published reports continue to focus on financial measures, such as length of stay, despite the caution in the previous review by Landrigan et al.9 that such studies would be of limited value. Finally, the current variability of hospitalist models, and the lack of study of factors that might influence outcomes, makes comparisons difficult.

Despite these limitations, several interesting trends emerge from these studies. One is that the more recent studies highlight how a simple classification of hospitalist system versus traditional system fails to capture the complexity and nuance of care delivery. The 2006 study by Boyd et al.21 is especially notable because it showed the opposite effect of previous studies, namely, an increase in length of stay and costs for hospitalists at St Joseph's Hospital and Medical Center in Phoenix, Arizona. In this study, the traditional faculty group was employed by the hospital, while the hospitalist group followed a private practice model. The authors suggest that their faculty physicians were therefore operating like hospitalists, in that almost all of their time while on service was focused on inpatient care. The faculty group also comprised a limited number of general pediatricians attending in the inpatient setting who were more experienced than the private practice groups. The authors further theorize that, owing to additional service responsibilities and the faculty group's onsite location, the faculty may have had a closer working relationship with the residents. Further study of the care models utilized by faculty and hospitalist practices at St Joseph's and other hospitals may reveal important insights about improving the quality and efficiency of inpatient pediatric care in general.

Though there is a clear trend in the adult literature indicating that the use of hospitalists results in superior quality of care, there is less evidence for pediatric systems. The aforementioned 2006 review by Landrigan et al.9 concluded that emerging research suggests pediatric hospitalist systems decrease cost and length of stay, but that the quality of care in these systems remains unclear because rigorous metrics to evaluate quality are lacking. Data from the 6 additional studies presented here lend limited support to the first conclusion, and the presence of only 1 negative study is not sufficient to undermine it.

While data on quality markers such as readmission rate and mortality remain elusive, the 2 studies by Conway et al.20, 22 attempt to evaluate quality by comparing the use of evidence‐based therapies by hospitalists and community pediatricians. Though the objective PHIS data for UTI in 2009 did not confirm the conclusion suggested by the 2006 provider survey, the attempt to find measurable outcomes, such as the use of evidence‐based therapies, is a start; more metrics, including rigorous patient outcome measures, are needed to define the quality of our care systems. Before the effect of hospitalist systems on quality can be fully understood, more work will need to be done defining metrics for comparison.

Unfortunately, more than 5 years after the previous review by Landrigan et al.9 called for an increased focus on inpatient quality and on understanding how to improve it, the sophistication of our measurement of pediatric inpatient quality, and our understanding of the mechanisms underlying improvement, is still in its infancy. We propose solutions at multiple levels.

First, investment in research comparing system‐level interventions (eg, discharge process A vs discharge process B) must be increased. This investment grew significantly with the more than $1 billion in Recovery Act funding for comparative effectiveness research.23 However, the future investment in comparative effectiveness research, often called patient‐centered outcomes research, and the proportion of that investment focused on delivery‐system interventions, are unclear. We propose that investment in comparing delivery‐system interventions is essential to improving not only hospital medicine systems but, more importantly, the healthcare system broadly. In addition, research investment needs to focus on reliably implementing proven interventions in systems of care, and on evaluating both the effects on patient outcomes and cost and the contextual factors associated with successful implementation.24 A hospital medicine example would be comparison of the implementation of a guideline for a common disease across a set of hospitals. One could use a prospective observational design, comparing high‐intensity versus low‐intensity implementation and assessing the baseline characteristics of the hospital systems, to understand their association with successful implementation and, ultimately, patient outcomes. Alternatively, one could use a cluster‐randomized design.
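To make the cluster‐randomized option concrete: assignment to high‐ versus low‐intensity guideline implementation happens at the hospital level, so every patient in a given hospital is exposed to the same strategy. A minimal sketch in Python (the hospital labels and arm names are hypothetical illustrations, not from any actual study):

```python
import random

def cluster_randomize(hospitals, seed=0):
    """Randomly assign whole hospitals (clusters) to two implementation arms.

    Randomizing at the cluster level keeps every patient within a hospital
    under the same guideline-implementation strategy, which is what
    distinguishes a cluster-randomized design from patient-level randomization.
    """
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    shuffled = list(hospitals)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "high_intensity": sorted(shuffled[:half]),
        "low_intensity": sorted(shuffled[half:]),
    }

# Hypothetical hospitals in a guideline-implementation study
arms = cluster_randomize(["A", "B", "C", "D", "E", "F"])
```

Analysis of such a design would then compare outcomes between arms while accounting for within‐hospital correlation.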

Second, the development and implementation of pediatric quality‐of‐care measures, including in the inpatient setting, needs to accelerate. The Children's Health Insurance Program (CHIP), with its focus on an initial core set of quality measures that expands over time through investment in measure development and validation, presents an opportunity for pediatric hospital medicine. Inpatient measures should be a focus of measure development and implementation. We must move beyond a limited set of inpatient measures to a broader set focused on issues such as patient safety, hospital‐acquired infections, outcomes for common illnesses, and transitions of care. We also need better measures for important pediatric populations, such as children with complex medical conditions.25

Third, our understanding of the mechanisms leading to improvement in hospital medicine systems needs to be developed. Studies of hospital medicine systems should move past simple binary comparisons of hospitalist versus traditional systems to examine the effect on patient outcomes and cost of factors such as years of experience, volume of patients seen overall and with a specific condition, staffing model, training, quality improvement knowledge and its application, and health information systems. These factors may have additive or multiplicative effects on the performance of inpatient systems once they are put into place, but these hypotheses need to be tested.

Fourth, individual hospitalists and their groups must focus on quality measurement and on improvement in the quality and value they deliver. At Cincinnati Children's, we have a portfolio of quality and value projects derived from our strategic objectives, illustrated in Figure 2. The projects have leaders and teams to drive improvement and measure results, and increasingly we are able to publish these results in peer‐reviewed journals. On a quarterly basis, we review the portfolio via a dashboard and run and control charts, and we establish new projects and set new goals on at least an annual basis. It is important to note that at the beginning of the 2010‐2011 fiscal year, almost all initiatives identified as priorities were yellow or red on this dashboard. Our group is now planning new initiatives and goals for next year. This is one method applicable to our setting, but a focus on quality, value, and measured results needs to be part of every hospital medicine program. As payer focus on value increases, such measurement will be essential to demonstrating how a hospitalist group improves outcomes and adds value.

Figure 2
Quality dashboard for the hospital medicine unit at Cincinnati Children's Hospital. At the beginning of the fiscal year, almost all initiatives identified as priorities were yellow or red. The group is now planning new initiatives and goals for next year. Abbreviations: ED, emergency department; FY, fiscal year; HM, hospital medicine; IV, intravenous; PICU, pediatric intensive care unit.
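The control charts used in the quarterly portfolio review rely on standard statistical‐process‐control formulas to separate routine variation from special cause. A minimal sketch of an individuals (XmR) Shewhart chart, using entirely hypothetical monthly data (this is not the group's actual tooling or metric):

```python
def xmr_limits(values):
    """Return (center, lower, upper) control limits for an individuals chart.

    Uses the standard XmR formula: mean +/- 2.66 * average moving range,
    where 2.66 = 3 / d2 with d2 = 1.128 for subgroups of size 2.
    """
    n = len(values)
    center = sum(values) / n
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return center, center - 2.66 * mr_bar, center + 2.66 * mr_bar

def out_of_control(values):
    """Indices of points falling outside the control limits."""
    _, lo, hi = xmr_limits(values)
    return [i for i, v in enumerate(values) if v < lo or v > hi]

# Hypothetical monthly percentage of discharges meeting a quality target;
# the dip to 70 in month 8 (index 7) falls below the lower control limit.
monthly_pct = [82, 85, 84, 86, 83, 85, 84, 70, 85, 86, 84, 85]
flagged = out_of_control(monthly_pct)
```

A point flagged this way would prompt the project team to investigate a special cause rather than react to ordinary month‐to‐month noise.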

CONCLUSION

This review suggests that the use of hospitalists can improve the quality of inpatient care in the pediatric population, but this is not a universal finding and, most importantly, the mechanisms of improvement are poorly understood. We propose 4 components to address these issues so that a systematic review 5 years from now would be much more robust. These are: 1) increased investment in research comparing system‐level interventions and reliable implementation; 2) further development and implementation of pediatric quality of care measures in the inpatient setting; 3) understanding the mechanisms and factors leading to improvement in hospital medicine systems; and 4) an increased focus on quality measurement, and improvement in quality and value delivered by all individual hospitalists and their groups.

In the United States, general medical inpatient care is increasingly provided by hospital‐based physicians, also called hospitalists.1 The field of pediatrics is no exception, and by 2005 there were an estimated 1000 pediatric hospitalists in the workforce.2 Current numbers are likely to be greater than 2500, as the need for pediatric hospitalists has grown considerably.

At the same time, the quality of care delivered by the United States health system has come under increased scrutiny. In 2001, the Institute of Medicine, in its report on the quality of healthcare in America, concluded that "[b]etween the health care we have and the care we could have lies not just a gap, but a chasm."3 Meanwhile, the cost of healthcare delivery continues to increase. The pressure to deliver cost‐effective, high‐quality care is among the more important forces driving the proliferation of hospitalists.4

Over the last decade, data supporting the role of hospitalists in improving quality of care for adult patients have continued to accumulate.5-8 A 2007 retrospective cohort study by Lindenauer et al.7 of nearly 77,000 adult patients found small reductions in length of stay without adverse effects on mortality or readmission rates, and a 2009 systematic review by Peterson6 of 33 studies concluded that, in general, care of medical inpatients by hospitalist physicians leads to decreased hospital cost and length of stay. A 2002 study by Meltzer et al.8 further suggested that improvements in costs and short‐term mortality are related to hospitalists' disease‐specific experience.

Similar data for pediatric hospitalists have been slower to emerge. A systematic review of the literature by Landrigan et al., which included studies through 2004, concluded that "[r]esearch suggests that pediatric hospitalists decrease costs and length of stay. The quality of care in pediatric hospitalist systems is unclear, because rigorous metrics to evaluate quality are lacking."9 Since the publication of that review, multiple studies have sought to evaluate the quality of pediatric hospitalist systems. This review was undertaken to synthesize this new information and to determine the effect of pediatric hospitalist systems on quality of care.

METHODS

A review of the available English‐language literature in the Medline database was undertaken in November 2010 to answer the question, "What are the differences in quality of care and outcomes of inpatient medical care provided by hospitalists versus non‐hospitalists in the pediatric population?" Care metrics of interest were categorized according to the Society of Hospital Medicine's recommendations for measuring hospital performance.10

Search terms used (with additional medical subject heading [MeSH] terms in parentheses) were: hospital medicine (hospitalist), pediatrics (child health, child welfare), cost (cost and cost analysis), quality (quality indicators, healthcare), outcomes (outcome assessment, healthcare; outcomes and process assessment, healthcare), volume, patient satisfaction, length of stay, productivity (efficiency), provider satisfaction (attitude of health personnel, job satisfaction), mortality, and readmission rate (patient readmission). The citing‐articles search tool was used to identify other articles that could potentially meet criteria. Finally, references cited in the selected articles, as well as in excluded literature reviews, were searched for additional articles.

Articles were deemed eligible if they were published in a peer‐reviewed journal, had a comparative study design (hospitalists versus non‐hospitalists), and dealt exclusively with pediatric hospitalists. Noncomparative studies were excluded, as were studies of settings other than an inpatient pediatric ward, such as pediatric intensive care units or emergency departments. The search algorithm is diagrammed in Figure 1.

Figure 1
Search strategy. Abbreviations: ICU, intensive care unit.

The selected articles were reviewed for the relevant outcome measures. The quality of each article was assessed using the Oxford Centre for Evidence‐Based Medicine levels of evidence,11 a widely accepted standard for critical analysis of studies. Levels of evidence are assigned to studies from 1a (systematic reviews of randomized controlled trials) to 5 (expert opinion only). Well‐conducted prospective cohort studies receive a rating of 2c; those with wide confidence intervals due to small sample size receive a minus (−) modifier. This system does not specifically address survey studies, which were therefore not assigned a level of evidence.

RESULTS

The screening process yielded 92 possibly relevant articles, which were then reviewed individually by title and abstract (by G.M.M.). A total of 81 articles were excluded: 48 studies that were either noncomparative or descriptive in nature; 10 reviews that did not contain primary data; 9 studies not restricted to the pediatric population; 7 studies without outcomes related to quality (eg, billing performance); and 7 studies of hospitalists in settings other than general pediatric wards (eg, pediatric intensive care units). Ten studies were thus identified. The citing‐articles tool was used to identify an additional article that met criteria, yielding 11 total articles included in the review.

Five of the identified studies, published prior to 2005, were previously reviewed by Landrigan et al.9 Since then, 6 additional studies of a similar nature have been published and are included here. Articles that met criteria but appeared in the earlier review are summarized in Table 1; new articles appear in Table 2. The results of all 11 articles are included in this discussion.

Previously Reviewed Reports Comparing Outcomes for Hospitalists vs Non‐Hospitalists
Source Site Study Design Outcomes Measured (Oxford Level of Evidence) Results for Hospitalists
  • NOTE: Levels of evidence are assigned to studies, from 1a (systematic reviews of randomized controlled trials) to 5 (expert opinion only). Well‐conducted prospective cohort studies receive a rating of 2c; those with wide confidence intervals due to small sample size receive a minus (−) modifier.

  • Abbreviations: LOS, length of stay.

Bellet and Whitaker13 (2000) Cincinnati Children's Hospital Medical Center, Cincinnati, OH 1440 general pediatric patients LOS, costs (2c) LOS shorter (2.4 vs 2.7 days)
Retrospective cohort study Readmission rate, subspecialty consultations, mortality (2c, low power) Costs lower ($2720 vs $3002)
Readmissions higher for hospitalists (1% vs 3%)
No differences in consultations
No mortality in study
Ogershok et al.16 (2001) West Virginia University Children's Hospitals, Morgantown, WV 2177 general pediatric patients LOS, cost (2c) No difference in LOS
Retrospective cohort study Readmission rate, patient satisfaction, mortality (2c, low power) Costs lower ($1238 vs $1421)
Lab and radiology tests ordered less often
No difference in mortality or readmission rates
No difference in satisfaction scores
Wells et al.15 (2001) Valley Children's Hospital, Madera, CA 182 general pediatric patients LOS, cost, patient satisfaction, follow‐up rate (2c, low power) LOS shorter (45.2 vs 66.8 hr; P = 0.01)
Prospective cohort study No LOS or cost benefit for patients with bronchiolitis, gastroenteritis, or pneumonia
Costs lower ($2701 vs $4854; P = 0.005) for patients with asthma
No difference in outpatient follow‐up rate
Landrigan et al.14 (2002) Boston Children's Hospital, Boston, MA 17,873 general pediatric patients LOS, cost (2c) LOS shorter (2.2 vs 2.5 days)
Retrospective cohort study Readmission rate, follow‐up rate, mortality (2c, low power) Costs lower ($1139 vs $1356)
No difference in follow‐up rate
No mortality in study
Dwight et al.12 (2004) Hospital for Sick Children, Toronto, Ontario, Canada 3807 general pediatric patients LOS (2c) LOS shorter (from 2.9 to 2.5 days; P = 0.04)
Retrospective cohort study Subspecialty consultations, readmission rate, mortality (2c, low power) No difference in readmission rates
No difference in mortality
Previously Unreviewed Reports Comparing Outcomes for Hospitalists vs Non‐Hospitalists
Source Site Study Design Outcomes Measured (Oxford Level of Evidence) Results for Hospitalists
  • NOTE: Levels of evidence are assigned to studies, from 1a (systematic reviews of randomized controlled trials) to 5 (expert opinion only). Well‐conducted prospective cohort studies receive a rating of 2c; those with wide confidence intervals due to small sample size receive a minus (−) modifier.

  • Abbreviations: DRGs, diagnosis‐related groups; EBM, evidence‐based medicine; GI, gastrointestinal; Heme/Onc, hematology/oncology; LOS, length of stay; PHIS, Pediatric Health Information System; RUS, renal ultrasound; UTI, urinary tract infection; VCUG, voiding cystourethrogram.

Boyd et al.21 (2006) St Joseph's Hospital and Medical Center, Phoenix, AZ 1009 patients with 11 most common DRGs (3 groups) Cost, LOS, and readmission rate (2c, low power) LOS longer (2.6 ± 2.0 vs 3.1 ± 2.6 vs 2.9 ± 2.3 days, mean ± SD)
Retrospective cohort study Costs higher ($1781 ± $1449 (faculty) vs $1954 ± $1212 (hospitalist group 1) vs $1964 ± $1495 (hospitalist group 2))
No difference in readmission rates
Conway et al.22 (2006) National provider survey 213 hospitalists and 352 community pediatrician survey responses Self‐reported evidence‐based medicine use (descriptive study, no assignable level) Hospitalists more likely to follow evidence‐based guidelines for the following: VCUG and RUS after first UTI, albuterol and ipratropium in first 24 hr for asthma
Descriptive study Hospitalists less likely to use the following unproven therapies: levalbuterol and inhaled or oral steroids for bronchiolitis, stool culture or rotavirus testing for gastroenteritis, or ipratropium after 24 hr for asthma
Srivastava et al.17 (2007) University of Utah Health Sciences Center, Salt Lake City, UT 1970 patients with asthma, dehydration, or viral illness LOS, cost (2c, no confidence intervals reported) LOS shorter for asthma (0.23 days, 13%) and for dehydration (0.19 days, 11%)
Retrospective cohort study No LOS difference for patients with viral illness
Costs lower for asthma ($105.51, 9.3%) and for dehydration ($86.22, 7.8%)
Simon et al.19 (2007) Children's Hospital of Denver, Denver, CO 759 patients undergoing spinal fusion before and after availability of hospitalist consultation LOS (4, unaccounted confounding factors) LOS shorter, from 6.5 (6.2-6.7) days to 4.8 (4.5-5.1) days
Retrospective cohort study
Bekmezian et al.18 (2008) UCLA Hospital and Medical Center, Los Angeles, CA 925 subspecialty patients on GI and Heme/Onc services vs hospitalist service LOS, cost, readmission rate, mortality (2c, low power) LOS shorter (38%, P < 0.01)
Retrospective cohort study Cost lower (29%, P < 0.05)
Readmissions lower (36 for faculty vs none for hospitalists, P = 0.02)
No difference in mortality
Conway and Keren20 (2009) Multicenter, 25 children's hospitals 20,892 patients identified with UTI admissions in PHIS database LOS, cost, evidence‐based medicine use (2c) No difference in LOS
Retrospective cohort study No difference in cost
No difference in performance of EBM guideline (VCUG and RUS for first UTI)

Effect on Length of Stay, Cost, and Resource Utilization

Ten articles addressed length of stay as an outcome measure, and 8 included cost as well. Five have been previously reported9 (see Table 1). Of these, Dwight et al.,12 Bellet and Whitaker,13 and Landrigan et al.14 found decreased length of stay (LOS) and cost for all patients. Wells et al.15 found significantly decreased LOS and cost for asthma patients but not for all diagnoses taken together, and Ogershok et al.16 found lower hospital costs but no difference in length of stay. Five of the 6 new studies, listed in Table 2, reported on length of stay and cost. Three showed some benefit for length of stay: Srivastava et al.17 reported improvements in length of stay and cost for asthma and dehydration, but not for all diagnoses together; Bekmezian et al.18 reported improved length of stay and cost for pediatric hospitalists caring for patients on a hematology and gastroenterology service; and Simon et al.19 attributed a generalized decrease in length of stay on a surgical service to the implementation of hospitalist comanagement of its most complex patients, though hospitalists comanaged only 12% of the patients in the study. A multicenter study in 2009 by Conway and Keren20 reported no significant difference in length of stay for general pediatric patients with urinary tract infections.

Of the 4 total studies that showed significant advantage in length of stay for hospitalist groups, improvement ranged from 11% to 38%. All attempted to adjust for diagnosis and severity using diagnosis‐related groups (DRGs) or other methods. Dwight et al.,12 Bellet and Whitaker,13 and Bekmezian et al.18 used retrospective or historical comparison alone, while Landrigan et al.14 had both concurrent and historical comparison groups.

In contrast to the other studies, Boyd et al.21 in 2006 found significant advantages, in both length of stay and cost, for a faculty/resident service in comparison to a hospitalist service. This nonrandomized, retrospective cohort study included 1009 pediatric patients, with the 11 most common DRGs, admitted during the same time period to either a traditional faculty/resident team or 1 of 2 private practice hospitalist groups at an academic medical center. The 8 general pediatric faculty practice attendings were dedicated to inpatient care while on service, and rotated bimonthly. The authors found that the faculty group patients had significantly shorter lengths of stay and total direct patient costs.

Cost‐comparison results were reported by 7 of the studies. Bellet and Whitaker,13 Landrigan et al.,14 Ogershok et al.,16 and Bekmezian et al.18 reported reductions in cost for all patients varying from 9% to 29%, while Wells et al.15 and Srivastava et al.17 found reductions in cost only for patients with certain diagnoses. Srivastava et al.17 analyzed 1970 patients, admitted with primary diagnoses of asthma, dehydration, or viral illness, over a 5‐year period from 1993 to 1997. Cost‐per‐patient was reduced between 9.3% for asthma and 7.8% for dehydrations, but when combined with the viral illness group, the difference was not statistically significant. Wells et al.15 studied 182 admissions over a 1‐year period, and found significant reductions in cost of 44% (P < 0.005) for patients with asthma but not for bronchiolitis, gastroenteritis, or pneumonia. In 2009, Conway and Keren20 studied a multicentered cohort of 20,892 children hospitalized for urinary tract infection, and found no significant difference in hospitalization costs between hospitalist services and more traditional models.

Other Quality Measures

Though financial outcomes (length of stay, cost, and resource utilization) were the primary area of emphasis for most of the selected articles, other parameters with more of a focus on quality were examined as well. The studies by Dwight et al.,12 Bellet and Whitaker,13 Landrigan et al.,14 Ogershok et al.,16 Bekmezian et al.,18 and Boyd et al.21 examined mortality and readmission rate. None of these studies reported differences in mortality rate, though none were powered to do so. When studying readmission rate, Bellet and Whitaker13 reported a statistically significant lower rate of readmission for a traditionally staffed service versus the hospitalist service (1% vs 3%; P = 0.006). In contrast, Bekmezian et al.18 found a lower readmission rate for the hospitalist service (4.4% vs 0%; P = 0.02). The studies by Dwight et al.,12 Landrigan et al.,14 Ogershok et al.,16 and Boyd et al.21 did not detect differences in readmission rates.

Two studies measured patient satisfaction.15, 16 Ogershok et al.16 utilized hospital‐generated patient satisfaction surveys, completed at discharge, for comparison and found no differences between the hospitalist and non‐hospitalist ward services. Wells et al.15 utilized a standardized patient satisfaction assessment tool, given at discharge, followed by a telephone interview after 1 month. At discharge, parents rated hospitalist physicians higher in courtesy (P < 0.05) and friendliness (P < 0.005), though this difference was not detected in the telephone interviews 1 month later. However, at that time, parents did indicate that they received better explanations about their child's illness if their child was seen by their primary care physician rather than a hospitalist.

In 2006, a study by Conway et al.22 reported on the use of evidence‐based therapies and tests by hospitalists as compared to community pediatricians. The survey identified evidence‐based therapies and tests for asthma, bronchiolitis, gastroenteritis, and first‐time urinary tract infection (UTI) diagnosis. A total of 213 hospitalists and 228 community pediatricians met the inclusion criteria by returning the completed survey. After multivariate regression analysis, hospitalists were found to be more likely to use 4 of 5 evidence‐based therapies and recommended tests, and were less likely to use 6 of 7 therapies and tests of unproven benefit. In 2009, Conway and Clancy23 again studied the use of evidence‐based therapies, this time using more objective measures. In this report, the Pediatric Health Information System (PHIS) was examined for a cohort of 20,892 patients. After multivariable regression analysis, there was no statistical difference in the performance of evidence‐based imaging following a first UTI between hospitals staffed primarily by community pediatricians versus those with pediatric hospitalist systems. However, it should be noted that the evidence base for UTI‐related imaging has been debated in the literature over the past decade.

DISCUSSION

Of the 11 studies selected for this review, 10 measured length of stay as an outcome, with the majority favoring hospitalists but with mixed results. Three of these studies, those by Dwight et al.,12 Bellet and Whitaker,13 and Landrigan et al.,14 demonstrated 11% to 14% improvement for hospitalist services compared to community pediatricians. Boyd et al.,21 however, found exactly the opposite result, and 2 studies by Conway and Keren20 and Ogershok et al.16 found no difference in length of stay. Two more studies found benefits restricted to certain conditions: Wells et al.15 found 32% shorter lengths of stay for asthma, but not for other conditions; Srivastava et al.17 found a 13% reduction in length of stay for asthma and 11% for dehydration, but none for viral illnesses or when all conditions were combined. Bekmezian et al.18 found shorter lengths of stay on a hospitalist service for hematology and gastroenterology patients, and Simon et al.19 attribute a general trend of decreasing lengths of stay on a surgical service to the implementation of hospital comanagement for a small percentage of patients.

The most common quality measures studied were patient satisfaction, readmission rates, and mortality. Only 1 study, by Ogershok et al.,16 reported on patient satisfaction, finding few differences between hospitalists and community pediatricians. Readmission rates were reported by 6 studies. Bellet and Whitaker13 found a higher readmission rate for pediatric hospitalists, while Bekmezian et al.18 found a lower rate, albeit on a subspecialty service. The study with the greatest power for this analysis, by Landrigan et al.14 with nearly 18,000 patients, found no difference, and neither did the other 3 studies. Unsurprisingly, no study detected differences in mortality; it would be extremely difficult to adequately power a study to do so in the general pediatric setting, where mortality is rare.
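The power problem can be made concrete with a back‐of‐the‐envelope sample size calculation. The sketch below uses the standard two‐proportion normal‐approximation formula; the mortality rates (0.10% vs 0.05%) are hypothetical values chosen only for illustration, not figures from any study in this review.

```python
import math

def n_per_arm(p1, p2):
    """Approximate sample size per arm to detect a difference between two
    proportions, at two-sided alpha = 0.05 and 80% power (normal approximation).
    The illustrative rates passed in are assumptions, not data from the review."""
    z_alpha = 1.959964  # two-sided 5% critical value of the standard normal
    z_beta = 0.841621   # critical value corresponding to 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a halving of a 0.10% inpatient mortality rate would require
# tens of thousands of admissions per study arm.
n = n_per_arm(0.0010, 0.0005)
print(n)
```

With rates this low, the required cohort runs to roughly 50,000 admissions per arm, which illustrates why none of the reviewed studies could plausibly detect a mortality difference.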

The effect of the relative experience of hospitalist physicians is uncertain. Boyd et al.21 speculated that 1 possible cause of the decreased lengths of stay and costs associated with their faculty group, compared to hospitalists, may have been the greater experience of the faculty group. Unfortunately, the small number of physicians in the study precluded a statistically meaningful analysis. In contrast, the hospitalists in the report by Dwight et al.12 achieved decreased lengths of stay despite being less experienced. In the adult literature, the study by Meltzer et al.8 suggests that improved outcomes from hospitalist systems may not become apparent for 1 or more years after implementation, but none of the pediatric studies included in our review specifically addresses this issue. This leaves open the possibility that the hospitalist systems evaluated in some studies had insufficient time in which to develop increased efficiencies.

There were several limitations to our review. First, due to the heterogeneity and methodological variation among the included studies, we were unable to perform a meta‐analysis. Second, the overall quality of evidence is limited by the lack of randomized controlled trials. Third, a lack of agreement on appropriate quality markers has limited the study of quality of care. Published reports continue to focus on financial measures, such as length of stay, despite the recommendation in the previous review by Landrigan et al.9 that such studies would be of limited value. Finally, the current variability of hospitalist models, and the lack of study of factors that might influence outcomes, makes comparisons difficult.

Despite these limitations, several interesting trends emerge from these studies. One is that the more recent studies highlight how a simple classification of hospitalist system versus traditional system fails to capture the complexity and nuance of care delivery. The 2006 study by Boyd et al.21 is especially notable because it showed the opposite effect of previous studies, namely, an increase in length of stay and costs for hospitalists at St Joseph's Medical Center in Phoenix, Arizona. In that study, the traditional faculty group was employed by the hospital, while the hospitalist group operated under a private practice model. The authors suggest that their faculty physicians were therefore functioning like hospitalists, in that almost all of their time was focused on inpatient care while on service. The faculty group also comprised a small number of general pediatricians who attended in the inpatient setting and were more experienced than the private practice groups. The authors further theorize that their faculty may have had a closer working relationship with the residents because of additional service responsibilities and the onsite location of the faculty group. Further study of the care models utilized by faculty and hospitalist practices at St Joseph's and other hospitals may reveal important insights about improving the quality and efficiency of inpatient pediatric care in general.

Though there is a clear trend in the adult literature indicating that the use of hospitalists results in superior quality of care, there is less evidence for pediatric systems. The aforementioned 2006 review by Landrigan et al.9 concluded that emerging research suggested pediatric hospitalist systems decrease cost and length of stay, but that the quality of care in such systems remained unclear because rigorous metrics to evaluate quality were lacking. Data from the 6 additional studies presented here lend limited support to the first conclusion, and the presence of only 1 negative study is not sufficient to undermine it.

While data on quality markers such as readmission rates and mortality remain elusive, the 2 studies by Conway et al.20, 22 attempt to evaluate quality by comparing the use of evidence‐based therapies by hospitalists and community pediatricians. Though the objective PHIS data for UTI in 2009 did not confirm the conclusion suggested by the 2006 provider survey, the attempt to find measurable outcomes, such as the use of evidence‐based therapies, is a start. We need more metrics, including rigorous patient outcome metrics, to define the quality of our care systems; before the effect of hospitalist systems on quality can be fully understood, more work will need to be done defining metrics for comparison.

Unfortunately, more than 5 years after the previous review by Landrigan et al.9 called for an increased focus on inpatient quality and on understanding how to improve it, the measurement of pediatric inpatient quality, and our understanding of the mechanisms underlying improvement, remain in their infancy. We propose a solution at multiple levels.

First, the investment in research comparing system‐level interventions (eg, discharge process A vs discharge process B) must be increased. This investment grew significantly with the more than $1 billion in Recovery Act funding for comparative effectiveness research.23 However, the future investment in comparative effectiveness research, often called patient‐centered outcomes research, and the proportion of that investment focused on delivery system interventions, are unclear. We propose that investment in comparing delivery system interventions is essential to improving not only hospital medicine systems but, more importantly, the healthcare system broadly. In addition, research investment needs to focus on reliably implementing proven interventions in systems of care, and on evaluating both the effects on patient outcomes and cost and the contextual factors associated with successful implementation.24 A hospital medicine example would be the comparison of the implementation of a guideline for a common disease across a set of hospitals. One could use a prospective observational design, comparing high‐intensity versus low‐intensity implementation and assessing the baseline characteristics of the hospital systems, to understand their association with successful implementation and, ultimately, patient outcomes. One could also use a cluster‐randomized design.

Second, the development and implementation of pediatric quality of care measures, including in the inpatient setting, need to accelerate. The Children's Health Insurance Program (CHIP), with its focus on an initial core set of quality measures that expands over time through investment in measure development and validation, is an opportunity for pediatric hospital medicine. Inpatient measures should be a focus of measure development and implementation. We must move beyond a limited set of inpatient measures to a broader set focused on issues such as patient safety, hospital‐acquired infections, outcomes for common illnesses, and transitions of care. We also need better measures for important pediatric populations, such as children with complex medical conditions.25

Third, our understanding of the mechanisms leading to improvement in hospital medicine systems needs to be developed. Studies of hospital medicine systems should move past simple binary comparisons of hospitalist systems versus traditional systems to understand how factors such as years of experience, volume of patients seen overall and with a specific condition, staffing model, training, quality improvement knowledge and application, and health information systems affect patient outcomes and cost. These factors may have additive or multiplicative effects on the performance of inpatient systems once they are put into place, but these hypotheses need to be tested.

Fourth, individual hospitalists and their groups must focus on quality measurement and on improving the quality and value they deliver. At Cincinnati Children's, we have a portfolio of quality and value projects derived from our strategic objectives, illustrated in Figure 2. Each project has a leader and a team to drive improvement and measure results, and increasingly we are able to publish these results in peer‐reviewed journals. On a quarterly basis, we review the portfolio via a dashboard and run and control charts, and we establish new projects and set new goals at least annually. It is worth noting that at the beginning of the 2010‐2011 fiscal year, almost all initiatives identified as priorities were yellow or red; our group is now planning new initiatives and goals for next year. This is one method applicable to our setting, but a focus on quality, value, and measuring results needs to be part of every hospital medicine program. As payer focus on value increases, this will be essential to demonstrating how a hospitalist group improves outcomes and adds value.
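To make the control‐chart review described above concrete, the sketch below computes 3‐sigma limits for a p‐chart, the chart commonly used for proportion measures on a quality dashboard. The monthly counts are hypothetical, invented solely for illustration; they are not data from our dashboard.

```python
import math

# Hypothetical monthly data for a proportion measure (eg, readmissions).
events = [12, 9, 14, 11, 8, 13]          # events per month (illustrative)
denoms = [300, 280, 310, 295, 275, 305]  # discharges per month (illustrative)

# Centerline: the overall proportion across all months.
p_bar = sum(events) / sum(denoms)

# 3-sigma control limits vary with each month's denominator.
limits = []
for n in denoms:
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    limits.append((max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)))

# A monthly proportion outside its limits signals special-cause variation
# worth investigating; points inside reflect common-cause noise.
signals = [not (lcl <= e / n <= ucl)
           for (e, n), (lcl, ucl) in zip(zip(events, denoms), limits)]
print(round(p_bar, 4), signals)
```

On a quarterly dashboard review, a flagged month prompts investigation, while a run of in‐control months suggests that any apparent month‐to‐month change is noise rather than real improvement or deterioration.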

Figure 2
Quality dashboard for the hospital medicine unit at Cincinnati Children's Hospital. At the beginning of the fiscal year, almost all initiatives identified as priorities were yellow or red; the group is now planning new initiatives and goals for next year. Abbreviations: ED, emergency department; FY, fiscal year; HM, hospital medicine; IV, intravenous; PICU, pediatric intensive care unit.

CONCLUSION

This review suggests that the use of hospitalists can improve the quality of inpatient care in the pediatric population, but this is not a universal finding and, most importantly, the mechanisms of improvement are poorly understood. We propose 4 components to address these issues so that a systematic review 5 years from now would be much more robust. These are: 1) increased investment in research comparing system‐level interventions and reliable implementation; 2) further development and implementation of pediatric quality of care measures in the inpatient setting; 3) understanding the mechanisms and factors leading to improvement in hospital medicine systems; and 4) an increased focus on quality measurement, and improvement in quality and value delivered by all individual hospitalists and their groups.

References
  1. Wachter RM, Goldman L. The emerging role of "hospitalists" in the American health care system. N Engl J Med. 1996;335(7):514-517.
  2. Lye PS, Rauch DA, Ottolini MC, et al. Pediatric hospitalists: report of a leadership conference. Pediatrics. 2006;117(4):1122-1130.
  3. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
  4. Wachter RM, Goldman L. The hospitalist movement 5 years later. JAMA. 2002;287(4):487-494.
  5. Coffman J, Rundall TG. The impact of hospitalists on the cost and quality of inpatient care in the United States: a research synthesis. Med Care Res Rev. 2005;62(4):379-406.
  6. Peterson MC. A systematic review of outcomes and quality measures in adult patients cared for by hospitalists vs nonhospitalists. Mayo Clin Proc. 2009;84(3):248-254.
  7. Lindenauer PK, Rothberg MB, Pekow PS, Kenwood C, Benjamin EM, Auerbach AD. Outcomes of care by hospitalists, general internists, and family physicians. N Engl J Med. 2007;357(25):2589-2600.
  8. Meltzer D, Manning WG, Morrison J, et al. Effects of physician experience on costs and outcomes on an academic general medicine service: results of a trial of hospitalists. Ann Intern Med. 2002;137(11):866-875.
  9. Landrigan CP, Conway PH, Edwards S, Srivastava R. Pediatric hospitalists: a systematic review of the literature. Pediatrics. 2006;117(5):1736-1744.
  10. Society of Hospital Medicine. Measuring hospitalist performance: metrics, reports, and dashboards. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Publications. Published April 2007.
  11. Oxford Centre for Evidence-Based Medicine levels of evidence. Updated March 2009. Available at: http://www.cebm.net/index.aspx?o=1025. Accessed March 14, 2011.
  12. Dwight P, MacArthur C, Friedman JN, Parkin PC. Evaluation of a staff-only hospitalist system in a tertiary care, academic children's hospital. Pediatrics. 2004;114(6):1545-1549.
  13. Bellet PS, Whitaker RC. Evaluation of a pediatric hospitalist service: impact on length of stay and hospital charges. Pediatrics. 2000;105(3 pt 1):478-484.
  14. Landrigan CP, Srivastava R, Muret-Wagstaff S, et al. Impact of a health maintenance organization hospitalist system in academic pediatrics. Pediatrics. 2002;110(4):720-728.
  15. Wells RD, Dahl B, Wilson SD. Pediatric hospitalists: quality care for the underserved? Am J Med Qual. 2001;16(5):174-180.
  16. Ogershok PR, Li X, Palmer HC, Moore RS, Weisse ME, Ferrari ND. Restructuring an academic pediatric inpatient service using concepts developed by hospitalists. Clin Pediatr (Phila). 2001;40(12):653-662.
  17. Srivastava R, Landrigan CP, Ross-Degnan D, et al. Impact of a hospitalist system on length of stay and cost for children with common conditions. Pediatrics. 2007;120(2):267-274.
  18. Bekmezian A, Chung PJ, Yazdani S. Staff-only pediatric hospitalist care of patients with medically complex subspecialty conditions in a major teaching hospital. Arch Pediatr Adolesc Med. 2008;162(10):975-980.
  19. Simon TD, Eilert R, Dickinson LM, Kempe A, Benefield E, Berman S. Pediatric hospitalist comanagement of spinal fusion surgery patients. J Hosp Med. 2007;2(1):23-30.
  20. Conway PH, Keren R. Factors associated with variability in outcomes for children hospitalized with urinary tract infection. J Pediatr. 2009;154(6):789-796.
  21. Boyd J, Samaddar K, Parra-Roide L, Allen EP, White B. Comparison of outcome measures for a traditional pediatric faculty service and nonfaculty hospitalist services in a community teaching hospital. Pediatrics. 2006;118(4):1327-1331.
  22. Conway PH, Edwards S, Stucky ER, Chiang VW, Ottolini MC, Landrigan CP. Variations in management of common inpatient pediatric illnesses: hospitalists and community pediatricians. Pediatrics. 2006;118(2):441-447.
  23. Conway PH, Clancy C. Comparative-effectiveness research—implications of the federal coordinating council's report. N Engl J Med. 2009;361(4):328-330.
  24. Conway PH, Clancy C. Charting a path from comparative effectiveness funding to improved patient-centered health care. JAMA. 2010;303(10):985-986.
  25. Cohen E, Kuo DZ, Agrawal R, et al. Children with medical complexity: an emerging population for clinical and research initiatives. Pediatrics. 2011;127(3):529-538.
Issue
Journal of Hospital Medicine - 7(4)
Page Number
350-357
Display Headline
Pediatric hospitalist systems versus traditional models of care: Effect on quality and cost outcomes
Article Source
Copyright © 2011 Society of Hospital Medicine
Correspondence Location
Division of General and Community Pediatrics, Hospital Medicine, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 2011, Cincinnati, OH 45229