Project BOOST
Enactment of federal legislation imposing hospital reimbursement penalties for excess rates of rehospitalizations among Medicare fee‐for‐service beneficiaries markedly increased interest in hospital quality improvement (QI) efforts to reduce the observed 30‐day rehospitalization rate of 19.6% in this elderly population.[1, 2] The Congressional Budget Office estimated that reimbursement penalties to hospitals for high readmission rates are expected to save the Medicare program approximately $7 billion between 2010 and 2019.[3] These penalties are complemented by resources from the Center for Medicare and Medicaid Innovation aiming to reduce hospital readmissions by 20% by the end of 2013 through the Partnership for Patients campaign.[4] Although potential financial penalties and provision of resources for QI intensified efforts to enhance the quality of the hospital discharge transition, patient safety risks associated with hospital discharge are well documented.[5, 6] Approximately 20% of patients discharged from the hospital may suffer adverse events,[7, 8] of which up to three‐quarters (72%) are medication related,[9] and over one‐third of required follow‐up testing after discharge is not completed.[10] Such findings indicate opportunities for improvement in the discharge process.[11]
Numerous publications describe studies aiming to improve the hospital discharge process and mitigate these hazards, though a systematic review of interventions to reduce 30‐day rehospitalization found that the existing evidence base demonstrates inconsistent effectiveness and limited generalizability.[12] Most studies showing effectiveness are confined to single academic medical centers. Existing evidence supports multifaceted interventions implemented in both the pre‐ and postdischarge periods and focused on risk assessment and tailored, patient‐centered application of interventions to mitigate risk. For example, Project RED (Re‐Engineered Discharge) applied a bundled intervention consisting of intensified patient education and discharge planning, improved medication reconciliation and discharge instructions, and longitudinal patient contact with follow‐up phone calls and a dedicated discharge advocate.[13] However, the mean age of patients participating in the study was 50 years, and the study excluded patients admitted from or discharged to skilled nursing facilities, making generalizability to the geriatric population uncertain.
An integral aspect of QI projects is the contribution of local context to translation of best practices to disparate settings.[14, 15, 16] Most available reports of successful interventions to reduce rehospitalization have not fully described the specifics of either the intervention context or design. Moreover, the available evidence base for common interventions to reduce rehospitalization was developed in the academic setting. Validation of single academic center studies in a broader healthcare context is necessary.
Project BOOST (Better Outcomes for Older adults through Safe Transitions) recruited a diverse national cohort of both academic and nonacademic hospitals to participate in a QI effort to implement best practices for hospital discharge care transitions using a national collaborative approach facilitated by external expert mentorship. This study aimed to determine the effectiveness of BOOST in lowering hospital readmission rates and its impact on length of stay.
METHODS
The study of Project BOOST was undertaken in accordance with the SQUIRE (Standards for Quality Improvement Reporting Excellence) Guidelines.[17]
Participants
The unit of observation for the prospective cohort study was the clinical acute‐care unit within hospitals. Sites were instructed to designate a pilot unit for the intervention that cared for medical or mixed medical‐surgical patient populations. Sites were also asked to provide outcome data for a clinically and organizationally similar non‐BOOST unit to provide a site‐matched control. Control units were matched by local site leadership based on comparable patient demographics, clinical mix, and extent of housestaff presence. An initial cohort of 6 hospitals in 2008 was followed by a second cohort of 24 hospitals initiated in 2009. All hospitals were invited to participate in the national effectiveness analysis, which required submission of readmission and length of stay data for both a BOOST intervention unit and a clinically matched control unit.
Description of the Intervention
The BOOST intervention consisted of 2 major sequential processes, planning and implementation, both facilitated by external site mentors (physicians expert in QI and care transitions) for a period of 12 months. Extensive background on the planning and implementation components is available from the Society of Hospital Medicine, which administers Project BOOST.
|  | Enrollment Sites, n=30 | Sites Reporting Outcome Data, n=11 | Sites Not Reporting Outcome Data, n=19 | P Value for Comparison of Outcome Data Sites Compared to Othersa |
|---|---|---|---|---|
| Region, n (%) |  |  |  | 0.194 |
| Northeast | 8 (26.7) | 2 (18.2) | 6 (31.6) |  |
| West | 7 (23.4) | 2 (18.2) | 5 (26.3) |  |
| South | 7 (23.4) | 3 (27.3) | 4 (21.1) |  |
| Midwest | 8 (26.7) | 4 (36.4) | 4 (21.1) |  |
| Urban location, n (%) | 25 (83.3) | 11 (100) | 15 (78.9) | 0.035 |
| Teaching status, n (%) |  |  |  | 0.036 |
| Academic medical center | 10 (33.4) | 5 (45.5) | 5 (26.3) |  |
| Community teaching | 8 (26.7) | 3 (27.3) | 5 (26.3) |  |
| Community nonteaching | 12 (40.0) | 3 (27.3) | 9 (47.4) |  |
| No. of beds, mean (SD) | 426.6 (220.6) | 559.2 (187.8) | 349.79 (204.48) | 0.003 |
| No. of tools implemented, n (%) |  |  |  | 0.194 |
| 0 | 2 (6.7) | 0 | 2 (10.5) |  |
| 1 | 2 (6.7) | 0 | 2 (10.5) |  |
| 2 | 4 (13.3) | 2 (18.2) | 2 (10.5) |  |
| 3 | 12 (40.0) | 3 (27.3) | 8 (42.1) |  |
| 4 | 9 (30.0) | 5 (45.5) | 4 (21.1) |  |
| 5 | 1 (3.3) | 1 (9.1) | 1 (5.3) |  |
Mentor engagement with sites consisted of a 2‐day kickoff training on the BOOST tools, where site teams met their mentor and initiated development of structured action plans, followed by 5 to 6 scheduled phone calls in the subsequent 12 months. During these conference calls, mentors gauged progress and sought to help troubleshoot barriers to implementation. Some mentors also conducted a site visit with participant sites. Project BOOST provided sites with several collaborative activities including online webinars and an online listserv. Sites also received a quarterly newsletter.
Outcome Measures
The primary outcome was 30‐day rehospitalization defined as same hospital, all‐cause rehospitalization. Home discharges as well as discharges or transfers to other healthcare facilities were included in the discharge calculation. Elective or scheduled rehospitalizations as well as multiple rehospitalizations in the same 30‐day window were considered individual rehospitalization events. Rehospitalization was reported as a ratio of 30‐day rehospitalizations divided by live discharges in a calendar month. Length of stay was reported as the mean length of stay among live discharges in a calendar month. Outcomes were calculated at the participant site and then uploaded as overall monthly unit outcomes to a Web‐based research database.
To account for seasonal trends as well as marked variation in month‐to‐month rehospitalization rates identified in longitudinal data, we elected to compare 3‐month year‐over‐year averages to determine relative changes in readmission rates from the period prior to BOOST implementation to the period after BOOST implementation. We calculated averages for rehospitalization and length of stay in the 3‐month period preceding the sites' first reported month of front‐line implementation and in the corresponding 3‐month period in the subsequent calendar year. For example, if a site reported implementing its first tool in April 2010, the average readmission rate in the unit for January 2011 through March 2011 was subtracted from the average readmission rate for January 2010 through March 2010.
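The windowing just described can be sketched concretely for a unit at a site that implemented its first tool in April 2010, as in the example in the text; the monthly rates below are invented values for illustration, not data from the study.

```python
# Sketch of the 3-month year-over-year comparison described above.
# Monthly unit readmission rates keyed by (year, month); the values
# are illustrative, not study data.
monthly_rate = {
    (2010, 1): 0.152, (2010, 2): 0.148, (2010, 3): 0.161,
    (2011, 1): 0.131, (2011, 2): 0.127, (2011, 3): 0.139,
}

def three_month_average(rates, year, months):
    """Mean readmission rate over the given months of one year."""
    return sum(rates[(year, m)] for m in months) / len(months)

# First tool implemented April 2010, so the window is January-March.
window = (1, 2, 3)
pre = three_month_average(monthly_rate, 2010, window)   # pre-implementation
post = three_month_average(monthly_rate, 2011, window)  # year-over-year period
change = post - pre  # negative values indicate a reduction in readmissions
```

Differencing 3‐month averages from the same calendar window in consecutive years, rather than comparing single months, damps both seasonal trends and the month‐to‐month noise the authors describe.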
Sites were surveyed regarding tool implementation rates 6 months and 24 months after the 2009 kickoff training session. Surveys were electronically completed by site leaders in consultation with site team members. The survey identified new tool implementation as well as modification of existing care processes using the BOOST tools (admission risk assessment, discharge readiness checklist, teach back use, mandate regarding discharge summary completion, follow‐up phone calls to >80% of discharges). Use of a sixth tool, creation of individualized written discharge instructions, was not measured. We credited sites with tool implementation if they reported either de novo tool use or alteration of previous care processes influenced by BOOST tools.
Clinical outcome reporting was voluntary, and sites did not receive compensation and were not subject to penalty for the degree of implementation or outcome reporting. No patient‐level information was collected for the analysis, which was approved by the Northwestern University institutional review board.
Data Sources and Methods
Readmission and length of stay data, including unit‐level readmission rates drawn from administrative sources at each hospital, were collected using templated spreadsheet software between December 2008 and June 2010, after which data were loaded directly to a Web‐based data‐tracking platform. Sites were asked to load data as they became available. Sites were asked to report the number of study unit discharges as well as the number of those discharges readmitted within 30 days; however, reporting of the number of patient discharges was inconsistent across sites. Serial outreach consisting of monthly phone calls or email messaging to site leaders was conducted throughout 2011 to increase site participation in the project analysis.
Implementation date information was collected from 2 sources. The first was through online surveys distributed in November 2009 and April 2011. The second was through fields in the Web‐based data tracking platform to which sites uploaded data. In cases where disagreement was found between these 2 sources, the site leader was contacted for clarification.
Practice setting (community teaching, community nonteaching, academic medical center) was determined by site‐leader report within the Web‐based data tracking platform. Data for hospital characteristics (number of licensed beds and geographic region) were obtained from the American Hospital Association's Annual Survey of Hospitals.[18] Hospital region was characterized as West, South, Midwest, or Northeast.
Analysis
The null hypothesis was that no pre‐post difference existed in readmission rates within BOOST units, and no difference existed in the pre‐post change in readmission rates in BOOST units when compared to site‐matched control units. The Wilcoxon rank sum test was used to test whether observed changes described above were significantly different from 0, supporting rejection of the null hypotheses. We performed similar tests to determine the significance of observed changes in length of stay. We performed our analysis using SAS 9.3 (SAS Institute Inc., Cary, NC).
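To make the rank‐based paired comparison concrete, the sketch below computes the Wilcoxon signed‐rank statistic for paired pre/post unit readmission rates. The rates are invented for illustration, and an actual analysis would rely on a statistics package (the authors used SAS 9.3; `scipy.stats.wilcoxon` is a common open‐source equivalent) rather than hand‐rolled code.

```python
def signed_rank_statistic(pre, post):
    """Wilcoxon signed-rank statistic W for paired samples.

    Drops zero differences, ranks the absolute pre/post differences
    (tied magnitudes receive their average rank), and returns the
    smaller of the positive- and negative-rank sums.
    """
    diffs = [b - a for a, b in zip(pre, post) if b - a != 0]
    ordered = sorted(diffs, key=abs)
    ranks = {}  # absolute difference -> (average) rank
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and abs(ordered[j]) == abs(ordered[i]):
            j += 1
        avg_rank = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[abs(ordered[k])] = avg_rank
        i = j
    w_plus = sum(ranks[abs(d)] for d in diffs if d > 0)
    w_minus = sum(ranks[abs(d)] for d in diffs if d < 0)
    return min(w_plus, w_minus)

# Invented pre/post readmission rates for 5 paired units.
pre_rates = [0.147, 0.152, 0.139, 0.160, 0.145]
post_rates = [0.127, 0.131, 0.140, 0.142, 0.129]
w = signed_rank_statistic(pre_rates, post_rates)
```

A small W relative to its null distribution yields a small p‐value; in this invented example four of five units improved, so nearly all of the rank weight falls on the negative (improvement) side.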
RESULTS
Eleven hospitals provided rehospitalization and length‐of‐stay outcome data for both a BOOST and control unit for the pre‐ and postimplementation periods. Compared to the 19 sites that did not participate in the analysis, these 11 sites were significantly larger (559±188 beds vs 350±205 beds, P=0.003), more likely to be located in an urban area (100.0% [n=11] vs 78.9% [n=15], P=0.035), and more likely to be academic medical centers (45.5% [n=5] vs 26.3% [n=5], P=0.036) (Table 1).
The mean number of tools implemented by sites participating in the analysis was 3.5±0.9. All sites implemented at least 2 tools. The duration between attendance at the BOOST kickoff event and first tool implementation ranged from −3 months (first tool implemented prior to attending the kickoff) to 9 months (mean duration, 3.3±4.3 months) (Table 2).
| Hospital | Region | Hospital Type | No. Licensed Beds | Kickoff to Implementation, moa | Risk Assessment | Discharge Checklist | Teach Back | Discharge Summary Completion | Follow‐up Phone Call | Total |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Midwest | Community teaching | <300 | 8 |  |  |  |  |  | 3 |
| 2 | West | Community teaching | >600 | 0 |  |  |  |  |  | 4 |
| 3 | Northeast | Academic medical center | >600 | 2 |  |  |  |  |  | 4 |
| 4 | Northeast | Community nonteaching | <300 | 9 |  |  |  |  |  | 2 |
| 5 | South | Community nonteaching | >600 | 6 |  |  |  |  |  | 3 |
| 6 | South | Community nonteaching | >600 | 3 |  |  |  |  |  | 4 |
| 7 | Midwest | Community teaching | 300–600 | 1 |  |  |  |  |  | 5 |
| 8 | West | Academic medical center | 300–600 | 1 |  |  |  |  |  | 4 |
| 9 | South | Academic medical center | >600 | 4 |  |  |  |  |  | 4 |
| 10 | Midwest | Academic medical center | 300–600 | 3 |  |  |  |  |  | 3 |
| 11 | Midwest | Academic medical center | >600 | 9 |  |  |  |  |  | 2 |
The average rate of 30‐day rehospitalization among BOOST units was 14.7% in the preimplementation period and 12.7% during the postimplementation period (P=0.010) (Figure 1). Rehospitalization rates for matched control units were 14.0% in the preintervention period and 14.1% in the postintervention period (P=0.831). The mean absolute reduction in readmission rates over the 1‐year study period in BOOST units compared to control units was 2.0%, or a relative reduction of 13.6% (P=0.054 for signed rank test comparing differences in readmission rate reduction in BOOST units compared to site‐matched control units). Length of stay in BOOST and control units decreased an average of 0.5 days and 0.3 days, respectively. There was no difference in length of stay change between BOOST units and control units (P=0.966).
DISCUSSION
As hospitals strive to reduce their readmission rates to avoid Centers for Medicare and Medicaid Services penalties, Project BOOST may be a viable QI approach to achieve their goals. This initial evaluation of participation in Project BOOST by 11 hospitals of varying sizes across the United States showed an associated reduction in rehospitalization rates (absolute=2.0% and relative=13.6%, P=0.054). We did not find any significant change in length of stay among these hospitals implementing BOOST tools.
The tools provided to participating hospitals were developed from evidence found in peer‐reviewed literature established through experimental methods in well‐controlled academic settings. Further tool development was informed by recommendations of an advisory board consisting of expert representatives and advocates involved in the hospital discharge process: patients, caregivers, physicians, nurses, case managers, social workers, insurers, and regulatory and research agencies.[19] The toolkit components address multiple aspects of hospital discharge and follow‐up with the goal of improving health by optimizing the safety of care transitions. Our observation that readmission rates improved in a diverse hospital sample, including the nonacademic and community hospitals engaged in Project BOOST, suggests that the benefits seen in the existing research literature, developed in distinctly academic settings, can be replicated in diverse acute‐care settings.
The effect size observed in our study was modest but consistent with several studies identified in a recent review of trials measuring interventions to reduce rehospitalization, where 7 of 16 studies showing a significant improvement registered change in the 0% to 5% absolute range.[12] Impact of this project may have been tempered by the need to translate external QI content to the local setting. Additionally, in contrast to experimental studies that are limited in scope and timing and often scaled to a research budget, BOOST sites were encouraged to implement Project BOOST in the clinical setting even if no new funds were available to support the effort.[12]
The recruitment of a national sample of both academic and nonacademic hospital participants imposed several limitations on our study and analysis. We recognize that intervention units selected by hospitals may have had unmeasured unit and patient characteristics that facilitated successful change and contributed to the observed improvements. However, because external pressure to reduce readmission is present across all hospitals independent of the BOOST intervention, we felt site‐matched controls were essential to understanding effects attributable to the BOOST tools. Differences between units would be expected to be stable over the course of the study period, and comparison of outcome differences between 2 different time periods would be reasonable. Additionally, we could not collect data on readmissions to other hospitals. Theoretically, patients discharged from BOOST units might be more likely to have been rehospitalized elsewhere, but the fraction of rehospitalizations occurring at alternate facilities would also be expected to be similar on the matched control unit.
We report findings from a voluntary cohort willing and capable of designating a comparison clinical unit and contributing the requested outcome data. Pilot sites that did not report outcomes were not analyzed, but comparison of hospital characteristics shows that participating hospitals were more likely to be large, urban, academic medical centers. Although barriers to data submission were not formally analyzed, reports from nonparticipating sites describe data submission limited by local implementation design (no geographic rollout or simultaneous rollout on all appropriate clinical units), site‐specific inability to generate unit‐level outcome statistics, and competing organizational priorities for data analyst time (electronic medical record deployment, alternative QI initiatives). The external validity of our results may be limited to organizations capable of analytics at the level of the individual clinical unit as well as those with sufficient QI resources to support reporting to a national database in the absence of a payer mandate. It is possible that additional financial support for on‐site data collection would have bolstered participation, making the example of participation rates we present potentially informative to organizations hoping to widely disseminate a QI agenda.
Nonetheless, the effectiveness demonstrated in the 11 sites that did participate is encouraging, and ongoing collaboration with subsequent BOOST cohorts has been designed to further facilitate data collection. Among the insights gained from this pilot experience, and incorporated into ongoing BOOST cohorts, is the importance of intensive mentor engagement to foster accountability among participant sites, assist with implementation troubleshooting, and offer expertise that is often particularly effective in gaining local support. We now encourage sites to have 2 mentor site visits to further these roles and more frequent conference calls. Further research to understand the marginal benefit of the mentored implementation approach is ongoing.
The limitations in data submission we experienced with the pilot cohort likely reflect resource constraints not uncommon at many hospitals. Increasing pressure placed on hospitals as a result of the Readmission Reduction Program within the Affordable Care Act as well as increasing interest from private and Medicaid payors to incorporate similar readmission‐based penalties provide encouragement for hospitals to enhance their data and analytic skills. National incentives for implementation of electronic health records (EHR) should also foster such capabilities, though we often saw EHRs as a barrier to QI, especially rapid cycle trials. Fortunately, hospitals are increasingly being afforded access to comprehensive claims databases to assist in tracking readmission rates to other facilities, and these data are becoming available in a more timely fashion. This more robust data collection, facilitated by private payors, state QI organizations, and state hospital associations, will support additional analytic methods such as multivariate regression models and interrupted time series designs to appreciate the experience of current BOOST participants.
Additional research is needed to understand the role of organizational context in the effectiveness of Project BOOST. Differences in rates of tool implementation and changes in clinical outcomes are likely dependent on local implementation context at the level of the healthcare organization and individual clinical unit.[20] Progress reports from site mentors and previously described experiences of QI implementation indicate that successful implementation of a multidimensional bundle of interventions may have reflected a higher level of institutional support, more robust team engagement in the work of reducing readmissions, increased clinical staff support for change, the presence of an effective project champion, or a key facilitating role of external mentorship.[21, 22] Ongoing data collection will continue to measure the sustainability of tool use and observed outcome changes to inform strategies to maintain gains associated with implementation. The role of mentored implementation in facilitating gains also requires further study.
Increasing attention to the problem of avoidable rehospitalization is driving hospitals, insurers, and policy makers to pursue QI efforts that favorably impact readmission rates. Our analysis of the BOOST intervention suggests that modest gains can be achieved following evidence‐based hospital process change facilitated by a mentored implementation model. However, realization of the goal of a 20% reduction in rehospitalization proposed by the Center for Medicare and Medicaid Services' Partnership for Patients initiative may be difficult to achieve on a national scale,[23] especially if efforts focus on just the hospital.
Acknowledgments
The authors acknowledge the contributions of Amanda Creden, BA (data collection), Julia Lee (biostatistical support), and the support of Amy Berman, BS, RN, Senior Program Officer at The John A. Hartford Foundation.
Disclosures
Project BOOST was funded by a grant from The John A. Hartford Foundation. Project BOOST is administered by the Society of Hospital Medicine (SHM). The development of the Project BOOST toolkit, recruitment of sites for this study, mentorship of the pilot cohort, project evaluation planning, and collection of pilot data were funded by a grant from The John A. Hartford Foundation. Additional funding for continued data collection and analysis was funded by the SHM through funds from hospitals to participate in Project BOOST, specifically with funding support for Dr. Hansen. Dr. Williams has received funding to serve as Principal Investigator for Project BOOST. Since the time of initial cohort participation, approximately 125 additional hospitals have participated in the mentored implementation of Project BOOST. This participation was funded through a combination of site‐based tuition, third‐party payor support from private insurers, foundations, and federal funding through the Center for Medicare and Medicaid Innovation Partnership for Patients program. Drs. Greenwald, Hansen, and Williams are Project BOOST mentors for current Project BOOST sites and receive financial support through the SHM for this work. Dr. Howell has previously received funding as a Project BOOST mentor. Ms. Budnitz is the BOOST Project Director and is Chief Strategy and Development Officer for the SHM. Dr. Maynard is the Senior Vice President of the SHM's Center for Hospital Innovation and Improvement.
References
1. Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428.
2. United States Congress, House Committee on Education and Labor, Committee on Ways and Means, Committee on Energy and Commerce. Compilation of Patient Protection and Affordable Care Act: as amended through November 1, 2010, including Patient Protection and Affordable Care Act health‐related portions of the Health Care and Education Reconciliation Act of 2010. Washington, DC: US Government Printing Office; 2010.
3. Cost estimate for the amendment in the nature of a substitute to H.R. 3590, as proposed in the Senate on November 18, 2009. Washington, DC: Congressional Budget Office; 2009.
4. Partnership for Patients, Center for Medicare and Medicaid Innovation. Available at: http://www.innovations.cms.gov/initiatives/Partnership‐for‐Patients/index.html. Accessed December 12, 2012.
5. Providers have failed to work for continuity. Hospitals. 1979;53(10):79.
6. Executing high‐quality care transitions: a call to do it right. J Hosp Med. 2007;2(5):287–290.
7. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med. 2003;138(3):161–167.
8. Adverse events among medical patients after discharge from hospital. CMAJ. 2004;170(3):345–349.
9. Making inpatient medication reconciliation patient centered, clinically relevant and implementable: a consensus statement on key principles and necessary first steps. J Hosp Med. 2010;5(8):477–485.
10. Tying up loose ends: discharging patients with unresolved medical issues. Arch Intern Med. 2007;167(12):1305.
11. Deficits in communication and information transfer between hospital‐based and primary care physicians. JAMA. 2007;297(8):831–841.
12. Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528.
13. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med. 2009;150(3):178.
14. Advancing the science of patient safety. Ann Intern Med. 2011;154(10):693–696.
15. From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003;362(9391):1225–1230.
16. Quality improvement projects targeting health care‐associated infections: comparing virtual collaborative and toolkit approaches. J Hosp Med. 2011;6(5):271–278.
17. Publication guidelines for improvement studies in health care: evolution of the SQUIRE project. Ann Intern Med. 2008;149(9):670–676.
18. Risk stratification and therapeutic decision making in acute coronary syndromes. JAMA. 2000;284(7):876–878.
19. Are diagnosis specific outcome indicators based on administrative data useful in assessing quality of hospital care? Qual Saf Health Care. 2004;13(1):32.
20. What distinguishes top‐performing hospitals in acute myocardial infarction mortality rates? Ann Intern Med. 2011;154(6):384–390.
21. The Model for Understanding Success in Quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Qual Saf. 2012;21(1):13–20.
22. Evidence‐based quality improvement: the state of the science. Health Aff (Millwood). 2005;24(1):138–150.
23. Center for Medicare and Medicaid Innovation. Partnership for patients. Available at: http://www.innovations.cms.gov/initiatives/Partnership‐for‐Patients/index.html. Accessed April 2, 2012.
Enactment of federal legislation imposing hospital reimbursement penalties for excess rates of rehospitalizations among Medicare fee for service beneficiaries markedly increased interest in hospital quality improvement (QI) efforts to reduce the observed 30‐day rehospitalization of 19.6% in this elderly population.[1, 2] The Congressional Budget Office estimated that reimbursement penalties to hospitals for high readmission rates are expected to save the Medicare program approximately $7 billion between 2010 and 2019.[3] These penalties are complemented by resources from the Center for Medicare and Medicaid Innovation aiming to reduce hospital readmissions by 20% by the end of 2013 through the Partnership for Patients campaign.[4] Although potential financial penalties and provision of resources for QI intensified efforts to enhance the quality of the hospital discharge transition, patient safety risks associated with hospital discharge are well documented.[5, 6] Approximately 20% of patients discharged from the hospital may suffer adverse events,[7, 8] of which up to three‐quarters (72%) are medication related,[9] and over one‐third of required follow‐up testing after discharge is not completed.[10] Such findings indicate opportunities for improvement in the discharge process.[11]
Numerous publications describe studies aiming to improve the hospital discharge process and mitigate these hazards, though a systematic review of interventions to reduce 30‐day rehospitalization indicated that the existing evidence base for the effectiveness of transition interventions demonstrates irregular effectiveness and limitations to generalizability.[12] Most studies showing effectiveness are confined to single academic medical centers. Existing evidence supports multifaceted interventions implemented in both the pre‐ and postdischarge periods and focused on risk assessment and tailored, patient‐centered application of interventions to mitigate risk. For example Project RED (Re‐Engineered Discharge) applied a bundled intervention consisting of intensified patient education and discharge planning, improved medication reconciliation and discharge instructions, and longitudinal patient contact with follow‐up phone calls and a dedicated discharge advocate.[13] However, the mean age of patients participating in the study was 50 years, and it excluded patients admitted from or discharged to skilled nursing facilities, making generalizability to the geriatric population uncertain.
An integral aspect of QI projects is the contribution of local context to translation of best practices to disparate settings.[14, 15, 16] Most available reports of successful interventions to reduce rehospitalization have not fully described the specifics of either the intervention context or design. Moreover, the available evidence base for common interventions to reduce rehospitalization was developed in the academic setting. Validation of single academic center studies in a broader healthcare context is necessary.
Project BOOST (Better Outcomes for Older adults through Safe Transitions) recruited a diverse national cohort of both academic and nonacademic hospitals to participate in a QI effort to implement best practices for hospital discharge care transitions using a national collaborative approach facilitated by external expert mentorship. This study aimed to determine the effectiveness of BOOST in lowering hospital readmission rates and impact on length of stay.
METHODS
The study of Project BOOST was undertaken in accordance with the SQUIRE (Standards for Quality Improvement Reporting Excellence) Guidelines.[17]
Participants
The unit of observation for the prospective cohort study was the clinical acute‐care unit within hospitals. Sites were instructed to designate a pilot unit for the intervention that cared for medical or mixed medicalsurgical patient populations. Sites were also asked to provide outcome data for a clinically and organizationally similar non‐BOOST unit to provide a site‐matched control. Control units were matched by local site leadership based on comparable patient demographics, clinical mix, and extent of housestaff presence. An initial cohort of 6 hospitals in 2008 was followed by a second cohort of 24 hospitals initiated in 2009. All hospitals were invited to participate in the national effectiveness analysis, which required submission of readmission and length of stay data for both a BOOST intervention unit and a clinically matched control unit.
Description of the Intervention
The BOOST intervention consisted of 2 major sequential processes, planning and implementation, both facilitated by external site mentorsphysicians expert in QI and care transitionsfor a period of 12 months. Extensive background on the planning and implementation components is available at
Characteristic | Enrollment Sites, n=30 | Sites Reporting Outcome Data, n=11 | Sites Not Reporting Outcome Data, n=19 | P Value for Comparison of Outcome Data Sites Compared to Others
---|---|---|---|---
Region, n (%) | | | | 0.194
Northeast | 8 (26.7) | 2 (18.2) | 6 (31.6) |
West | 7 (23.4) | 2 (18.2) | 5 (26.3) |
South | 7 (23.4) | 3 (27.3) | 4 (21.1) |
Midwest | 8 (26.7) | 4 (36.4) | 4 (21.1) |
Urban location, n (%) | 25 (83.3) | 11 (100) | 15 (78.9) | 0.035
Teaching status, n (%) | | | | 0.036
Academic medical center | 10 (33.4) | 5 (45.5) | 5 (26.3) |
Community teaching | 8 (26.7) | 3 (27.3) | 5 (26.3) |
Community nonteaching | 12 (40.0) | 3 (27.3) | 9 (47.4) |
No. of beds, mean (SD) | 426.6 (220.6) | 559.2 (187.8) | 349.8 (204.5) | 0.003
Number of tools implemented, n (%) | | | | 0.194
0 | 2 (6.7) | 0 | 2 (10.5) |
1 | 2 (6.7) | 0 | 2 (10.5) |
2 | 4 (13.3) | 2 (18.2) | 2 (10.5) |
3 | 12 (40.0) | 3 (27.3) | 8 (42.1) |
4 | 9 (30.0) | 5 (45.5) | 4 (21.1) |
5 | 1 (3.3) | 1 (9.1) | 1 (5.3) |
Mentor engagement with sites consisted of a 2‐day kickoff training on the BOOST tools, where site teams met their mentor and initiated development of structured action plans, followed by 5 to 6 scheduled phone calls in the subsequent 12 months. During these conference calls, mentors gauged progress and sought to help troubleshoot barriers to implementation. Some mentors also conducted a site visit with participant sites. Project BOOST provided sites with several collaborative activities including online webinars and an online listserv. Sites also received a quarterly newsletter.
Outcome Measures
The primary outcome was 30‐day rehospitalization defined as same hospital, all‐cause rehospitalization. Home discharges as well as discharges or transfers to other healthcare facilities were included in the discharge calculation. Elective or scheduled rehospitalizations as well as multiple rehospitalizations in the same 30‐day window were considered individual rehospitalization events. Rehospitalization was reported as a ratio of 30‐day rehospitalizations divided by live discharges in a calendar month. Length of stay was reported as the mean length of stay among live discharges in a calendar month. Outcomes were calculated at the participant site and then uploaded as overall monthly unit outcomes to a Web‐based research database.
To account for seasonal trends as well as marked variation in month‐to‐month rehospitalization rates identified in longitudinal data, we elected to compare 3‐month year‐over‐year averages to determine relative changes in readmission rates from the period prior to BOOST implementation to the period after BOOST implementation. We calculated averages for rehospitalization and length of stay in the 3‐month period preceding the sites' first reported month of front‐line implementation and in the corresponding 3‐month period in the subsequent calendar year. For example, if a site reported implementing its first tool in April 2010, the average readmission rate in the unit for January 2011 through March 2011 was subtracted from the average readmission rate for January 2010 through March 2010.
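As a minimal sketch of the arithmetic described above, the following computes the monthly rehospitalization ratio and the year-over-year 3-month comparison. The monthly counts are invented for illustration (they are not study data), and a first tool-use month of April 2010 is assumed, matching the worked example in the text.

```python
# Hypothetical monthly unit data: (30-day rehospitalizations, live discharges).
# First tool use is assumed to be April 2010, so the Jan-Mar 2010 average is
# compared with the Jan-Mar 2011 average, per the example in the text.
pre = [(31, 212), (28, 198), (35, 220)]    # Jan, Feb, Mar 2010
post = [(26, 205), (24, 190), (29, 215)]   # Jan, Feb, Mar 2011

def mean_rate(months):
    # Average of the monthly ratios (rehospitalizations / live discharges),
    # mirroring how sites reported a rate for each calendar month.
    return sum(r / d for r, d in months) / len(months)

# Year-over-year change; a negative value indicates fewer readmissions.
change = mean_rate(post) - mean_rate(pre)
print(f"pre {mean_rate(pre):.1%}, post {mean_rate(post):.1%}, change {change:+.1%}")
# → pre 14.9%, post 12.9%, change -2.0%
```

Averaging the same 3 calendar months a year apart, rather than adjacent quarters, is what shields the comparison from the seasonal swings noted above.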
Sites were surveyed regarding tool implementation rates 6 months and 24 months after the 2009 kickoff training session. Surveys were electronically completed by site leaders in consultation with site team members. The survey identified new tool implementation as well as modification of existing care processes using the BOOST tools (admission risk assessment, discharge readiness checklist, teach back use, mandate regarding discharge summary completion, follow‐up phone calls to >80% of discharges). Use of a sixth tool, creation of individualized written discharge instructions, was not measured. We credited sites with tool implementation if they reported either de novo tool use or alteration of previous care processes influenced by BOOST tools.
Clinical outcome reporting was voluntary, and sites did not receive compensation and were not subject to penalty for the degree of implementation or outcome reporting. No patient‐level information was collected for the analysis, which was approved by the Northwestern University institutional review board.
Data Sources and Methods
Readmission and length of stay data, including the unit-level readmission rate, were collected from administrative sources at each hospital and submitted using templated spreadsheet software between December 2008 and June 2010, after which data were loaded directly to a Web-based data-tracking platform. Sites were asked to load data as they became available. Sites were asked to report the number of study unit discharges as well as the number of those discharges readmitted within 30 days; however, reporting of the number of patient discharges was inconsistent across sites. Serial outreach consisting of monthly phone calls or email messaging to site leaders was conducted throughout 2011 to increase site participation in the project analysis.
Implementation date information was collected from 2 sources. The first was through online surveys distributed in November 2009 and April 2011. The second was through fields in the Web‐based data tracking platform to which sites uploaded data. In cases where disagreement was found between these 2 sources, the site leader was contacted for clarification.
Practice setting (community teaching, community nonteaching, academic medical center) was determined by site‐leader report within the Web‐based data tracking platform. Data for hospital characteristics (number of licensed beds and geographic region) were obtained from the American Hospital Association's Annual Survey of Hospitals.[18] Hospital region was characterized as West, South, Midwest, or Northeast.
Analysis
The null hypothesis was that no pre-post difference existed in readmission rates within BOOST units, and no difference existed in the pre-post change in readmission rates in BOOST units when compared to site-matched control units. The Wilcoxon rank sum test was used to test whether observed changes described above were significantly different from 0, supporting rejection of the null hypotheses. We performed similar tests to determine the significance of observed changes in length of stay. We performed our analysis using SAS 9.3 (SAS Institute Inc., Cary, NC).
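The Results section reports a signed rank test on the unit-level differences; as a sketch of that statistic, the following ranks eleven pre-post changes by magnitude and sums the ranks by sign. The differences are invented for illustration, not the study data, and in practice a statistics package (here, SAS) supplies the exact P value.

```python
# Hypothetical pre-post changes in readmission rate for 11 units (invented
# values, not study data), used to illustrate the Wilcoxon signed-rank statistic.
diffs = [-0.031, -0.024, -0.018, -0.015, -0.012, -0.009,
         -0.007, -0.004, 0.002, 0.005, 0.011]

# Rank the absolute differences from smallest (rank 1) to largest, then sum
# the ranks separately for positive and negative changes.
order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
rank = {i: r + 1 for r, i in enumerate(order)}
w_plus = sum(rank[i] for i, d in enumerate(diffs) if d > 0)    # = 10
w_minus = sum(rank[i] for i, d in enumerate(diffs) if d < 0)   # = 56

# Under the null hypothesis of no shift, w_plus and w_minus should each be
# near n(n+1)/4; a lopsided split (as here) suggests a real pre-post change.
```

Because the test uses only ranks of the differences, it tolerates the skewed, non-normal month-to-month variation in readmission rates noted earlier.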
RESULTS
Eleven hospitals provided rehospitalization and length-of-stay outcome data for both a BOOST and control unit for the pre- and postimplementation periods. Compared to the 19 sites that did not participate in the analysis, these 11 sites were significantly larger (559±188 beds vs 350±205 beds, P=0.003), more likely to be located in an urban area (100.0% [n=11] vs 78.9% [n=15], P=0.035), and more likely to be academic medical centers (45.5% [n=5] vs 26.3% [n=5], P=0.036) (Table 1).
The mean number of tools implemented by sites participating in the analysis was 3.5±0.9. All sites implemented at least 2 tools. The duration between attendance at the BOOST kickoff event and first tool implementation ranged from −3 months (first tool implemented prior to attending the kickoff) to 9 months (mean duration, 3.3±4.3 months) (Table 2).
Hospital | Region | Hospital Type | No. Licensed Beds | Kickoff to Implementation, mo | Risk Assessment | Discharge Checklist | Teach Back | Discharge Summary Completion | Follow-up Phone Call | Total
---|---|---|---|---|---|---|---|---|---|---
1 | Midwest | Community teaching | <300 | 8 | | | | | | 3
2 | West | Community teaching | >600 | 0 | | | | | | 4
3 | Northeast | Academic medical center | >600 | 2 | | | | | | 4
4 | Northeast | Community nonteaching | <300 | 9 | | | | | | 2
5 | South | Community nonteaching | >600 | 6 | | | | | | 3
6 | South | Community nonteaching | >600 | 3 | | | | | | 4
7 | Midwest | Community teaching | 300-600 | 1 | | | | | | 5
8 | West | Academic medical center | 300-600 | 1 | | | | | | 4
9 | South | Academic medical center | >600 | 4 | | | | | | 4
10 | Midwest | Academic medical center | 300-600 | 3 | | | | | | 3
11 | Midwest | Academic medical center | >600 | 9 | | | | | | 2
The average rate of 30‐day rehospitalization among BOOST units was 14.7% in the preimplementation period and 12.7% during the postimplementation period (P=0.010) (Figure 1). Rehospitalization rates for matched control units were 14.0% in the preintervention period and 14.1% in the postintervention period (P=0.831). The mean absolute reduction in readmission rates over the 1‐year study period in BOOST units compared to control units was 2.0%, or a relative reduction of 13.6% (P=0.054 for signed rank test comparing differences in readmission rate reduction in BOOST units compared to site‐matched control units). Length of stay in BOOST and control units decreased an average of 0.5 days and 0.3 days, respectively. There was no difference in length of stay change between BOOST units and control units (P=0.966).
DISCUSSION
As hospitals strive to reduce their readmission rates to avoid Centers for Medicare and Medicaid Services penalties, Project BOOST may be a viable QI approach to achieve their goals. This initial evaluation of participation in Project BOOST by 11 hospitals of varying sizes across the United States showed an associated reduction in rehospitalization rates (absolute=2.0% and relative=13.6%, P=0.054). We did not find any significant change in length of stay among these hospitals implementing BOOST tools.
The tools provided to participating hospitals were developed from evidence found in peer-reviewed literature established through experimental methods in well-controlled academic settings. Further tool development was informed by recommendations of an advisory board consisting of expert representatives and advocates involved in the hospital discharge process: patients, caregivers, physicians, nurses, case managers, social workers, insurers, and regulatory and research agencies.[19] The toolkit components address multiple aspects of hospital discharge and follow-up with the goal of improving health by optimizing the safety of care transitions. Our observation that readmission rates improved in a diverse hospital sample, including nonacademic and community hospitals engaged in Project BOOST, suggests that the benefits seen in the existing research literature, developed in distinctly academic settings, can be replicated in more diverse acute-care settings.
The effect size observed in our study was modest but consistent with several studies identified in a recent review of trials measuring interventions to reduce rehospitalization, in which 7 of 16 studies showing a significant improvement registered change in the 0% to 5% absolute range.[12] The impact of this project may have been tempered by the need to translate external QI content to the local setting. Additionally, in contrast to experimental studies that are limited in scope and timing and often scaled to a research budget, BOOST sites were encouraged to implement Project BOOST in the clinical setting even if no new funds were available to support the effort.[12]
The recruitment of a national sample of both academic and nonacademic hospital participants imposed several limitations on our study and analysis. We recognize that intervention units selected by hospitals may have had unmeasured unit and patient characteristics that facilitated successful change and contributed to the observed improvements. However, because external pressure to reduce readmission is present across all hospitals independent of the BOOST intervention, we felt site‐matched controls were essential to understanding effects attributable to the BOOST tools. Differences between units would be expected to be stable over the course of the study period, and comparison of outcome differences between 2 different time periods would be reasonable. Additionally, we could not collect data on readmissions to other hospitals. Theoretically, patients discharged from BOOST units might be more likely to have been rehospitalized elsewhere, but the fraction of rehospitalizations occurring at alternate facilities would also be expected to be similar on the matched control unit.
We report findings from a voluntary cohort willing and capable of designating a comparison clinical unit and contributing the requested data outcomes. Pilot sites that did not report outcomes were not analyzed, but comparison of hospital characteristics shows that participating hospitals were more likely to be large, urban, academic medical centers. Although barriers to data submission were not formally analyzed, reports from nonparticipating sites describe data submission limited by local implementation design (no geographic rollout or simultaneous rollout on all appropriate clinical units), site specific inability to generate unit level outcome statistics, and competing organizational priorities for data analyst time (electronic medical record deployment, alternative QI initiatives). The external validity of our results may be limited to organizations capable of analytics at the level of the individual clinical unit as well as those with sufficient QI resources to support reporting to a national database in the absence of a payer mandate. It is possible that additional financial support for on‐site data collection would have bolstered participation, making the example of participation rates we present potentially informative to organizations hoping to widely disseminate a QI agenda.
Nonetheless, the effectiveness demonstrated in the 11 sites that did participate is encouraging, and ongoing collaboration with subsequent BOOST cohorts has been designed to further facilitate data collection. Among the insights gained from this pilot experience, and incorporated into ongoing BOOST cohorts, is the importance of intensive mentor engagement to foster accountability among participant sites, assist with implementation troubleshooting, and offer expertise that is often particularly effective in gaining local support. We now encourage sites to have 2 mentor site visits to further these roles and more frequent conference calls. Further research to understand the marginal benefit of the mentored implementation approach is ongoing.
The limitations in data submission we experienced with the pilot cohort likely reflect resource constraints not uncommon at many hospitals. Increasing pressure placed on hospitals as a result of the Readmission Reduction Program within the Affordable Care Act as well as increasing interest from private and Medicaid payors to incorporate similar readmission‐based penalties provide encouragement for hospitals to enhance their data and analytic skills. National incentives for implementation of electronic health records (EHR) should also foster such capabilities, though we often saw EHRs as a barrier to QI, especially rapid cycle trials. Fortunately, hospitals are increasingly being afforded access to comprehensive claims databases to assist in tracking readmission rates to other facilities, and these data are becoming available in a more timely fashion. This more robust data collection, facilitated by private payors, state QI organizations, and state hospital associations, will support additional analytic methods such as multivariate regression models and interrupted time series designs to appreciate the experience of current BOOST participants.
Additional research is needed to understand the role of organizational context in the effectiveness of Project BOOST. Differences in rates of tool implementation and changes in clinical outcomes are likely dependent on local implementation context at the level of the healthcare organization and individual clinical unit.[20] Progress reports from site mentors and previously described experiences of QI implementation indicate that successful implementation of a multidimensional bundle of interventions may have reflected a higher level of institutional support, more robust team engagement in the work of reducing readmissions, increased clinical staff support for change, the presence of an effective project champion, or a key facilitating role of external mentorship.[21, 22] Ongoing data collection will continue to measure the sustainability of tool use and observed outcome changes to inform strategies to maintain gains associated with implementation. The role of mentored implementation in facilitating gains also requires further study.
Increasing attention to the problem of avoidable rehospitalization is driving hospitals, insurers, and policy makers to pursue QI efforts that favorably impact readmission rates. Our analysis of the BOOST intervention suggests that modest gains can be achieved following evidence‐based hospital process change facilitated by a mentored implementation model. However, realization of the goal of a 20% reduction in rehospitalization proposed by the Center for Medicare and Medicaid Services' Partnership for Patients initiative may be difficult to achieve on a national scale,[23] especially if efforts focus on just the hospital.
Acknowledgments
The authors acknowledge the contributions of Amanda Creden, BA (data collection), Julia Lee (biostatistical support), and the support of Amy Berman, BS, RN, Senior Program Officer at The John A. Hartford Foundation.
Disclosures
Project BOOST was funded by a grant from The John A. Hartford Foundation. Project BOOST is administered by the Society of Hospital Medicine (SHM). The development of the Project BOOST toolkit, recruitment of sites for this study, mentorship of the pilot cohort, project evaluation planning, and collection of pilot data were funded by a grant from The John A. Hartford Foundation. Additional funding for continued data collection and analysis was funded by the SHM through funds from hospitals to participate in Project BOOST, specifically with funding support for Dr. Hansen. Dr. Williams has received funding to serve as Principal Investigator for Project BOOST. Since the time of initial cohort participation, approximately 125 additional hospitals have participated in the mentored implementation of Project BOOST. This participation was funded through a combination of site-based tuition, third-party payor support from private insurers, foundations, and federal funding through the Center for Medicare and Medicaid Innovation Partnership for Patients program. Drs. Greenwald, Hansen, and Williams are Project BOOST mentors for current Project BOOST sites and receive financial support through the SHM for this work. Dr. Howell has previously received funding as a Project BOOST mentor. Ms. Budnitz is the BOOST Project Director and is Chief Strategy and Development Officer for the SHM. Dr. Maynard is the Senior Vice President of the SHM's Center for Hospital Innovation and Improvement.
Enactment of federal legislation imposing hospital reimbursement penalties for excess rates of rehospitalizations among Medicare fee for service beneficiaries markedly increased interest in hospital quality improvement (QI) efforts to reduce the observed 30‐day rehospitalization of 19.6% in this elderly population.[1, 2] The Congressional Budget Office estimated that reimbursement penalties to hospitals for high readmission rates are expected to save the Medicare program approximately $7 billion between 2010 and 2019.[3] These penalties are complemented by resources from the Center for Medicare and Medicaid Innovation aiming to reduce hospital readmissions by 20% by the end of 2013 through the Partnership for Patients campaign.[4] Although potential financial penalties and provision of resources for QI intensified efforts to enhance the quality of the hospital discharge transition, patient safety risks associated with hospital discharge are well documented.[5, 6] Approximately 20% of patients discharged from the hospital may suffer adverse events,[7, 8] of which up to three‐quarters (72%) are medication related,[9] and over one‐third of required follow‐up testing after discharge is not completed.[10] Such findings indicate opportunities for improvement in the discharge process.[11]
Numerous publications describe studies aiming to improve the hospital discharge process and mitigate these hazards, though a systematic review of interventions to reduce 30‐day rehospitalization indicated that the existing evidence base for the effectiveness of transition interventions demonstrates irregular effectiveness and limitations to generalizability.[12] Most studies showing effectiveness are confined to single academic medical centers. Existing evidence supports multifaceted interventions implemented in both the pre‐ and postdischarge periods and focused on risk assessment and tailored, patient‐centered application of interventions to mitigate risk. For example Project RED (Re‐Engineered Discharge) applied a bundled intervention consisting of intensified patient education and discharge planning, improved medication reconciliation and discharge instructions, and longitudinal patient contact with follow‐up phone calls and a dedicated discharge advocate.[13] However, the mean age of patients participating in the study was 50 years, and it excluded patients admitted from or discharged to skilled nursing facilities, making generalizability to the geriatric population uncertain.
An integral aspect of QI projects is the contribution of local context to translation of best practices to disparate settings.[14, 15, 16] Most available reports of successful interventions to reduce rehospitalization have not fully described the specifics of either the intervention context or design. Moreover, the available evidence base for common interventions to reduce rehospitalization was developed in the academic setting. Validation of single academic center studies in a broader healthcare context is necessary.
Project BOOST (Better Outcomes for Older adults through Safe Transitions) recruited a diverse national cohort of both academic and nonacademic hospitals to participate in a QI effort to implement best practices for hospital discharge care transitions using a national collaborative approach facilitated by external expert mentorship. This study aimed to determine the effectiveness of BOOST in lowering hospital readmission rates and impact on length of stay.
METHODS
The study of Project BOOST was undertaken in accordance with the SQUIRE (Standards for Quality Improvement Reporting Excellence) Guidelines.[17]
Participants
The unit of observation for the prospective cohort study was the clinical acute‐care unit within hospitals. Sites were instructed to designate a pilot unit for the intervention that cared for medical or mixed medicalsurgical patient populations. Sites were also asked to provide outcome data for a clinically and organizationally similar non‐BOOST unit to provide a site‐matched control. Control units were matched by local site leadership based on comparable patient demographics, clinical mix, and extent of housestaff presence. An initial cohort of 6 hospitals in 2008 was followed by a second cohort of 24 hospitals initiated in 2009. All hospitals were invited to participate in the national effectiveness analysis, which required submission of readmission and length of stay data for both a BOOST intervention unit and a clinically matched control unit.
Description of the Intervention
The BOOST intervention consisted of 2 major sequential processes, planning and implementation, both facilitated by external site mentorsphysicians expert in QI and care transitionsfor a period of 12 months. Extensive background on the planning and implementation components is available at
Enrollment Sites, n=30 | Sites Reporting Outcome Data, n=11 | Sites Not Reporting Outcome Data, n=19 | P Value for Comparison of Outcome Data Sites Compared to Othersa | |
---|---|---|---|---|
| ||||
Region, n (%) | 0.194 | |||
Northeast | 8 (26.7) | 2 (18.2) | 6 (31.6) | |
West | 7 (23.4) | 2 (18.2) | 5 (26.3) | |
South | 7 (23.4) | 3 (27.3) | 4 (21.1) | |
Midwest | 8 (26.7) | 4 (36.4) | 4 (21.1) | |
Urban location, n (%) | 25 (83.3) | 11 (100) | 15 (78.9) | 0.035 |
Teaching status, n (%) | 0.036 | |||
Academic medical center | 10 (33.4) | 5 (45.5) | 5 (26.3) | |
Community teaching | 8 (26.7) | 3 (27.3) | 5 (26.3) | |
Community nonteaching | 12 (40) | 3 (27.3) | 9 (47.4) | |
Beds number, mean (SD) | 426.6 (220.6) | 559.2 (187.8) | 349.79 (204.48) | 0.003 |
Number of tools implemented, n (%) | 0.194 | |||
0 | 2 (6.7) | 0 | 2 (10.5) | |
1 | 2 (6.7) | 0 | 2 (10.5) | |
2 | 4 (13.3) | 2 (18.2) | 2 (10.5) | |
3 | 12 (40.0) | 3 (27.3) | 8 (42.1) | |
4 | 9 (30.0) | 5 (45.5) | 4 (21.1) | |
5 | 1 (3.3) | 1 (9.1) | 1 (5.3) |
Mentor engagement with sites consisted of a 2‐day kickoff training on the BOOST tools, where site teams met their mentor and initiated development of structured action plans, followed by 5 to 6 scheduled phone calls in the subsequent 12 months. During these conference calls, mentors gauged progress and sought to help troubleshoot barriers to implementation. Some mentors also conducted a site visit with participant sites. Project BOOST provided sites with several collaborative activities including online webinars and an online listserv. Sites also received a quarterly newsletter.
Outcome Measures
The primary outcome was 30‐day rehospitalization defined as same hospital, all‐cause rehospitalization. Home discharges as well as discharges or transfers to other healthcare facilities were included in the discharge calculation. Elective or scheduled rehospitalizations as well as multiple rehospitalizations in the same 30‐day window were considered individual rehospitalization events. Rehospitalization was reported as a ratio of 30‐day rehospitalizations divided by live discharges in a calendar month. Length of stay was reported as the mean length of stay among live discharges in a calendar month. Outcomes were calculated at the participant site and then uploaded as overall monthly unit outcomes to a Web‐based research database.
To account for seasonal trends as well as marked variation in month‐to‐month rehospitalization rates identified in longitudinal data, we elected to compare 3‐month year‐over‐year averages to determine relative changes in readmission rates from the period prior to BOOST implementation to the period after BOOST implementation. We calculated averages for rehospitalization and length of stay in the 3‐month period preceding the sites' first reported month of front‐line implementation and in the corresponding 3‐month period in the subsequent calendar year. For example, if a site reported implementing its first tool in April 2010, the average readmission rate in the unit for January 2011 through March 2011 was subtracted from the average readmission rate for January 2010 through March 2010.
Sites were surveyed regarding tool implementation rates 6 months and 24 months after the 2009 kickoff training session. Surveys were electronically completed by site leaders in consultation with site team members. The survey identified new tool implementation as well as modification of existing care processes using the BOOST tools (admission risk assessment, discharge readiness checklist, teach back use, mandate regarding discharge summary completion, follow‐up phone calls to >80% of discharges). Use of a sixth tool, creation of individualized written discharge instructions, was not measured. We credited sites with tool implementation if they reported either de novo tool use or alteration of previous care processes influenced by BOOST tools.
Clinical outcome reporting was voluntary, and sites did not receive compensation and were not subject to penalty for the degree of implementation or outcome reporting. No patient‐level information was collected for the analysis, which was approved by the Northwestern University institutional review board.
Data Sources and Methods
Readmission and length of stay data, including the unit level readmission rate, as collected from administrative sources at each hospital, were collected using templated spreadsheet software between December 2008 and June 2010, after which data were loaded directly to a Web‐based data‐tracking platform. Sites were asked to load data as they became available. Sites were asked to report the number of study unit discharges as well as the number of those discharges readmitted within 30 days; however, reporting of the number of patient discharges was inconsistent across sites. Serial outreach consisting of monthly phone calls or email messaging to site leaders was conducted throughout 2011 to increase site participation in the project analysis.
Implementation date information was collected from 2 sources. The first was through online surveys distributed in November 2009 and April 2011. The second was through fields in the Web‐based data tracking platform to which sites uploaded data. In cases where disagreement was found between these 2 sources, the site leader was contacted for clarification.
Practice setting (community teaching, community nonteaching, academic medical center) was determined by site‐leader report within the Web‐based data tracking platform. Data for hospital characteristics (number of licensed beds and geographic region) were obtained from the American Hospital Association's Annual Survey of Hospitals.[18] Hospital region was characterized as West, South, Midwest, or Northeast.
Analysis
The null hypothesis was that no prepost difference existed in readmission rates within BOOST units, and no difference existed in the prepost change in readmission rates in BOOST units when compared to site‐matched control units. The Wilcoxon rank sum test was used to test whether observed changes described above were significantly different from 0, supporting rejection of the null hypotheses. We performed similar tests to determine the significance of observed changes in length of stay. We performed our analysis using SAS 9.3 (SAS Institute Inc., Cary, NC).
RESULTS
Eleven hospitals provided rehospitalization and length‐of‐stay outcome data for both a BOOST and control unit for the pre‐ and postimplementation periods. Compared to the 19 sites that did not participate in the analysis, these 11 sites were significantly larger (559188 beds vs 350205 beds, P=0.003), more likely to be located in an urban area (100.0% [n=11] vs 78.9% [n=15], P=0.035), and more likely to be academic medical centers (45.5% [n=5] vs 26.3% [n=5], P=0.036) (Table 1).
The mean number of tools implemented by sites participating in the analysis was 3.50.9. All sites implemented at least 2 tools. The duration between attendance at the BOOST kickoff event and first tool implementation ranged from 3 months (first tool implemented prior to attending the kickoff) and 9 months (mean duration, 3.34.3 months) (Table 2).
Hospital | Region | Hospital Type | No. Licensed Beds | Kickoff Implementationa | Risk Assessment | Discharge Checklist | Teach Back | Discharge Summary Completion | Follow‐up Phone Call | Total |
---|---|---|---|---|---|---|---|---|---|---|
| ||||||||||
1 | Midwest | Community teaching | <300 | 8 | 3 | |||||
2 | West | Community teaching | >600 | 0 | 4 | |||||
3 | Northeast | Academic medical center | >600 | 2 | 4 | |||||
4 | Northeast | Community nonteaching | <300 | 9 | 2 | |||||
5 | South | Community nonteaching | >600 | 6 | 3 | |||||
6 | South | Community nonteaching | >600 | 3 | 4 | |||||
7 | Midwest | Community teaching | 300600 | 1 | 5 | |||||
8 | West | Academic medical center | 300600 | 1 | 4 | |||||
9 | South | Academic medical center | >600 | 4 | 4 | |||||
10 | Midwest | Academic medical center | 300600 | 3 | 3 | |||||
11 | Midwest | Academic medical center | >600 | 9 | 2 |
The average rate of 30‐day rehospitalization among BOOST units was 14.7% in the preimplementation period and 12.7% during the postimplementation period (P=0.010) (Figure 1). Rehospitalization rates for matched control units were 14.0% in the preintervention period and 14.1% in the postintervention period (P=0.831). The mean absolute reduction in readmission rates over the 1‐year study period in BOOST units compared to control units was 2.0%, or a relative reduction of 13.6% (P=0.054 for signed rank test comparing differences in readmission rate reduction in BOOST units compared to site‐matched control units). Length of stay in BOOST and control units decreased an average of 0.5 days and 0.3 days, respectively. There was no difference in length of stay change between BOOST units and control units (P=0.966).
DISCUSSION
As hospitals strive to reduce their readmission rates to avoid Centers for Medicare and Medicaid Services penalties, Project BOOST may be a viable QI approach to achieve their goals. This initial evaluation of participation in Project BOOST by 11 hospitals of varying sizes across the United States showed an associated reduction in rehospitalization rates (absolute=2.0% and relative=13.6%, P=0.054). We did not find any significant change in length of stay among these hospitals implementing BOOST tools.
The tools provided to participating hospitals were developed from evidence found in peer‐reviewed literature established through experimental methods in well‐controlled academic settings. Further tool development was informed by recommendations of an advisory board consisting of expert representatives and advocates involved in the hospital discharge process: patients, caregivers, physicians, nurses, case managers, social workers, insurers, and regulatory and research agencies.[19] The toolkit components address multiple aspects of hospital discharge and follow‐up with the goal of improving health by optimizing the safety of care transitions. Our observation that readmission rates appeared to improve in a diverse hospital sample, including nonacademic and community hospitals engaged in Project BOOST, provides reassurance that the benefits seen in the existing research literature, developed in distinctly academic settings, can be replicated in diverse acute‐care settings.
The effect size observed in our study was modest but consistent with several studies identified in a recent review of trials measuring interventions to reduce rehospitalization, where 7 of 16 studies showing a significant improvement registered change in the 0% to 5% absolute range.[12] Impact of this project may have been tempered by the need to translate external QI content to the local setting. Additionally, in contrast to experimental studies that are limited in scope and timing and often scaled to a research budget, BOOST sites were encouraged to implement Project BOOST in the clinical setting even if no new funds were available to support the effort.[12]
The recruitment of a national sample of both academic and nonacademic hospital participants imposed several limitations on our study and analysis. We recognize that intervention units selected by hospitals may have had unmeasured unit and patient characteristics that facilitated successful change and contributed to the observed improvements. However, because external pressure to reduce readmission is present across all hospitals independent of the BOOST intervention, we felt site‐matched controls were essential to understanding effects attributable to the BOOST tools. Differences between units would be expected to be stable over the course of the study period, and comparison of outcome differences between 2 different time periods would be reasonable. Additionally, we could not collect data on readmissions to other hospitals. Theoretically, patients discharged from BOOST units might be more likely to have been rehospitalized elsewhere, but the fraction of rehospitalizations occurring at alternate facilities would also be expected to be similar on the matched control unit.
We report findings from a voluntary cohort willing and capable of designating a comparison clinical unit and contributing the requested data outcomes. Pilot sites that did not report outcomes were not analyzed, but comparison of hospital characteristics shows that participating hospitals were more likely to be large, urban, academic medical centers. Although barriers to data submission were not formally analyzed, reports from nonparticipating sites describe data submission limited by local implementation design (no geographic rollout or simultaneous rollout on all appropriate clinical units), site specific inability to generate unit level outcome statistics, and competing organizational priorities for data analyst time (electronic medical record deployment, alternative QI initiatives). The external validity of our results may be limited to organizations capable of analytics at the level of the individual clinical unit as well as those with sufficient QI resources to support reporting to a national database in the absence of a payer mandate. It is possible that additional financial support for on‐site data collection would have bolstered participation, making the example of participation rates we present potentially informative to organizations hoping to widely disseminate a QI agenda.
Nonetheless, the effectiveness demonstrated in the 11 sites that did participate is encouraging, and ongoing collaboration with subsequent BOOST cohorts has been designed to further facilitate data collection. Among the insights gained from this pilot experience, and incorporated into ongoing BOOST cohorts, is the importance of intensive mentor engagement to foster accountability among participant sites, assist with implementation troubleshooting, and offer expertise that is often particularly effective in gaining local support. We now encourage sites to host 2 mentor site visits and to hold more frequent conference calls to further these roles. Further research to understand the marginal benefit of the mentored implementation approach is ongoing.
The limitations in data submission we experienced with the pilot cohort likely reflect resource constraints not uncommon at many hospitals. Increasing pressure placed on hospitals as a result of the Readmission Reduction Program within the Affordable Care Act as well as increasing interest from private and Medicaid payors to incorporate similar readmission‐based penalties provide encouragement for hospitals to enhance their data and analytic skills. National incentives for implementation of electronic health records (EHR) should also foster such capabilities, though we often saw EHRs as a barrier to QI, especially rapid cycle trials. Fortunately, hospitals are increasingly being afforded access to comprehensive claims databases to assist in tracking readmission rates to other facilities, and these data are becoming available in a more timely fashion. This more robust data collection, facilitated by private payors, state QI organizations, and state hospital associations, will support additional analytic methods such as multivariate regression models and interrupted time series designs to appreciate the experience of current BOOST participants.
Additional research is needed to understand the role of organizational context in the effectiveness of Project BOOST. Differences in rates of tool implementation and changes in clinical outcomes are likely dependent on local implementation context at the level of the healthcare organization and individual clinical unit.[20] Progress reports from site mentors and previously described experiences of QI implementation indicate that successful implementation of a multidimensional bundle of interventions may have reflected a higher level of institutional support, more robust team engagement in the work of reducing readmissions, increased clinical staff support for change, the presence of an effective project champion, or a key facilitating role of external mentorship.[21, 22] Ongoing data collection will continue to measure the sustainability of tool use and observed outcome changes to inform strategies to maintain gains associated with implementation. The role of mentored implementation in facilitating gains also requires further study.
Increasing attention to the problem of avoidable rehospitalization is driving hospitals, insurers, and policy makers to pursue QI efforts that favorably impact readmission rates. Our analysis of the BOOST intervention suggests that modest gains can be achieved following evidence‐based hospital process change facilitated by a mentored implementation model. However, realization of the goal of a 20% reduction in rehospitalization proposed by the Center for Medicare and Medicaid Services' Partnership for Patients initiative may be difficult to achieve on a national scale,[23] especially if efforts focus on just the hospital.
Acknowledgments
The authors acknowledge the contributions of Amanda Creden, BA (data collection), Julia Lee (biostatistical support), and the support of Amy Berman, BS, RN, Senior Program Officer at The John A. Hartford Foundation.
Disclosures
Project BOOST was funded by a grant from The John A. Hartford Foundation. Project BOOST is administered by the Society of Hospital Medicine (SHM). The development of the Project BOOST toolkit, recruitment of sites for this study, mentorship of the pilot cohort, project evaluation planning, and collection of pilot data were funded by a grant from The John A. Hartford Foundation. Additional funding for continued data collection and analysis was provided by the SHM through funds from hospitals participating in Project BOOST, specifically with funding support for Dr. Hansen. Dr. Williams has received funding to serve as Principal Investigator for Project BOOST. Since the time of initial cohort participation, approximately 125 additional hospitals have participated in the mentored implementation of Project BOOST. This participation was funded through a combination of site‐based tuition, third‐party payor support from private insurers, foundations, and federal funding through the Center for Medicare and Medicaid Innovation Partnership for Patients program. Drs. Greenwald, Hansen, and Williams are Project BOOST mentors for current Project BOOST sites and receive financial support through the SHM for this work. Dr. Howell has previously received funding as a Project BOOST mentor. Ms. Budnitz is the BOOST Project Director and is Chief Strategy and Development Officer for the SHM. Dr. Maynard is the Senior Vice President of the SHM's Center for Hospital Innovation and Improvement.
References
1. Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428.
2. United States Congress, House Committee on Education and Labor, Committee on Ways and Means, Committee on Energy and Commerce. Compilation of Patient Protection and Affordable Care Act: as amended through November 1, 2010, including Patient Protection and Affordable Care Act health‐related portions of the Health Care and Education Reconciliation Act of 2010. Washington, DC: US Government Printing Office; 2010.
3. Cost estimate for the amendment in the nature of a substitute to H.R. 3590, as proposed in the Senate on November 18, 2009. Washington, DC: Congressional Budget Office; 2009.
4. Partnership for Patients, Center for Medicare and Medicaid Innovation. Available at: http://www.innovations.cms.gov/initiatives/Partnership‐for‐Patients/index.html. Accessed December 12, 2012.
5. Providers have failed to work for continuity. Hospitals. 1979;53(10):79.
6. Executing high‐quality care transitions: a call to do it right. J Hosp Med. 2007;2(5):287–290.
7. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med. 2003;138(3):161–167.
8. Adverse events among medical patients after discharge from hospital. CMAJ. 2004;170(3):345–349.
9. Making inpatient medication reconciliation patient centered, clinically relevant and implementable: a consensus statement on key principles and necessary first steps. J Hosp Med. 2010;5(8):477–485.
10. Tying up loose ends: discharging patients with unresolved medical issues. Arch Intern Med. 2007;167(12):1305.
11. Deficits in communication and information transfer between hospital‐based and primary care physicians. JAMA. 2007;297(8):831–841.
12. Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528.
13. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med. 2009;150(3):178.
14. Advancing the science of patient safety. Ann Intern Med. 2011;154(10):693–696.
15. From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003;362(9391):1225–1230.
16. Quality improvement projects targeting health care‐associated infections: comparing virtual collaborative and toolkit approaches. J Hosp Med. 2011;6(5):271–278.
17. Publication guidelines for improvement studies in health care: evolution of the SQUIRE project. Ann Intern Med. 2008;149(9):670–676.
18. Risk stratification and therapeutic decision making in acute coronary syndromes. JAMA. 2000;284(7):876–878.
19. Are diagnosis specific outcome indicators based on administrative data useful in assessing quality of hospital care? Qual Saf Health Care. 2004;13(1):32.
20. What distinguishes top‐performing hospitals in acute myocardial infarction mortality rates? Ann Intern Med. 2011;154(6):384–390.
21. The Model for Understanding Success in Quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Qual Saf. 2012;21(1):13–20.
22. Evidence‐based quality improvement: the state of the science. Health Aff (Millwood). 2005;24(1):138–150.
23. Center for Medicare and Medicaid Innovation. Partnership for patients. Available at: http://www.innovations.cms.gov/initiatives/Partnership‐for‐Patients/index.html. Accessed April 2, 2012.
Copyright © 2013 Society of Hospital Medicine
HQPS Competencies
Healthcare quality is defined as the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.1 Delivering high quality care to patients in the hospital setting is especially challenging, given the rapid pace of clinical care, the severity and multitude of patient conditions, and the interdependence of complex processes within the hospital system. Research has shown that hospitalized patients do not consistently receive recommended care2 and are at risk for experiencing preventable harm.3 In an effort to stimulate improvement, stakeholders have called for increased accountability, including enhanced transparency and differential payment based on performance. A growing number of hospital process and outcome measures are readily available to the public via the Internet.4–6 The Joint Commission, which accredits US hospitals, requires the collection of core quality measure data7 and sets the expectation that National Patient Safety Goals be met to maintain accreditation.8 Moreover, the Centers for Medicare and Medicaid Services (CMS) has developed a Value‐Based Purchasing (VBP) plan intended to adjust hospital payment based on quality measures and the occurrence of certain hospital‐acquired conditions.9, 10
Because of their clinical expertise, understanding of hospital clinical operations, leadership of multidisciplinary inpatient teams, and vested interest to improve the systems in which they work, hospitalists are perfectly positioned to collaborate with their institutions to improve the quality of care delivered to inpatients. However, many hospitalists are inadequately prepared to engage in efforts to improve quality, because medical schools and residency programs have not traditionally included or emphasized healthcare quality and patient safety in their curricula.11–13 In a survey of 389 internal medicine‐trained hospitalists, significant educational deficiencies were identified in the area of systems‐based practice.14 Specifically, the topics of quality improvement, team management, practice guideline development, health information systems management, and coordination of care between healthcare settings were listed as essential skills for hospitalist practice but underemphasized in residency training. Recognizing the gap between the needs of practicing physicians and current medical education provided in healthcare quality, professional societies have recently published position papers calling for increased training in quality, safety, and systems, both in medical school11 and residency training.15, 16
The Society of Hospital Medicine (SHM) convened a Quality Summit in December 2008 to develop strategic plans related to healthcare quality. Summit attendees felt that most hospitalists lack the formal training necessary to evaluate, implement, and sustain system changes within the hospital. In response, the SHM Hospital Quality and Patient Safety (HQPS) Committee formed a Quality Improvement Education (QIE) subcommittee in 2009 to assess the needs of hospitalists with respect to hospital quality and patient safety, and to evaluate and expand upon existing educational programs in this area. Membership of the QIE subcommittee consisted of hospitalists with extensive experience in healthcare quality and medical education. The QIE subcommittee refined and expanded upon the healthcare quality and patient safety‐related competencies initially described in the Core Competencies in Hospital Medicine.17 The purpose of this report is to describe the development, provide definitions, and make recommendations on the use of the Hospital Quality and Patient Safety (HQPS) Competencies.
Development of The Hospital Quality and Patient Safety Competencies
The multistep process used by the SHM QIE subcommittee to develop the HQPS Competencies is summarized in Figure 1. We performed an in‐depth evaluation of current educational materials and offerings, including a review of the Core Competencies in Hospital Medicine, past annual SHM Quality Improvement Pre‐Course objectives, and the content of training courses offered by other organizations.17–22 Throughout our analysis, we emphasized the identification of gaps in content relevant to hospitalists. We then used the Institute of Medicine's (IOM) 6 aims for healthcare quality as a foundation for developing the HQPS Competencies.1 Specifically, the IOM states that healthcare should be safe, effective, patient‐centered, timely, efficient, and equitable. Additionally, we reviewed and integrated elements of the Practice‐Based Learning and Improvement (PBLI) and Systems‐Based Practice (SBP) competencies as defined by the Accreditation Council for Graduate Medical Education (ACGME).23 We defined general areas of competence and specific standards for knowledge, skills, and attitudes within each area. Subcommittee members reflected on their own experience, as clinicians, educators, and leaders in healthcare quality and patient safety, to inform and refine the competency definitions and standards. Acknowledging that some hospitalists may serve as collaborators or clinical content experts, while others may serve as leaders of hospital quality initiatives, 3 levels of expertise were established: basic, intermediate, and advanced.
The QIE subcommittee presented a draft version of the HQPS Competencies to the HQPS Committee in the fall of 2009 and incorporated suggested revisions. The revised set of competencies was then reviewed by members of the Leadership and Education Committees during the winter of 2009‐2010, and additional recommendations were included in the final version now described.
Description of The Competencies
The 8 areas of competence include: Quality Measurement and Stakeholder Interests, Data Acquisition and Interpretation, Organizational Knowledge and Leadership Skills, Patient Safety Principles, Teamwork and Communication, Quality and Safety Improvement Methods, Health Information Systems, and Patient Centeredness. Three levels of competence and standards within each level and area are defined in Table 1. Standards use carefully selected action verbs to reflect educational goals for hospitalists at each level.24 The basic level represents a minimum level of competency for all practicing hospitalists. The intermediate level represents a hospitalist who is prepared to meaningfully engage and collaborate with his or her institution in quality improvement efforts. A hospitalist at this level may also lead uncomplicated improvement projects for his or her medical center and/or hospital medicine group. The advanced level represents a hospitalist prepared to lead quality improvement efforts for his or her institution and/or hospital medicine group. Many hospitalists at this level will have, or will be prepared to have, leadership positions in quality and patient safety at their institutions. Advanced level hospitalists will also have the expertise to teach and mentor other individuals in their quality improvement efforts.
Competency | Basic | Intermediate | Advanced |
---|---|---|---|
Quality measurement and stakeholder interests | Define structure, process, and outcome measures | Compare and contrast relative benefits of using one type of measure vs another | Anticipate and respond to stakeholders' needs and interests |
Define stakeholders and understand their interests related to healthcare quality | Explain measures as defined by stakeholders (Center for Medicare and Medicaid Services, Leapfrog, etc) | Anticipate and respond to changes in quality measures and incentive programs | |
Identify measures as defined by stakeholders (Center for Medicare and Medicaid Services, Leapfrog, etc) | Appreciate variation in quality and utilization performance | Lead efforts to reduce variation in care delivery (see also quality improvement methods) | |
Describe potential unintended consequences of quality measurement and incentive programs | Avoid unintended consequences of quality measurement and incentive programs | ||
Data acquisition and interpretation | Interpret simple statistical methods to compare populations within a sample (chi‐square, t tests, etc) | Describe sources of data for quality measurement | Acquire data from internal and external sources |
Define basic terms used to describe continuous and categorical data (mean, median, standard deviation, interquartile range, percentages, rates, etc) | Identify potential pitfalls in administrative data | Create visual representations of data (Bar, Pareto, and Control Charts) | |
Summarize basic principles of statistical process control | Explain variation in data | Use simple statistical methods to compare populations within a sample (chi‐square, t tests, etc) | |
Interpret data displayed in Pareto and Control Charts | Administer and interpret a survey | ||
Summarize basic survey techniques (including methods to maximize response, minimize bias, and use of ordinal response scales) | |||
Use appropriate terms to describe continuous and categorical data (mean, median, standard deviation, interquartile range, percentages, rates, etc) | |||
Organizational knowledge and leadership skills | Describe the organizational structure of one's institution | Define interests of internal and external stakeholders | Effectively negotiate with stakeholders |
Define leaders within the organization and describe their roles | Collaborate as an effective team member of a quality improvement project | Assemble a quality improvement project team and effectively lead meetings (setting agendas, hold members accountable, etc) | |
Exemplify the importance of leading by example | Explain principles of change management and how it can positively or negatively impact quality improvement project implementation | Motivate change and create vision for ideal state | |
Effectively communicate quality or safety issues identified during routine patient care to the appropriate parties | Communicate effectively in a variety of settings (lead a meeting, public speaking, etc) | ||
Serve as a resource and/or mentor for less‐experienced team members | |||
Patient safety principles | Identify potential sources of error encountered during routine patient care | Compare methods to measure errors and adverse events, including administrative data analysis, chart review, and incident reporting systems | Lead efforts to appropriately measure medical error and/or adverse events |
Compare and contrast medical error with adverse event | Identify and explain how human factors can contribute to medical errors | Lead efforts to redesign systems to reduce errors from occurring; this may include the facilitation of a hospital, departmental, or divisional Root Cause Analysis | |
Describe how the systems approach to medical error is more productive than assigning individual blame | Know the difference between a strong vs a weak action plan for improvement (ie, brief education intervention is weak; skills training with deliberate practice or physical changes are stronger) | Lead efforts to advance the culture of patient safety in the hospital | |
Differentiate among types of error (knowledge/judgment vs systems vs procedural/technical; latent vs active) | |||
Explain the role that incident reporting plays in quality improvement efforts and how reporting can foster a culture of safety | |||
Describe principles of medical error disclosure | |||
Teamwork and communication | Explain how poor teamwork and communication failures contribute to adverse events | Collaborate on administration and interpretation of teamwork and safety culture measures | Lead efforts to improve teamwork and safety culture |
Identify the potential for errors during transitions within and between healthcare settings (handoffs, transfers, discharge) | Describe the principles of effective teamwork and identify behaviors consistent with effective teamwork | Lead efforts to improve teamwork in specific settings (intensive care, medical‐surgical unit, etc) | |
Identify deficiencies in transitions within and between healthcare settings (handoffs, transfers, discharge) | Successfully improve the safety of transitions within and between healthcare settings (handoffs, transfers, discharge) | ||
Quality and safety improvement methods and tools | Define the quality improvement methods used and infrastructure in place at one's hospital | Compare and contrast various quality improvement methods, including six sigma, lean, and PDSA | Lead a quality improvement project using six sigma, lean, or PDSA methodology |
Summarize the basic principles and use of Root Cause Analysis as a tool to evaluate medical error | Collaborate on a quality improvement project using six sigma, lean, or PDSA | Use high level process mapping, fishbone diagrams, etc, to identify areas for opportunity in evaluating a process | |
Describe and collaborate on Failure Mode and Effects Analysis | Lead the development and implementation of clinical protocols to standardize care delivery when appropriate | ||
Actively participate in a Root Cause Analysis | Conduct Failure Mode and Effects Analysis | ||
Conduct Root Cause Analysis | |||
Health information systems | Identify the potential for information systems to reduce as well as contribute to medical error | Define types of clinical decision support | Lead or co‐lead efforts to leverage information systems in quality measurement |
Describe how information systems fit into provider workflow and care delivery | Collaborate on the design of health information systems | Lead or co‐lead efforts to leverage information systems to reduce error and/or improve delivery of effective care | |
Anticipate and prevent unintended consequences of implementation or revision of information systems | |||
Lead or co‐lead efforts to leverage clinical decision support to improve quality and safety | |||
Patient centeredness | Explain the clinical benefits of a patient‐centered approach | Explain benefits and potential limitations of patient satisfaction surveys | Interpret data from patient satisfaction surveys and lead efforts to improve patient satisfaction |
Identify system barriers to effective and safe care from the patient's perspective | Identify clinical areas with suboptimal efficiency and/or timeliness from the patient's perspective | Lead effort to reduce inefficiency and/or improve timeliness from the patient's perspective | |
Describe the value of patient satisfaction surveys and patient and family partnership in care | Promote patient and caregiver education including use of effective education tools | Lead efforts to eliminate system barriers to effective and safe care from the patient's perspective | |
Lead efforts to improve patient and caregiver education including development or implementation of effective education tools |||
Lead efforts to actively involve patients and families in the redesign of healthcare delivery systems and processes |
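To make the basic Data Acquisition and Interpretation standard concrete ("interpret simple statistical methods to compare populations within a sample"), the sketch below implements a Pearson chi‐square test for a 2×2 table in plain Python. The counts and the helper name are hypothetical, chosen only for illustration; in practice a statistics package would be used.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Upper-tail probability of a chi-square variable with 1 degree of freedom
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Hypothetical unit-level counts: readmitted vs not readmitted,
# before (120 of 900) and after (95 of 910) an improvement intervention.
stat, p = chi_square_2x2(120, 900 - 120, 95, 910 - 95)
```

A hospitalist at the basic level would interpret the resulting statistic and p-value; at the intermediate level, he or she would choose and run the test (here, recognizing that a chi-square test, not a t test, fits two proportions).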
Recommended Use of The Competencies
The HQPS Competencies provide a framework for curricula and other professional development experiences in healthcare quality and patient safety. We recommend a step‐wise approach to curriculum development that includes conducting a targeted needs assessment, defining goals and specific learning objectives, and evaluating the curriculum.25 The HQPS Competencies can be used at each step and provide educational targets for learners across a range of interest and experience.
Professional Development
Because residency programs historically have not trained their graduates to achieve a basic level of competence, practicing hospitalists will need to seek out professional development opportunities. Existing educational opportunities include the Quality Track sessions during the SHM Annual Meeting and the SHM Quality Improvement Pre‐Course. Hospitalist leaders are currently using the HQPS Competencies to review and revise annual meeting and pre‐course objectives and content in an effort to meet the expected level of competence for SHM members. Similarly, local SHM chapter and regional hospital medicine leaders should look to the competencies to help select topics and objectives for future presentations. Additionally, the SHM Web site offers tools to develop skills, including a resource room and a quality improvement primer.26 Mentored‐implementation programs, supported by SHM, can help hospitalists acquire more advanced experiential training in quality improvement.
New educational opportunities are being developed, including a comprehensive set of Internet‐based modules designed to help practicing hospitalists achieve a basic level of competence. Hospitalists will be able to achieve continuing medical education (CME) credit upon completion of individual modules. Plans are underway to provide Certification in Hospital Quality and Patient Safety, reflecting an advanced level of competence, upon completion of the entire set, and demonstration of knowledge and skill application through an approved quality improvement project. The certification process will leverage the success of the SHM Leadership Academies and Mentored Implementation projects to help hospitalists apply their new skills in a real world setting.
HQPS Competencies and Focused Practice in Hospital Medicine
Recently, the American Board of Internal Medicine (ABIM) has recognized the field of hospital medicine by developing a new program that provides hospitalists the opportunity to earn Maintenance of Certification (MOC) in Internal Medicine with a Focused Practice in Hospital Medicine.27 Appropriately, hospital quality and patient safety content is included among the knowledge questions on the secure exam, and completion of a practice improvement module (commonly known as PIM) is required for the certification. The SHM Education Committee has developed a Self‐Evaluation of Medical Knowledge module related to hospital quality and patient safety for use in the MOC process. ABIM recertification with Focused Practice in Hospital Medicine is an important and visible step for the Hospital Medicine movement; the content of both the secure exam and the MOC reaffirms the notion that the acquisition of knowledge, skills, and attitudes in hospital quality and patient safety is essential to the practice of hospital medicine.
Medical Education
Because teaching hospitalists frequently serve in important roles as educators and physician leaders in quality improvement, they are often responsible for medical student and resident training in healthcare quality and patient safety. Medical schools and residency programs have struggled to integrate healthcare quality and patient safety into their curricula.11, 12, 28 Hospitalists can play a major role in academic medical centers by helping to develop curricular materials and evaluations related to healthcare quality. Though intended primarily for future and current hospitalists, the HQPS Competencies and standards for the basic level may be adapted to provide educational targets for many learners in undergraduate and graduate medical education. Teaching hospitalists may use these standards to evaluate current educational efforts and design new curricula in collaboration with their medical school and residency program leaders.
Beyond the basic level of training in healthcare quality required of all hospitalists, many residents will benefit from more advanced training experiences, including opportunities to apply knowledge and develop skills related to quality improvement. A recent report from the ACGME concluded that role models and mentors were essential for engaging residents in quality improvement efforts.29 Hospitalists are ideally suited to serve as role models during residents' experiential learning opportunities related to hospital quality. Several residency programs have begun to implement hospitalist tracks13 and quality improvement rotations.30–32 Additionally, some academic medical centers have begun to develop and offer fellowship training in Hospital Medicine.33 These hospitalist‐led educational programs are an ideal opportunity to teach the intermediate and advanced components of healthcare quality and patient safety to residents and fellows who wish to incorporate activity or leadership in quality improvement and patient safety science into their generalist or subspecialty careers. Teaching hospitalists should use the HQPS competency standards to define learning objectives for trainees at this stage of development.
To address the enormous educational needs in quality and safety for future physicians, a cadre of expert teachers in quality and safety will need to be developed. In collaboration with the Alliance for Academic Internal Medicine (AAIM), SHM is developing a Quality and Safety Educators Academy that will target academic hospitalists and other medical educators interested in developing advanced skills in quality improvement and patient safety education.
Assessment of Competence
An essential component of a rigorous faculty development program or medical education initiative is the assessment of whether these endeavors are achieving their stated aims. Published literature provides examples of useful assessment methods applicable to the HQPS Competencies. Knowledge in several areas of HQPS competence may be assessed with the use of multiple choice tests.34, 35 Knowledge of quality improvement methods may be assessed using the Quality Improvement Knowledge Application Tool (QIKAT), an instrument in which the learner responds to each of 3 scenarios with an aim, outcome and process measures, and ideas for changes that may result in improved performance.36 Teamwork and communication skills may be assessed using 360‐degree evaluations37–39 and direct observation using behaviorally anchored rating scales.40–43 Objective structured clinical examinations have been used to assess knowledge and skills related to patient safety principles.44, 45 Notably, few studies have rigorously assessed the validity and reliability of tools designed to evaluate competence related to healthcare quality.46 Additionally, to our knowledge, no prior research has evaluated assessment specifically for hospitalists. Thus, the development and validation of new assessment tools based on the HQPS Competencies for learners at each level is a crucial next step in the educational process. Finally, evaluation of educational initiatives should include analyses of clinical benefit, as the ultimate goal of these efforts is to improve patient care.47, 48
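The reliability concerns raised above can be made concrete with a small example. The sketch below (illustrative only; the ratings are hypothetical and not drawn from any study cited here) computes Cohen's kappa, a chance‐corrected agreement statistic commonly reported when checking the reliability of rating instruments such as behaviorally anchored scales or 360‐degree evaluations:

```python
# Illustrative sketch only: Cohen's kappa for two raters.
# The ratings below are hypothetical, not data from the studies cited in this article.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters scoring the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical raters scoring 8 residents on a pass/fail checklist
a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]
print(round(cohens_kappa(a, b), 3))
```

Here the raters agree on 6 of 8 residents (75%), yet kappa is only about 0.47, because much of that raw agreement is expected by chance alone; this gap is precisely why validation work on assessment tools reports chance‐corrected statistics rather than simple percent agreement.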
Conclusion
Hospitalists are poised to have a tremendous impact on improving the quality of care for hospitalized patients. The lack of training in quality improvement in traditional medical education programs, in which most current hospitalists were trained, can be overcome through appropriate use of the HQPS Competencies. Formal incorporation of the HQPS Competencies into professional development programs, innovative educational initiatives, and curricula will help provide current and future generations of hospitalists with the skills needed to be successful.
- Crossing the Quality Chasm: A New Health System for the Twenty‐first Century. Washington, DC: Institute of Medicine; 2001.
- Care in U.S. hospitals—the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265–274.
- Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA. 2003;290(14):1868–1874.
- Hospital Compare—A quality tool provided by Medicare. Available at: http://www.hospitalcompare.hhs.gov/. Accessed April 23, 2010.
- The Leapfrog Group: Hospital Quality Ratings. Available at: http://www.leapfroggroup.org/cp. Accessed April 30, 2010.
- Why Not the Best? A Healthcare Quality Improvement Resource. Available at: http://www.whynotthebest.org/. Accessed April 30, 2010.
- The Joint Commission: Facts about ORYX for hospitals (National Hospital Quality Measures). Available at: http://www.jointcommission.org/accreditationprograms/hospitals/oryx/oryx_facts.htm. Accessed August 19, 2010.
- The Joint Commission: National Patient Safety Goals. Available at: http://www.jointcommission.org/patientsafety/nationalpatientsafetygoals/. Accessed August 9, 2010.
- Hospital Acquired Conditions: Overview. Available at: http://www.cms.gov/HospitalAcqCond/01_Overview.asp. Accessed April 30, 2010.
- Report to Congress: Plan to Implement a Medicare Hospital Value‐based Purchasing Program. Washington, DC: US Department of Health and Human Services, Center for Medicare and Medicaid Services; 2007.
- Unmet Needs: Teaching Physicians to Provide Safe Patient Care. Boston, MA: Lucian Leape Institute at the National Patient Safety Foundation; 2010.
- Patient safety education at U.S. and Canadian medical schools: results from the 2006 Clerkship Directors in Internal Medicine survey. Acad Med. 2009;84(12):1672–1676.
- Fulfilling the promise of hospital medicine: tailoring internal medicine training to address hospitalists' needs. J Gen Intern Med. 2008;23(7):1110–1115.
- Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247–254.
- Redesigning residency education in internal medicine: a position paper from the Association of Program Directors in Internal Medicine. Ann Intern Med. 2006;144(12):920–926.
- Redesigning training for internal medicine. Ann Intern Med. 2006;144(12):927–932.
- Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1(1):48–56.
- Intermountain Healthcare. 20‐Day Course for Executives. 2001.
- Curriculum Development for Medical Education: A Six‐step Approach. Baltimore, MD: Johns Hopkins Press; 1998.
- Society of Hospital Medicine Quality Improvement Basics. Available at: http://www.hospitalmedicine.org/Content/NavigationMenu/QualityImprovement/QIPrimer/QI_Primer_Landing_Pa.htm. Accessed June 4, 2010.
- American Board of Internal Medicine: Questions and Answers Regarding ABIM's Maintenance of Certification in Internal Medicine With a Focused Practice in Hospital Medicine Program. Available at: http://www.abim.org/news/news/focused‐practice‐hospital‐medicine‐qa.aspx. Accessed August 9, 2010.
- Assessing the needs of residency program directors to meet the ACGME general competencies. Acad Med. 2002;77(7):750.
- Accreditation Council for Graduate Medical Education and Institute for Healthcare Improvement 90‐Day Project. Involving Residents in Quality Improvement: Contrasting "Top‐Down" and "Bottom‐Up" Approaches. Chicago, IL: ACGME; 2008.
- Teaching internal medicine residents quality improvement techniques using the ABIM's practice improvement modules. J Gen Intern Med. 2008;23(7):927–930.
- A self‐instructional model to teach systems‐based practice and practice‐based learning and improvement. J Gen Intern Med. 2008;23(7):931–936.
- Creating a quality improvement elective for medical house officers. J Gen Intern Med. 2004;19(8):861–867.
- Hospital medicine fellowships: works in progress. Am J Med. 2006;119(1):72.e1‐e7.
- Web‐based education in systems‐based practice: a randomized trial. Arch Intern Med. 2007;167(4):361–366.
- A self‐instructional model to teach systems‐based practice and practice‐based learning and improvement. J Gen Intern Med. 2008;23(7):931–936.
- The quality improvement knowledge application tool: an instrument to assess knowledge application in practice‐based learning and improvement. J Gen Intern Med. 2003;18(suppl 1):250.
- Effect of multisource feedback on resident communication skills and professionalism: a randomized controlled trial. Arch Pediatr Adolesc Med. 2007;161(1):44–49.
- Reliability of a 360‐degree evaluation to assess resident competence. Am J Phys Med Rehabil. 2007;86(10):845–852.
- Pilot study of a 360‐degree assessment instrument for physical medicine. 82(5):394–402.
- Anaesthetists' non‐technical skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth. 2003;90(5):580–588.
- The Mayo high performance teamwork scale: reliability and validity for evaluating key crew resource management skills. Simul Healthc. 2007;2(1):4–10.
- Reliability of a revised NOTECHS scale for use in surgical teams. Am J Surg. 2008;196(2):184–190.
- Observational teamwork assessment for surgery: construct validation with expert versus novice raters. Ann Surg. 2009;249(6):1047–1051.
- A patient safety objective structured clinical examination. J Patient Saf. 2009;5(2):55–60.
- The Objective Structured Clinical Examination as an educational tool in patient safety. Jt Comm J Qual Patient Saf. 2007;33(1):48–53.
- Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review. Acad Med. 2009;84(3):301–309.
- Effectiveness of teaching quality improvement to clinicians: a systematic review. JAMA. 2007;298(9):1023–1037.
- Methodological rigor of quality improvement curricula for physician trainees: a systematic review and recommendations for change. Acad Med. 2009;84(12):1677–1692.
Healthcare quality is defined as the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.1 Delivering high quality care to patients in the hospital setting is especially challenging, given the rapid pace of clinical care, the severity and multitude of patient conditions, and the interdependence of complex processes within the hospital system. Research has shown that hospitalized patients do not consistently receive recommended care2 and are at risk for experiencing preventable harm.3 In an effort to stimulate improvement, stakeholders have called for increased accountability, including enhanced transparency and differential payment based on performance. A growing number of hospital process and outcome measures are readily available to the public via the Internet.4–6 The Joint Commission, which accredits US hospitals, requires the collection of core quality measure data7 and sets the expectation that National Patient Safety Goals be met to maintain accreditation.8 Moreover, the Center for Medicare and Medicaid Services (CMS) has developed a Value‐Based Purchasing (VBP) plan intended to adjust hospital payment based on quality measures and the occurrence of certain hospital‐acquired conditions.9, 10
Because of their clinical expertise, understanding of hospital clinical operations, leadership of multidisciplinary inpatient teams, and vested interest to improve the systems in which they work, hospitalists are perfectly positioned to collaborate with their institutions to improve the quality of care delivered to inpatients. However, many hospitalists are inadequately prepared to engage in efforts to improve quality, because medical schools and residency programs have not traditionally included or emphasized healthcare quality and patient safety in their curricula.11–13 In a survey of 389 internal medicine‐trained hospitalists, significant educational deficiencies were identified in the area of systems‐based practice.14 Specifically, the topics of quality improvement, team management, practice guideline development, health information systems management, and coordination of care between healthcare settings were listed as essential skills for hospitalist practice but underemphasized in residency training. Recognizing the gap between the needs of practicing physicians and current medical education provided in healthcare quality, professional societies have recently published position papers calling for increased training in quality, safety, and systems, both in medical school11 and residency training.15, 16
The Society of Hospital Medicine (SHM) convened a Quality Summit in December 2008 to develop strategic plans related to healthcare quality. Summit attendees felt that most hospitalists lack the formal training necessary to evaluate, implement, and sustain system changes within the hospital. In response, the SHM Hospital Quality and Patient Safety (HQPS) Committee formed a Quality Improvement Education (QIE) subcommittee in 2009 to assess the needs of hospitalists with respect to hospital quality and patient safety, and to evaluate and expand upon existing educational programs in this area. Membership of the QIE subcommittee consisted of hospitalists with extensive experience in healthcare quality and medical education. The QIE subcommittee refined and expanded upon the healthcare quality and patient safety‐related competencies initially described in the Core Competencies in Hospital Medicine.17 The purpose of this report is to describe the development of, provide definitions for, and make recommendations on the use of the HQPS Competencies.
Development of the Hospital Quality and Patient Safety Competencies
The multistep process used by the SHM QIE subcommittee to develop the HQPS Competencies is summarized in Figure 1. We performed an in‐depth evaluation of current educational materials and offerings, including a review of the Core Competencies in Hospital Medicine, past annual SHM Quality Improvement Pre‐Course objectives, and the content of training courses offered by other organizations.17–22 Throughout our analysis, we emphasized the identification of gaps in content relevant to hospitalists. We then used the Institute of Medicine's (IOM) 6 aims for healthcare quality as a foundation for developing the HQPS Competencies.1 Specifically, the IOM states that healthcare should be safe, effective, patient‐centered, timely, efficient, and equitable. Additionally, we reviewed and integrated elements of the Practice‐Based Learning and Improvement (PBLI) and Systems‐Based Practice (SBP) competencies as defined by the Accreditation Council for Graduate Medical Education (ACGME).23 We defined general areas of competence and specific standards for knowledge, skills, and attitudes within each area. Subcommittee members reflected on their own experience, as clinicians, educators, and leaders in healthcare quality and patient safety, to inform and refine the competency definitions and standards. Acknowledging that some hospitalists may serve as collaborators or clinical content experts, while others may serve as leaders of hospital quality initiatives, 3 levels of expertise were established: basic, intermediate, and advanced.
The QIE subcommittee presented a draft version of the HQPS Competencies to the HQPS Committee in the fall of 2009 and incorporated suggested revisions. The revised set of competencies was then reviewed by members of the Leadership and Education Committees during the winter of 2009‐2010, and additional recommendations were included in the final version now described.
Description of the Competencies
The 8 areas of competence are: Quality Measurement and Stakeholder Interests, Data Acquisition and Interpretation, Organizational Knowledge and Leadership Skills, Patient Safety Principles, Teamwork and Communication, Quality and Safety Improvement Methods, Health Information Systems, and Patient Centeredness. Three levels of competence and standards within each level and area are defined in Table 1. Standards use carefully selected action verbs to reflect educational goals for hospitalists at each level.24 The basic level represents a minimum level of competency for all practicing hospitalists. The intermediate level represents a hospitalist who is prepared to meaningfully engage and collaborate with his or her institution in quality improvement efforts. A hospitalist at this level may also lead uncomplicated improvement projects for his or her medical center and/or hospital medicine group. The advanced level represents a hospitalist prepared to lead quality improvement efforts for his or her institution and/or hospital medicine group. Many hospitalists at this level will have, or will be prepared to have, leadership positions in quality and patient safety at their institutions. Advanced level hospitalists will also have the expertise to teach and mentor other individuals in their quality improvement efforts.
Competency | Basic | Intermediate | Advanced |
---|---|---|---|
Quality measurement and stakeholder interests | Define structure, process, and outcome measures | Compare and contrast relative benefits of using one type of measure vs another | Anticipate and respond to stakeholders' needs and interests |
Define stakeholders and understand their interests related to healthcare quality | Explain measures as defined by stakeholders (Center for Medicare and Medicaid Services, Leapfrog, etc) | Anticipate and respond to changes in quality measures and incentive programs | |
Identify measures as defined by stakeholders (Center for Medicare and Medicaid Services, Leapfrog, etc) | Appreciate variation in quality and utilization performance | Lead efforts to reduce variation in care delivery (see also quality improvement methods) | |
Describe potential unintended consequences of quality measurement and incentive programs | Avoid unintended consequences of quality measurement and incentive programs | ||
Data acquisition and interpretation | Interpret simple statistical methods to compare populations within a sample (chi‐square, t tests, etc) | Describe sources of data for quality measurement | Acquire data from internal and external sources |
Define basic terms used to describe continuous and categorical data (mean, median, standard deviation, interquartile range, percentages, rates, etc) | Identify potential pitfalls in administrative data | Create visual representations of data (Bar, Pareto, and Control Charts) | |
Summarize basic principles of statistical process control | Explain variation in data | Use simple statistical methods to compare populations within a sample (chi‐square, t tests, etc) | |
Interpret data displayed in Pareto and Control Charts | Administer and interpret a survey | ||
Summarize basic survey techniques (including methods to maximize response, minimize bias, and use of ordinal response scales) | |||
Use appropriate terms to describe continuous and categorical data (mean, median, standard deviation, interquartile range, percentages, rates, etc) | |||
Organizational knowledge and leadership skills | Describe the organizational structure of one's institution | Define interests of internal and external stakeholders | Effectively negotiate with stakeholders |
Define leaders within the organization and describe their roles | Collaborate as an effective team member of a quality improvement project | Assemble a quality improvement project team and effectively lead meetings (set agendas, hold members accountable, etc) |
Exemplify the importance of leading by example | Explain principles of change management and how it can positively or negatively impact quality improvement project implementation | Motivate change and create vision for ideal state | |
Effectively communicate quality or safety issues identified during routine patient care to the appropriate parties | Communicate effectively in a variety of settings (lead a meeting, public speaking, etc) | ||
Serve as a resource and/or mentor for less‐experienced team members | |||
Patient safety principles | Identify potential sources of error encountered during routine patient care | Compare methods to measure errors and adverse events, including administrative data analysis, chart review, and incident reporting systems | Lead efforts to appropriately measure medical error and/or adverse events |
Compare and contrast medical error with adverse event | Identify and explain how human factors can contribute to medical errors | Lead efforts to redesign systems to reduce errors from occurring; this may include the facilitation of a hospital, departmental, or divisional Root Cause Analysis | |
Describe how the systems approach to medical error is more productive than assigning individual blame | Know the difference between a strong vs a weak action plan for improvement (ie, brief education intervention is weak; skills training with deliberate practice or physical changes are stronger) | Lead efforts to advance the culture of patient safety in the hospital | |
Differentiate among types of error (knowledge/judgment vs systems vs procedural/technical; latent vs active) | |||
Explain the role that incident reporting plays in quality improvement efforts and how reporting can foster a culture of safety | |||
Describe principles of medical error disclosure | |||
Teamwork and communication | Explain how poor teamwork and communication failures contribute to adverse events | Collaborate on administration and interpretation of teamwork and safety culture measures | Lead efforts to improve teamwork and safety culture |
Identify the potential for errors during transitions within and between healthcare settings (handoffs, transfers, discharge) | Describe the principles of effective teamwork and identify behaviors consistent with effective teamwork | Lead efforts to improve teamwork in specific settings (intensive care, medical‐surgical unit, etc) | |
Identify deficiencies in transitions within and between healthcare settings (handoffs, transfers, discharge) | Successfully improve the safety of transitions within and between healthcare settings (handoffs, transfers, discharge) | ||
Quality and safety improvement methods and tools | Define the quality improvement methods used and infrastructure in place at one's hospital | Compare and contrast various quality improvement methods, including six sigma, lean, and PDSA | Lead a quality improvement project using six sigma, lean, or PDSA methodology |
Summarize the basic principles and use of Root Cause Analysis as a tool to evaluate medical error | Collaborate on a quality improvement project using six sigma, lean, or PDSA | Use high‐level process mapping, fishbone diagrams, etc, to identify opportunities for improvement when evaluating a process |
Describe and collaborate on Failure Mode and Effects Analysis | Lead the development and implementation of clinical protocols to standardize care delivery when appropriate | ||
Actively participate in a Root Cause Analysis | Conduct Failure Mode and Effects Analysis | ||
Conduct Root Cause Analysis | |||
Health information systems | Identify the potential for information systems to reduce as well as contribute to medical error | Define types of clinical decision support | Lead or co‐lead efforts to leverage information systems in quality measurement |
Describe how information systems fit into provider workflow and care delivery | Collaborate on the design of health information systems | Lead or co‐lead efforts to leverage information systems to reduce error and/or improve delivery of effective care | |
Anticipate and prevent unintended consequences of implementation or revision of information systems | |||
Lead or co‐lead efforts to leverage clinical decision support to improve quality and safety | |||
Patient centeredness | Explain the clinical benefits of a patient‐centered approach | Explain benefits and potential limitations of patient satisfaction surveys | Interpret data from patient satisfaction surveys and lead efforts to improve patient satisfaction |
Identify system barriers to effective and safe care from the patient's perspective | Identify clinical areas with suboptimal efficiency and/or timeliness from the patient's perspective | Lead efforts to reduce inefficiency and/or improve timeliness from the patient's perspective |
Describe the value of patient satisfaction surveys and patient and family partnership in care | Promote patient and caregiver education including use of effective education tools | Lead efforts to eliminate system barriers to effective and safe care from the patient's perspective | |
Lead efforts to improve patient and caregiver education, including development or implementation of effective education tools |||
Lead efforts to actively involve patients and families in the redesign of healthcare delivery systems and processes |
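Several of the Data Acquisition and Interpretation standards in Table 1 refer to statistical process control and the interpretation of control charts. As a minimal illustration (the monthly readmission counts below are hypothetical and not part of the competency document), the following sketch computes p‐chart control limits for monthly readmission proportions and flags months that fall outside them:

```python
# Illustrative sketch only: a p-chart, one of the statistical process control
# tools named in the Data Acquisition and Interpretation competency.
# The readmission counts are hypothetical.
import math

def p_chart(events, sizes, sigma=3.0):
    """Center line and per-subgroup (lower, upper) control limits for a p-chart."""
    p_bar = sum(events) / sum(sizes)  # overall proportion = center line
    limits = []
    for n in sizes:
        half_width = sigma * math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - half_width), min(1.0, p_bar + half_width)))
    return p_bar, limits

def out_of_control(events, sizes):
    """Indices of subgroups whose observed proportion falls outside the limits."""
    _, limits = p_chart(events, sizes)
    return [i for i, (e, n) in enumerate(zip(events, sizes))
            if not limits[i][0] <= e / n <= limits[i][1]]

# Hypothetical 30-day readmissions out of monthly discharges
readmits = [18, 22, 19, 40, 21, 17]
discharges = [100, 110, 105, 115, 108, 102]
print(out_of_control(readmits, discharges))  # only the 40/115 month is flagged
```

Plotting each month against limits of this kind is what distinguishes special‐cause signals from ordinary month‐to‐month variation, which is the interpretive skill the basic standard "Summarize basic principles of statistical process control" points toward.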
To address the enormous educational needs in quality and safety for future physicians, a cadre of expert teachers in quality and safety will need to be developed. In collaboration with the Alliance for Academic Internal Medicine (AAIM), SHM is developing a Quality and Safety Educators Academy which will target academic hospitalists and other medical educators interested in developing advanced skills in quality improvement and patient safety education.
Assessment of Competence
An essential component of a rigorous faculty development program or medical education initiative is the assessment of whether these endeavors are achieving their stated aims. Published literature provides examples of useful assessment methods applicable to the HQPS Competencies. Knowledge in several areas of HQPS competence may be assessed with the use of multiple choice tests.34, 35 Knowledge of quality improvement methods may be assessed using the Quality Improvement Knowledge Application Tool (QIKAT), an instrument in which the learner responds to each of 3 scenarios with an aim, outcome and process measures, and ideas for changes which may result in improved performance.36 Teamwork and communication skills may be assessed using 360‐degree evaluations3739 and direct observation using behaviorally anchored rating scales.4043 Objective structured clinical examinations have been used to assess knowledge and skills related to patient safety principles.44, 45 Notably, few studies have rigorously assessed the validity and reliability of tools designed to evaluate competence related to healthcare quality.46 Additionally, to our knowledge, no prior research has evaluated assessment specifically for hospitalists. Thus, the development and validation of new assessment tools based on the HQPS Competencies for learners at each level is a crucial next step in the educational process. Additionally, evaluation of educational initiatives should include analyses of clinical benefit, as the ultimate goal of these efforts is to improve patient care.47, 48
Conclusion
Hospitalists are poised to have a tremendous impact on improving the quality of care for hospitalized patients. The lack of training in quality improvement in traditional medical education programs, in which most current hospitalists were trained, can be overcome through appropriate use of the HQPS Competencies. Formal incorporation of the HQPS Competencies into professional development programs, and innovative educational initiatives and curricula, will help provide current hospitalists and the next generations of hospitalists with the needed skills to be successful.
Healthcare quality is defined as the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.1 Delivering high quality care to patients in the hospital setting is especially challenging, given the rapid pace of clinical care, the severity and multitude of patient conditions, and the interdependence of complex processes within the hospital system. Research has shown that hospitalized patients do not consistently receive recommended care2 and are at risk for experiencing preventable harm.3 In an effort to stimulate improvement, stakeholders have called for increased accountability, including enhanced transparency and differential payment based on performance. A growing number of hospital process and outcome measures are readily available to the public via the Internet.4–6 The Joint Commission, which accredits US hospitals, requires the collection of core quality measure data7 and sets the expectation that National Patient Safety Goals be met to maintain accreditation.8 Moreover, the Center for Medicare and Medicaid Services (CMS) has developed a Value‐Based Purchasing (VBP) plan intended to adjust hospital payment based on quality measures and the occurrence of certain hospital‐acquired conditions.9, 10
Because of their clinical expertise, understanding of hospital clinical operations, leadership of multidisciplinary inpatient teams, and vested interest in improving the systems in which they work, hospitalists are perfectly positioned to collaborate with their institutions to improve the quality of care delivered to inpatients. However, many hospitalists are inadequately prepared to engage in efforts to improve quality, because medical schools and residency programs have not traditionally included or emphasized healthcare quality and patient safety in their curricula.11–13 In a survey of 389 internal medicine‐trained hospitalists, significant educational deficiencies were identified in the area of systems‐based practice.14 Specifically, the topics of quality improvement, team management, practice guideline development, health information systems management, and coordination of care between healthcare settings were listed as essential skills for hospitalist practice but underemphasized in residency training. Recognizing the gap between the needs of practicing physicians and current medical education provided in healthcare quality, professional societies have recently published position papers calling for increased training in quality, safety, and systems, both in medical school11 and residency training.15, 16
The Society of Hospital Medicine (SHM) convened a Quality Summit in December 2008 to develop strategic plans related to healthcare quality. Summit attendees felt that most hospitalists lack the formal training necessary to evaluate, implement, and sustain system changes within the hospital. In response, the SHM Hospital Quality and Patient Safety (HQPS) Committee formed a Quality Improvement Education (QIE) subcommittee in 2009 to assess the needs of hospitalists with respect to hospital quality and patient safety, and to evaluate and expand upon existing educational programs in this area. Membership of the QIE subcommittee consisted of hospitalists with extensive experience in healthcare quality and medical education. The QIE subcommittee refined and expanded upon the healthcare quality and patient safety‐related competencies initially described in the Core Competencies in Hospital Medicine.17 The purpose of this report is to describe the development, provide definitions, and make recommendations on the use of the Hospital Quality and Patient Safety (HQPS) Competencies.
Development of The Hospital Quality and Patient Safety Competencies
The multistep process used by the SHM QIE subcommittee to develop the HQPS Competencies is summarized in Figure 1. We performed an in‐depth evaluation of current educational materials and offerings, including a review of the Core Competencies in Hospital Medicine, past annual SHM Quality Improvement Pre‐Course objectives, and the content of training courses offered by other organizations.17–22 Throughout our analysis, we emphasized the identification of gaps in content relevant to hospitalists. We then used the Institute of Medicine's (IOM) 6 aims for healthcare quality as a foundation for developing the HQPS Competencies.1 Specifically, the IOM states that healthcare should be safe, effective, patient‐centered, timely, efficient, and equitable. Additionally, we reviewed and integrated elements of the Practice‐Based Learning and Improvement (PBLI) and Systems‐Based Practice (SBP) competencies as defined by the Accreditation Council for Graduate Medical Education (ACGME).23 We defined general areas of competence and specific standards for knowledge, skills, and attitudes within each area. Subcommittee members reflected on their own experience, as clinicians, educators, and leaders in healthcare quality and patient safety, to inform and refine the competency definitions and standards. Because some hospitalists may serve as collaborators or clinical content experts, while others may serve as leaders of hospital quality initiatives, we established 3 levels of expertise: basic, intermediate, and advanced.
The QIE subcommittee presented a draft version of the HQPS Competencies to the HQPS Committee in the fall of 2009 and incorporated suggested revisions. The revised set of competencies was then reviewed by members of the Leadership and Education Committees during the winter of 2009‐2010, and additional recommendations were included in the final version now described.
Description of The Competencies
The 8 areas of competence include: Quality Measurement and Stakeholder Interests, Data Acquisition and Interpretation, Organizational Knowledge and Leadership Skills, Patient Safety Principles, Teamwork and Communication, Quality and Safety Improvement Methods, Health Information Systems, and Patient Centeredness. Three levels of competence and standards within each level and area are defined in Table 1. Standards use carefully selected action verbs to reflect educational goals for hospitalists at each level.24 The basic level represents a minimum level of competency for all practicing hospitalists. The intermediate level represents a hospitalist who is prepared to meaningfully engage and collaborate with his or her institution in quality improvement efforts. A hospitalist at this level may also lead uncomplicated improvement projects for his or her medical center and/or hospital medicine group. The advanced level represents a hospitalist prepared to lead quality improvement efforts for his or her institution and/or hospital medicine group. Many hospitalists at this level will have, or will be prepared to have, leadership positions in quality and patient safety at their institutions. Advanced level hospitalists will also have the expertise to teach and mentor other individuals in their quality improvement efforts.
Competency | Basic | Intermediate | Advanced |
---|---|---|---|
Quality measurement and stakeholder interests | Define structure, process, and outcome measures | Compare and contrast relative benefits of using one type of measure vs another | Anticipate and respond to stakeholders' needs and interests |
Define stakeholders and understand their interests related to healthcare quality | Explain measures as defined by stakeholders (Center for Medicare and Medicaid Services, Leapfrog, etc) | Anticipate and respond to changes in quality measures and incentive programs | |
Identify measures as defined by stakeholders (Center for Medicare and Medicaid Services, Leapfrog, etc) | Appreciate variation in quality and utilization performance | Lead efforts to reduce variation in care delivery (see also quality improvement methods) | |
Describe potential unintended consequences of quality measurement and incentive programs | Avoid unintended consequences of quality measurement and incentive programs | ||
Data acquisition and interpretation | Interpret simple statistical methods to compare populations within a sample (chi‐square, t tests, etc) | Describe sources of data for quality measurement | Acquire data from internal and external sources |
Define basic terms used to describe continuous and categorical data (mean, median, standard deviation, interquartile range, percentages, rates, etc) | Identify potential pitfalls in administrative data | Create visual representations of data (Bar, Pareto, and Control Charts) | |
Summarize basic principles of statistical process control | Explain variation in data | Use simple statistical methods to compare populations within a sample (chi‐square, t tests, etc) | |
Interpret data displayed in Pareto and Control Charts | Administer and interpret a survey | ||
Summarize basic survey techniques (including methods to maximize response, minimize bias, and use of ordinal response scales) | |||
Use appropriate terms to describe continuous and categorical data (mean, median, standard deviation, interquartile range, percentages, rates, etc) | |||
Organizational knowledge and leadership skills | Describe the organizational structure of one's institution | Define interests of internal and external stakeholders | Effectively negotiate with stakeholders |
Define leaders within the organization and describe their roles | Collaborate as an effective team member of a quality improvement project | Assemble a quality improvement project team and effectively lead meetings (set agendas, hold members accountable, etc) |
Exemplify the importance of leading by example | Explain principles of change management and how it can positively or negatively impact quality improvement project implementation | Motivate change and create vision for ideal state | |
Effectively communicate quality or safety issues identified during routine patient care to the appropriate parties | Communicate effectively in a variety of settings (lead a meeting, public speaking, etc) | ||
Serve as a resource and/or mentor for less‐experienced team members | |||
Patient safety principles | Identify potential sources of error encountered during routine patient care | Compare methods to measure errors and adverse events, including administrative data analysis, chart review, and incident reporting systems | Lead efforts to appropriately measure medical error and/or adverse events |
Compare and contrast medical error with adverse event | Identify and explain how human factors can contribute to medical errors | Lead efforts to redesign systems to reduce errors from occurring; this may include the facilitation of a hospital, departmental, or divisional Root Cause Analysis | |
Describe how the systems approach to medical error is more productive than assigning individual blame | Know the difference between a strong vs a weak action plan for improvement (ie, brief education intervention is weak; skills training with deliberate practice or physical changes are stronger) | Lead efforts to advance the culture of patient safety in the hospital | |
Differentiate among types of error (knowledge/judgment vs systems vs procedural/technical; latent vs active) | |||
Explain the role that incident reporting plays in quality improvement efforts and how reporting can foster a culture of safety | |||
Describe principles of medical error disclosure | |||
Teamwork and communication | Explain how poor teamwork and communication failures contribute to adverse events | Collaborate on administration and interpretation of teamwork and safety culture measures | Lead efforts to improve teamwork and safety culture |
Identify the potential for errors during transitions within and between healthcare settings (handoffs, transfers, discharge) | Describe the principles of effective teamwork and identify behaviors consistent with effective teamwork | Lead efforts to improve teamwork in specific settings (intensive care, medical‐surgical unit, etc) | |
Identify deficiencies in transitions within and between healthcare settings (handoffs, transfers, discharge) | Successfully improve the safety of transitions within and between healthcare settings (handoffs, transfers, discharge) | ||
Quality and safety improvement methods and tools | Define the quality improvement methods used and infrastructure in place at one's hospital | Compare and contrast various quality improvement methods, including six sigma, lean, and PDSA | Lead a quality improvement project using six sigma, lean, or PDSA methodology |
Summarize the basic principles and use of Root Cause Analysis as a tool to evaluate medical error | Collaborate on a quality improvement project using six sigma, lean, or PDSA | Use high level process mapping, fishbone diagrams, etc, to identify areas for opportunity in evaluating a process | |
Describe and collaborate on Failure Mode and Effects Analysis | Lead the development and implementation of clinical protocols to standardize care delivery when appropriate | ||
Actively participate in a Root Cause Analysis | Conduct Failure Mode and Effects Analysis | ||
Conduct Root Cause Analysis | |||
Health information systems | Identify the potential for information systems to reduce as well as contribute to medical error | Define types of clinical decision support | Lead or co‐lead efforts to leverage information systems in quality measurement |
Describe how information systems fit into provider workflow and care delivery | Collaborate on the design of health information systems | Lead or co‐lead efforts to leverage information systems to reduce error and/or improve delivery of effective care | |
Anticipate and prevent unintended consequences of implementation or revision of information systems | |||
Lead or co‐lead efforts to leverage clinical decision support to improve quality and safety | |||
Patient centeredness | Explain the clinical benefits of a patient‐centered approach | Explain benefits and potential limitations of patient satisfaction surveys | Interpret data from patient satisfaction surveys and lead efforts to improve patient satisfaction |
Identify system barriers to effective and safe care from the patient's perspective | Identify clinical areas with suboptimal efficiency and/or timeliness from the patient's perspective | Lead efforts to reduce inefficiency and/or improve timeliness from the patient's perspective |
Describe the value of patient satisfaction surveys and patient and family partnership in care | Promote patient and caregiver education including use of effective education tools | Lead efforts to eliminate system barriers to effective and safe care from the patient's perspective | |
Lead efforts to improve patient and caregiver education including development or implementation of effective education tools |
Lead efforts to actively involve patients and families in the redesign of healthcare delivery systems and processes |
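The Data Acquisition and Interpretation competency above references statistical process control, control charts, and simple comparative statistics (chi‐square, t tests). As a minimal illustration of these ideas — using made‐up readmission figures, not data from this article, and a simplified 3‐sigma limit rather than a formal moving‐range estimate — the following Python sketch computes individuals‐chart control limits for a run of monthly rates and a two‐proportion chi‐square statistic comparing two units:

```python
import statistics

def control_limits(values, n_sigma=3):
    """Center line and upper/lower control limits for an individuals chart.

    Simplified illustration: uses the sample standard deviation directly;
    formal SPC charts typically estimate sigma from the average moving range.
    """
    center = statistics.mean(values)
    sd = statistics.stdev(values)
    return center - n_sigma * sd, center, center + n_sigma * sd

def two_proportion_chi_square(events_a, n_a, events_b, n_b):
    """Chi-square statistic (1 df, no continuity correction) comparing two
    rates, e.g. readmission rates on two hospital units."""
    pooled = (events_a + events_b) / (n_a + n_b)
    observed = [events_a, n_a - events_a, events_b, n_b - events_b]
    expected = [n_a * pooled, n_a * (1 - pooled),
                n_b * pooled, n_b * (1 - pooled)]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Twelve months of readmission rates (%) for one unit -- illustrative only.
monthly_rates = [18.2, 19.5, 17.8, 20.1, 18.9, 19.3,
                 18.4, 19.9, 18.7, 19.1, 18.5, 19.6]
lcl, center, ucl = control_limits(monthly_rates)
# A point outside (lcl, ucl) would signal special-cause variation.

# Hypothetical comparison: 42/200 readmissions on unit A vs 28/210 on unit B.
chi2 = two_proportion_chi_square(42, 200, 28, 210)
# chi2 above ~3.84 indicates a difference significant at p < 0.05 (1 df).
```

In practice a hospitalist would use a statistics package (e.g., `scipy.stats`) rather than hand‐rolled formulas; the point is that the intermediate‐level standards in the table correspond to concrete, learnable computations.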
Recommended Use of The Competencies
The HQPS Competencies provide a framework for curricula and other professional development experiences in healthcare quality and patient safety. We recommend a step‐wise approach to curriculum development that includes conducting a targeted needs assessment, defining goals and specific learning objectives, and evaluating the curriculum.25 The HQPS Competencies can be used at each step and provide educational targets for learners across a range of interest and experience.
Professional Development
Since residency programs historically have not trained their graduates to achieve a basic level of competence in these areas, practicing hospitalists will need to seek out professional development opportunities. Existing educational opportunities include the Quality Track sessions during the SHM Annual Meeting and the SHM Quality Improvement Pre‐Course. Hospitalist leaders are currently using the HQPS Competencies to review and revise annual meeting and pre‐course objectives and content in an effort to meet the expected level of competence for SHM members. Similarly, local SHM Chapter and regional hospital medicine leaders should look to the competencies to help select topics and objectives for future presentations. Additionally, the SHM Web site offers tools to develop skills, including a resource room and quality improvement primer.26 Mentored‐implementation programs, supported by SHM, can help hospitalists acquire more advanced experiential training in quality improvement.
New educational opportunities are being developed, including a comprehensive set of Internet‐based modules designed to help practicing hospitalists achieve a basic level of competence. Hospitalists will be able to achieve continuing medical education (CME) credit upon completion of individual modules. Plans are underway to provide Certification in Hospital Quality and Patient Safety, reflecting an advanced level of competence, upon completion of the entire set, and demonstration of knowledge and skill application through an approved quality improvement project. The certification process will leverage the success of the SHM Leadership Academies and Mentored Implementation projects to help hospitalists apply their new skills in a real world setting.
HQPS Competencies and Focused Practice in Hospital Medicine
Recently, the American Board of Internal Medicine (ABIM) has recognized the field of hospital medicine by developing a new program that provides hospitalists the opportunity to earn Maintenance of Certification (MOC) in Internal Medicine with a Focused Practice in Hospital Medicine.27 Appropriately, hospital quality and patient safety content is included among the knowledge questions on the secure exam, and completion of a practice improvement module (commonly known as PIM) is required for the certification. The SHM Education Committee has developed a Self‐Evaluation of Medical Knowledge module related to hospital quality and patient safety for use in the MOC process. ABIM recertification with Focused Practice in Hospital Medicine is an important and visible step for the Hospital Medicine movement; the content of both the secure exam and the MOC reaffirms the notion that the acquisition of knowledge, skills, and attitudes in hospital quality and patient safety is essential to the practice of hospital medicine.
Medical Education
Because teaching hospitalists frequently serve in important roles as educators and physician leaders in quality improvement, they are often responsible for medical student and resident training in healthcare quality and patient safety. Medical schools and residency programs have struggled to integrate healthcare quality and patient safety into their curricula.11, 12, 28 Hospitalists can play a major role in academic medical centers by helping to develop curricular materials and evaluations related to healthcare quality. Though intended primarily for future and current hospitalists, the HQPS Competencies and standards for the basic level may be adapted to provide educational targets for many learners in undergraduate and graduate medical education. Teaching hospitalists may use these standards to evaluate current educational efforts and design new curricula in collaboration with their medical school and residency program leaders.
Beyond the basic level of training in healthcare quality required for all, many residents will benefit from more advanced training experiences, including opportunities to apply knowledge and develop skills related to quality improvement. A recent report from the ACGME concluded that role models and mentors were essential for engaging residents in quality improvement efforts.29 Hospitalists are ideally suited to serve as role models during residents' experiential learning opportunities related to hospital quality. Several residency programs have begun to implement hospitalist tracks13 and quality improvement rotations.30–32 Additionally, some academic medical centers have begun to develop and offer fellowship training in Hospital Medicine.33 These hospitalist‐led educational programs are an ideal opportunity to teach the intermediate and advanced training components of healthcare quality and patient safety to residents and fellows who wish to incorporate activity or leadership in quality improvement and patient safety science into their generalist or subspecialty careers. Teaching hospitalists should use the HQPS competency standards to define learning objectives for trainees at this stage of development.
To address the enormous educational needs in quality and safety for future physicians, a cadre of expert teachers in quality and safety will need to be developed. In collaboration with the Alliance for Academic Internal Medicine (AAIM), SHM is developing a Quality and Safety Educators Academy which will target academic hospitalists and other medical educators interested in developing advanced skills in quality improvement and patient safety education.
Assessment of Competence
An essential component of a rigorous faculty development program or medical education initiative is the assessment of whether these endeavors are achieving their stated aims. Published literature provides examples of useful assessment methods applicable to the HQPS Competencies. Knowledge in several areas of HQPS competence may be assessed with the use of multiple choice tests.34, 35 Knowledge of quality improvement methods may be assessed using the Quality Improvement Knowledge Application Tool (QIKAT), an instrument in which the learner responds to each of 3 scenarios with an aim, outcome and process measures, and ideas for changes that may result in improved performance.36 Teamwork and communication skills may be assessed using 360‐degree evaluations37–39 and direct observation using behaviorally anchored rating scales.40–43 Objective structured clinical examinations have been used to assess knowledge and skills related to patient safety principles.44, 45 Notably, few studies have rigorously assessed the validity and reliability of tools designed to evaluate competence related to healthcare quality.46 Additionally, to our knowledge, no prior research has evaluated assessment specifically for hospitalists. Thus, the development and validation of new assessment tools based on the HQPS Competencies for learners at each level is a crucial next step in the educational process. Additionally, evaluation of educational initiatives should include analyses of clinical benefit, as the ultimate goal of these efforts is to improve patient care.47, 48
Conclusion
Hospitalists are poised to have a tremendous impact on improving the quality of care for hospitalized patients. The lack of training in quality improvement in traditional medical education programs, in which most current hospitalists were trained, can be overcome through appropriate use of the HQPS Competencies. Formal incorporation of the HQPS Competencies into professional development programs, and innovative educational initiatives and curricula, will help provide current hospitalists and the next generations of hospitalists with the needed skills to be successful.
- Crossing the Quality Chasm: A New Health System for the Twenty‐first Century. Washington, DC: Institute of Medicine; 2001.
- Care in U.S. hospitals—the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265–274.
- Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA. 2003;290(14):1868–1874.
- Hospital Compare—A quality tool provided by Medicare. Available at: http://www.hospitalcompare.hhs.gov/. Accessed April 23, 2010.
- The Leapfrog Group: Hospital Quality Ratings. Available at: http://www.leapfroggroup.org/cp. Accessed April 30, 2010.
- Why Not the Best? A Healthcare Quality Improvement Resource. Available at: http://www.whynotthebest.org/. Accessed April 30, 2010.
- The Joint Commission: Facts about ORYX for hospitals (National Hospital Quality Measures). Available at: http://www.jointcommission.org/accreditationprograms/hospitals/oryx/oryx_facts.htm. Accessed August 19, 2010.
- The Joint Commission: National Patient Safety Goals. Available at: http://www.jointcommission.org/patientsafety/nationalpatientsafetygoals/. Accessed August 9, 2010.
- Hospital Acquired Conditions: Overview. Available at: http://www.cms.gov/HospitalAcqCond/01_Overview.asp. Accessed April 30, 2010.
- Report to Congress: Plan to Implement a Medicare Hospital Value‐based Purchasing Program. Washington, DC: US Department of Health and Human Services, Center for Medicare and Medicaid Services; 2007.
- Unmet Needs: Teaching Physicians to Provide Safe Patient Care. Boston, MA: Lucian Leape Institute at the National Patient Safety Foundation; 2010.
- Patient safety education at U.S. and Canadian medical schools: results from the 2006 Clerkship Directors in Internal Medicine survey. Acad Med. 2009;84(12):1672–1676.
- Fulfilling the promise of hospital medicine: tailoring internal medicine training to address hospitalists' needs. J Gen Intern Med. 2008;23(7):1110–1115.
- Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247–254.
- Redesigning residency education in internal medicine: a position paper from the Association of Program Directors in Internal Medicine. Ann Intern Med. 2006;144(12):920–926.
- Redesigning training for internal medicine. Ann Intern Med. 2006;144(12):927–932.
- Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1(1):48–56.
- Intermountain Healthcare. 20‐Day Course for Executives. 2001.
- Curriculum Development for Medical Education: A Six‐step Approach. Baltimore, MD: Johns Hopkins Press; 1998.
- Society of Hospital Medicine Quality Improvement Basics. Available at: http://www.hospitalmedicine.org/Content/NavigationMenu/QualityImprovement/QIPrimer/QI_Primer_Landing_Pa.htm. Accessed June 4, 2010.
- American Board of Internal Medicine: Questions and Answers Regarding ABIM's Maintenance of Certification in Internal Medicine With a Focused Practice in Hospital Medicine Program. Available at: http://www.abim.org/news/news/focused‐practice‐hospital‐medicine‐qa.aspx. Accessed August 9, 2010.
- Assessing the needs of residency program directors to meet the ACGME general competencies. Acad Med. 2002;77(7):750.
- Accreditation Council for Graduate Medical Education and Institute for Healthcare Improvement 90‐Day Project. Involving Residents in Quality Improvement: Contrasting "Top‐Down" and "Bottom‐Up" Approaches. Chicago, IL: ACGME; 2008.
- Teaching internal medicine residents quality improvement techniques using the ABIM's practice improvement modules. J Gen Intern Med. 2008;23(7):927–930.
- A self‐instructional model to teach systems‐based practice and practice‐based learning and improvement. J Gen Intern Med. 2008;23(7):931–936.
- Creating a quality improvement elective for medical house officers. J Gen Intern Med. 2004;19(8):861–867.
- Hospital medicine fellowships: works in progress. Am J Med. 2006;119(1):72.e1–e7.
- Web‐based education in systems‐based practice: a randomized trial. Arch Intern Med. 2007;167(4):361–366.
- A self‐instructional model to teach systems‐based practice and practice‐based learning and improvement. J Gen Intern Med. 2008;23(7):931–936.
- The quality improvement knowledge application tool: an instrument to assess knowledge application in practice‐based learning and improvement. J Gen Intern Med. 2003;18(suppl 1):250.
- Effect of multisource feedback on resident communication skills and professionalism: a randomized controlled trial. Arch Pediatr Adolesc Med. 2007;161(1):44–49.
- Reliability of a 360‐degree evaluation to assess resident competence. Am J Phys Med Rehabil. 2007;86(10):845–852.
- Pilot study of a 360‐degree assessment instrument for physical medicine. 82(5):394–402.
- Anaesthetists' non‐technical skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth. 2003;90(5):580–588.
- The Mayo high performance teamwork scale: reliability and validity for evaluating key crew resource management skills. Simul Healthc. 2007;2(1):4–10.
- Reliability of a revised NOTECHS scale for use in surgical teams. Am J Surg. 2008;196(2):184–190.
- Observational teamwork assessment for surgery: construct validation with expert versus novice raters. Ann Surg. 2009;249(6):1047–1051.
- A patient safety objective structured clinical examination. J Patient Saf. 2009;5(2):55–60.
- The Objective Structured Clinical Examination as an educational tool in patient safety. Jt Comm J Qual Patient Saf. 2007;33(1):48–53.
- Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review. Acad Med. 2009;84(3):301–309.
- Effectiveness of teaching quality improvement to clinicians: a systematic review. JAMA. 2007;298(9):1023–1037.
- Methodological rigor of quality improvement curricula for physician trainees: a systematic review and recommendations for change. Acad Med. 2009;84(12):1677–1692.
- Crossing the Quality Chasm: A New Health System for the Twenty‐first Century.Washington, DC:Institute of Medicine;2001.
- Care in U.S. hospitals—the Hospital Quality Alliance program.N Engl J Med.2005;353(3):265–274. , , , .
- Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization.JAMA.2003;290(14):1868–1874. , .
- Hospital Compare—A quality tool provided by Medicare. Available at: http://www.hospitalcompare.hhs.gov/. Accessed April 23,2010.
- The Leapfrog Group: Hospital Quality Ratings. Available at: http://www.leapfroggroup.org/cp. Accessed April 30,2010.
- Why Not the Best? A Healthcare Quality Improvement Resource. Available at: http://www.whynotthebest.org/. Accessed April 30,2010.
- The Joint Commission: Facts about ORYX for hospitals (National Hospital Quality Measures). Available at: http://www.jointcommission.org/accreditationprograms/hospitals/oryx/oryx_facts.htm. Accessed August 19,2010.
- The Joint Commission: National Patient Safety Goals. Available at: http://www.jointcommission.org/patientsafety/nationalpatientsafetygoals/. Accessed August 9,2010.
- Hospital Acquired Conditions: Overview. Available at: http://www.cms.gov/HospitalAcqCond/01_Overview.asp. Accessed April 30,2010.
- Report to Congress:Plan to Implement a Medicare Hospital Value‐based Purchasing Program. Washington, DC: US Department of Health and Human Services, Center for Medicare and Medicaid Services;2007.
- Unmet Needs: Teaching Physicians to Provide Safe Patient Care.Boston, MA:Lucian Leape Institute at the National Patient Safety Foundation;2010.
- Patient safety education at U.S. and Canadian medical schools: results from the 2006 Clerkship Directors in Internal Medicine survey.Acad Med.2009;84(12):1672–1676. , , , , .
- Fulfilling the promise of hospital medicine: tailoring internal medicine training to address hospitalists' needs.J Gen Intern Med.2008;23(7):1110–1115. , , , , .
- Hospitalists' perceptions of their residency training needs: results of a national survey.Am J Med.2001;111(3):247–254. , , , .
- Redesigning residency education in internal medicine: a position paper from the Association of Program Directors in Internal Medicine.Ann Intern Med.2006;144(12):920–926. , , , , .
- Redesigning training for internal medicine.Ann Intern Med.2006;144(12):927–932. , , .
- Core competencies in hospital medicine: development and methodology.J Hosp Med.2006;1(1):48–56. , , , , .
- Intermountain Healthcare. 20‐Day Course for Executives 2001.
- Curriculum Development for Medical Education: A Six‐step Approach.Baltimore, MD:Johns Hopkins Press;1998. , , , .
- Society of Hospital Medicine Quality Improvement Basics. Available at: http://www.hospitalmedicine.org/Content/NavigationMenu/QualityImprovement/QIPrimer/QI_Primer_Landing_Pa.htm. Accessed June 4,2010.
- American Board of Internal Medicine: Questions and Answers Regarding ABIM's Maintenance of Certification in Internal Medicine With a Focused Practice in Hospital Medicine Program. Available at: http://www.abim.org/news/news/focused‐practice‐hospital‐medicine‐qa.aspx. Accessed August 9,2010.
- Assessing the needs of residency program directors to meet the ACGME general competencies.Acad Med.2002;77(7):750. , , .
- Accreditation Council for Graduate Medical Education and Institute for Healthcare Improvement 90‐Day Project. Involving Residents in Quality Improvement: Contrasting “Top‐Down” and “Bottom‐Up” Approaches.Chicago, IL;ACGME;2008. .
- Teaching internal medicine residents quality improvement techniques using the ABIM's practice improvement modules.J Gen Intern Med.2008;23(7):927–930. , , , .
- A self‐instructional model to teach systems‐based practice and practice‐based learning and improvement.J Gen Intern Med.2008;23(7):931–936. , , , , .
- Creating a quality improvement elective for medical house officers.J Gen Intern Med.2004;19(8):861–867. , , , , .
- Hospital medicine fellowships: works in progress.Am J Med.2006;119(1):72.e1‐e7. , , , .
- Web‐based education in systems‐based practice: a randomized trial.Arch Intern Med.2007;167(4):361–366. , , , .
- A self‐instructional model to teach systems‐based practice and practice‐based learning and improvement.J Gen Intern Med.2008;23(7):931–936. , , , , .
- The quality improvement knowledge application tool: an instrument to assess knowledge application in practice‐based learning and improvement.J Gen Intern Med.2003;18(suppl 1):250. , , , .
- Effect of multisource feedback on resident communication skills and professionalism: a randomized controlled trial.Arch Pediatr Adolesc Med.2007;161(1):44–49. , , , et al.
- Reliability of a 360‐degree evaluation to assess resident competence.Am J Phys Med Rehabil.2007;86(10):845–852. , .
- Pilot study of a 360‐degree assessment instrument for physical medicine 82(5):394–402. , , , .
- Anaesthetists' non‐technical skills (ANTS): evaluation of a behavioural marker system.Br J Anaesth.2003;90(5):580–588. , , , , , .
- The Mayo high performance teamwork scale: reliability and validity for evaluating key crew resource management skills.Simul Healthc.2007;2(1):4–10. , , , et al.
- Reliability of a revised NOTECHS scale for use in surgical teams.Am J Surg.2008;196(2):184–190. , , , , , .
- Observational teamwork assessment for surgery: construct validation with expert versus novice raters.Ann Surg.2009;249(6):1047–1051. , , , , , .
- A patient safety objective structured clinical examination.J Patient Saf.2009;5(2):55–60. , , , , , .
- The Objective Structured Clinical Examination as an educational tool in patient safety.Jt Comm J Qual Patient Saf.2007;33(1):48–53. , .
- Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review.Acad Med.2009;84(3):301–309. , , .
- Effectiveness of teaching quality improvement to clinicians: a systematic review.JAMA.2007;298(9):1023–1037. , , , , , .
- Methodological rigor of quality improvement curricula for physician trainees: a systematic review and recommendations for change.Acad Med.2009;84(12):1677–1692. , , , , .
Transitions of Care Consensus Policy Statement
Studies of the transition of care between inpatient and outpatient settings have shown that there are significant patient safety and quality deficiencies in our current system. The transition from the hospital setting to the outpatient setting has been more extensively studied than the transition from the outpatient setting to the inpatient setting. One prospective cohort study of 400 patients found that 1 in 5 patients discharged from the hospital to home experienced an adverse event, defined as an injury resulting from medical management rather than the underlying disease, within 3 weeks of discharge.1 This study also concluded that 66% of these were drug‐related adverse events, many of which could have been avoided or mitigated. Another prospective cross‐sectional study of 2644 patient discharges found that approximately 40% of the patients had pending test results at the time of discharge and that 10% of these required some action, yet the outpatient physicians and patients were unaware of these results.2 Medication discrepancies have also been shown to be prevalent: in 1 prospective observational study of 375 patients, 14% of elderly patients had 1 or more medication discrepancies, and 14% of those patients with medication discrepancies were rehospitalized within 30 days versus 6% of the patients who did not experience a medication discrepancy.3 A recent review of the literature cited improving transitional care as a key area of opportunity for improving postdischarge care.4
Lack of communication has clearly been shown to adversely affect postdischarge care transitions.5 A recent summary of the literature by a Society of Hospital Medicine (SHM)/Society of General Internal Medicine (SGIM) task force found that direct communication between hospital physicians and primary care physicians occurs infrequently (in 3%‐20% of cases studied), and the availability of a discharge summary at the first postdischarge visit is low (12%‐34%) and does not improve greatly even after 4 weeks (51%‐77%); this affects the quality of care in approximately 25% of follow‐up visits.5 This systematic review of the literature also found that discharge summaries often lack important information such as diagnostic test results, the treatment or hospital course, discharge medications, test results pending at discharge, patient or family counseling, and follow‐up plans.
However, the lack of studies of the communication between ambulatory physicians and hospital physicians prior to admission or during emergency department (ED) visits does not imply that this communication is not equally important and essential to high‐quality care. According to the Centers for Disease Control and Prevention, the greatest source of hospital admissions in many institutions is the ED. More than 115 million visits were made to the nation's approximately 4828 EDs in 2005, and about 85.2% of ED visits ended in discharge.6 The ED is also the point of re‐entry into the system for individuals who may have had an adverse outcome linked to a prior hospitalization.6 Communication between hospital physicians and primary care physicians must be established to create a loop of continuous care and diminish morbidity and mortality at this critical transition point.
While transitions can be a risky period for patient safety, observational studies suggest there are also benefits to transitions: a new physician may notice something overlooked by the current caregivers.7–12 Another factor contributing to the challenges of care transitions is the lack of a single clinician or clinical entity taking responsibility for coordination across the continuum of the patient's overall healthcare, regardless of setting.13 Studies indicate that a relationship with a medical home is associated with better health on both the individual and population levels, with lower overall costs of care, and with reductions in disparities in health between socially disadvantaged subpopulations and more socially advantaged populations.14 Several medical societies have addressed this issue, including the American College of Physicians (ACP), SGIM, the American Academy of Family Physicians, and the American Academy of Pediatrics. They have proposed the concept of the medical home or patient‐centered medical home, which calls for clinicians to assume responsibility for coordinating their patients' care across settings and for the healthcare system to value and reimburse clinicians for this patient‐centered and comprehensive method of practice.15–17
Finally, patients and their families or caregivers have an important role to play in transitions of care. Several observational and cross‐sectional studies have shown that patients and their caregivers and families express significant feelings of anxiety during care transitions. This anxiety can be caused by a lack of understanding of and preparation for their self‐care role in the next care setting, confusion due to conflicting advice from different practitioners, and a sense of abandonment attributable to the inability to contact an appropriate healthcare practitioner for guidance; patients also report an overall disregard for their preferences and input into the design of the care plan.18–20 Clearly, there is room for improvement in all these areas of the inpatient and outpatient care transition, and the Transitions of Care Consensus Conference (TOCCC) attempted to address these areas by developing standards for the transition of care that also harmonize with the work of the Stepping Up to the Plate (SUTTP) Alliance of the American Board of Internal Medicine (ABIM) Foundation.21 In addition, other important stakeholders are addressing this topic and actively working to improve communication and continuity in care, including the Centers for Medicare and Medicaid Services (CMS) and the National Quality Forum (NQF). CMS recently developed the Continuity Assessment Record & Evaluation (CARE) tool, a data collection instrument designed to be a standardized, interoperable, common assessment tool to capture key patient characteristics that will provide information related to resource utilization, clinical outcomes, and postdischarge disposition. The NQF held a national forum on care coordination in the spring of 2008.
In summary, it is clear that there are qualitative and quantitative deficiencies in transitions of care between the inpatient and outpatient setting that are affecting patient safety and experience with care. The transition from the inpatient setting to the outpatient setting has been more extensively studied, and this body of literature has underscored for the TOCCC several important areas in need of guidance and improvement. Because of this, the scope of application of this document should initially emphasize inpatient‐to‐outpatient transitions as a first step in learning how to improve these processes. However, the transition from the outpatient setting to the inpatient setting also is a clear priority. Because the needs for transfer of information, authority, and responsibility may be different in these situations, a second phase of additional work to develop principles to guide these transitions should be undertaken as quickly as possible. Experience gained in applying these principles to inpatient‐to‐outpatient transitions might usefully inform such work.
Communication among providers and with the patients and their families arose as a clear priority. Medication discrepancies, pending tests, and unknown diagnostic or treatment plans have an immediate impact on patients' health and outcomes. The TOCCC discussed what elements should be among the standard pieces of information exchanged among providers during these transition points. The dire need for coordination of care or a coordinating clinician/medical home became a clear theme in the deliberations of the TOCCC. Most importantly, the role of the patients and their families/caregivers in their continuing care is apparent, and the TOCCC felt this must be an integral part of any principles or standards for transitions of care.
Methods
In the fall/winter of 2006, the executive committees of ACP, SGIM, and SHM agreed to jointly develop a policy statement on transitions of care. Transitions of care specifically between the inpatient and outpatient settings were selected as an ideal topic for collaboration, as the 3 societies represent the continuum of care for internal medicine within these settings. To accomplish this, the 3 organizations decided to convene a consensus conference to develop consensus guidelines and standards concerning transitions between inpatient and outpatient settings through a multi‐stakeholder process. A steering committee was convened with representatives from ACP, SGIM, SHM, the Agency for Healthcare Research and Quality (AHRQ), ABIM, and the American Geriatrics Society (AGS). The steering committee developed the agenda and invitee list for the consensus conference. After the conference was held, the steering committee was expanded to include representation from the American College of Emergency Physicians (ACEP) and the Society for Academic Emergency Medicine (SAEM).
During the planning stages of the TOCCC, the steering committee became aware of the SUTTP Alliance of the ABIM Foundation. The SUTTP Alliance has representation from medical specialties such as internal medicine and its subspecialties, family medicine, and surgery. The alliance was formed in 2006 and has been working on care coordination across multiple settings and specialties. The SUTTP Alliance had developed a set of principles and standards for care transitions and agreed to provide the draft document to the TOCCC for review, input, and further development and refinement.
Recommendations on Principles and Standards for Managing Transitions in Care Between the Inpatient and Outpatient Settings from ACP, SGIM, SHM, AGS, ACEP, and SAEM
The SUTTP Alliance presented a draft document entitled Principles and Standards for Managing Transitions in Care. In this document, the SUTTP Alliance proposes 5 principles and 8 standards for effective care transitions. A key element of the conference was a presentation by NQF on how to move from principles to standards and eventually to measures. This presentation provided the TOCCC with the theoretical underpinnings for the discussion of these principles and standards and how the TOCCC would provide input on them. The presentation provided an outline for the flow from principles to measures. First, there needs to be a framework that provides guiding principles for what we would like to measure and eventually report. From those principles, a set of preferred practices or standards are developed; the standards are more granular and allow for more specificity in describing the desired practice or outcome and its elements. Standards then provide a roadmap for identification and development of performance measures. With this framework in mind, the TOCCC then discussed in detail the SUTTP principles and standards.
The 5 principles for effective care transitions developed by the SUTTP Alliance are as follows:

- Accountability.
- Communication: clear and direct communication of treatment plans and follow‐up expectations.
- Timely feedback and feed‐forward of information.
- Involvement of the patient and family member, unless inappropriate, in all steps.
- Respect of the hub of coordination of care.
The TOCCC re‐affirmed these principles and added 4 additional principles to this list. Three of the new principles were statements within the 8 standards developed by the SUTTP, but when taking into consideration the framework for the development of principles into standards, the TOCCC felt that the statements were better represented as principles. They are as follows:

- All patients and their families/caregivers should have and should be able to identify their medical home or coordinating clinician (ie, practice or practitioner). (This was originally part of the coordinating clinicians standard, and the TOCCC voted to elevate it to a principle.)
- At every point along the transition, the patients and/or their families/caregivers need to know who is responsible for their care at that point and whom to contact and how.
- National standards should be established for transitions in care and should be adopted and implemented at the national and community level through public health institutions, national accreditation bodies, medical societies, medical institutions, and so forth in order to improve patient outcomes and patient safety. (This was originally part of the SUTTP community standards standard, and the TOCCC voted to elevate it to a principle.)
- For monitoring and improving transitions, standardized metrics related to these standards should be used in order to lead to continuous quality improvement and accountability. (This was originally part of the measurement standard, and the TOCCC voted to elevate it to a principle.)
The SUTTP Alliance proposed the following 8 standards for care transitions:

- Coordinating clinicians.
- Care plans.
- Communication infrastructure.
- Standard communication formats.
- Transition responsibility.
- Timeliness.
- Community standards.
- Measurement.
The TOCCC affirmed these standards and through a consensus process added more specificity to most of them and elevated components of some of them to principles, as discussed previously. The TOCCC proposes that the following be merged with the SUTTP standards:
- Coordinating clinicians. Communication and information exchange between the medical home and the receiving provider should occur in an amount of time that will allow the receiving provider to effectively treat the patient. This communication and information exchange should ideally occur whenever patients are at a transition of care (eg, at discharge from the inpatient setting). The timeliness of this communication should be consistent with the patient's clinical presentation and, in the case of a patient being discharged, the urgency of the follow‐up required. Guidelines will need to be developed that address both the timeliness and means of communication between the discharging physician and the medical home. Communication and information exchange between the medical home and other physicians may be in the form of a call, voice mail, fax, or other secure, private, and accessible means, including mutual access to an electronic health record.
The ED represents a unique subset of transitions of care. The potential transition can generally be described as outpatient to outpatient or outpatient to inpatient, depending on whether or not the patient is admitted to the hospital. The outpatient‐to‐outpatient transition can also encompass a number of potential variations. Patients with a medical home may be referred to the ED by the medical home, or they may self‐refer. A significant number of patients do not have a physician and self‐refer to the ED. The disposition from the ED, either outpatient to outpatient or outpatient to inpatient, is similarly represented by a number of variables. Discharged patients may or may not have a medical home, may or may not need a specialist, and may or may not require urgent (<24 hours) follow‐up. Admitted patients may or may not have a medical home and may or may not require specialty care. This variety of variables precludes a single approach to ED transition of care coordination. The determination of which scenarios will be appropriate for the development of standards (coordinating clinicians and transition responsibility) will require further contributions from ACEP and SAEM and review by the steering committee.
- Care plans/transition record. The TOCCC discussed what components should be included in an ideal transition record and agreed on the following elements:

- Principal diagnosis and problem list.
- Medication list (reconciliation), including over‐the‐counter medications/herbals, allergies, and drug interactions.
- Emergency plan and contact number and person.
- Treatment and diagnostic plan.
- Prognosis and goals of care.
- Test results/pending results.
- Clear identification of the medical home and/or transferring coordinating physician/institution.
- Patient's cognitive status.
- Advance directives, power of attorney, and consent.
- Planned interventions, durable medical equipment, wound care, and so forth.
- Assessment of caregiver status.

The TOCCC also agreed that there is a minimal set of data elements that should always be part of the transition record and suggested that this minimal data set be part of an initial implementation of this standard. That list includes the following:

- Principal diagnosis and problem list.
- Medication list (reconciliation), including over‐the‐counter medications/herbals, allergies, and drug interactions.
- Clear identification of the medical home/transferring coordinating physician/institution and the contact information.
- Patient's cognitive status.
- Test results/pending results.

The TOCCC also added a new standard under this heading: Patients and/or their families/caregivers must receive, understand, and be encouraged to participate in the development of the transition record, which should take into consideration patients' health literacy and insurance status and be culturally sensitive.
- Communication infrastructure. All communications between providers and between providers and patients and families/caregivers need to be secure, private, compliant with the Health Insurance Portability and Accountability Act, and accessible to patients and those practitioners who care for them. Communication needs to be 2‐way with an opportunity for clarification and feedback. Each sending provider needs to provide the contact name and number of an individual who can respond to questions or concerns. The content of transferred information needs to include a core standardized data set. This information needs to be transferred as a living database; that is, it is created only once, and each subsequent provider then only needs to update, validate, or modify the information. Patient information should be available to the provider prior to the patient's arrival. Information transfer needs to adhere to national data standards. Patients should be provided with a medication list that is accessible (paper or electronic), clear, and dated.
- Standard communication formats. Communities need to develop standard data transfer forms (templates and transmission protocols). Access to a patient's medical history needs to be available on a current and ongoing basis, with the ability to modify information as a patient's condition changes. Patients, families, and caregivers should have access to their information ("nothing about me without me"). A section of the transfer record should be devoted to communicating a patient's preferences, priorities, goals, and values (eg, the patient does not want intubation).
- Transition responsibility. The sending provider/institution/team at the clinical organization maintains responsibility for the care of the patient until the receiving clinician/location confirms that the transfer and assumption of responsibility is complete (within a reasonable timeframe for the receiving clinician to receive the information; ie, transfers that occur in the middle of the night can be communicated during standard working hours). The sending provider should be available for clarification of issues of care within a reasonable timeframe after the transfer has been completed, and this timeframe should be based on the conditions of the transfer settings. The patient should be able to identify the responsible provider. In the case of patients who do not have an ongoing ambulatory care provider or whose ambulatory care provider has not assumed responsibility, hospital‐based clinicians will not be required to assume responsibility for the care of these patients once they are discharged.
- Timeliness. Timeliness of feedback and feed‐forward of information from a sending provider to a receiving provider should be contingent on 4 factors:

- Transition settings.
- Patient circumstances.
- Level of acuity.
- Clear transition responsibility.

This information should be available at the time of the patient encounter.
- Community standards. Medical communities/institutions must demonstrate accountability for transitions of care by adopting national standards, and processes should be established to promote effective transitions of care.
- Measurement. For monitoring and improving transitions, standardized metrics related to these standards should be used. These metrics/measures should be evidence‐based, address documented gaps, and have a demonstrated impact on improving care (complying with performance measure standards) whenever feasible. Results from measurements using standardized metrics must lead to continuous improvement of the transition process. The validity, reliability, cost, and impact, including unintended consequences, of these measures should be assessed and re‐evaluated.
All these standards should be applied with special attention to the various transition settings and should be appropriate to each transition setting. Measure developers will need to take this into account when developing measures based on these proposed standards.
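As an illustration, the minimal transition data set described under the care plans/transition record standard could be represented as a simple structured record with a completeness check before handoff. This is only a sketch: the field names, the `is_complete` rule, and the example values below are hypothetical, not part of the consensus document.

```python
from dataclasses import dataclass, field

@dataclass
class TransitionRecord:
    """Sketch of the TOCCC minimal transition data set.
    Field names are illustrative, not a mandated schema."""
    principal_diagnosis: str
    problem_list: list
    medications: list            # reconciled list, incl. OTC/herbals and allergies
    coordinating_clinician: str  # medical home / transferring institution + contact
    cognitive_status: str
    test_results: list = field(default_factory=list)
    pending_results: list = field(default_factory=list)

    def is_complete(self) -> bool:
        # All 5 minimal elements must be populated before the handoff is
        # marked ready to send; test results may legitimately still be
        # pending, so they are not required here.
        return all([self.principal_diagnosis, self.problem_list,
                    self.medications, self.coordinating_clinician,
                    self.cognitive_status])
```

A sending system could, for example, refuse to confirm a transfer until `is_complete()` holds, mirroring the transition responsibility standard.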
The TOCCC also went through a consensus prioritization exercise to rank‐order the consensus standards. All meeting participants were asked to rank their top 3 priorities of the 7 standards, giving a numeric score of 1 for their highest priority, a score of 2 for their second highest priority, and a score of 3 for their third highest priority. Summary scores were calculated, and the standards were rank‐ordered from the lowest summary score to the highest. The TOCCC recognizes that full implementation of all of these standards may not be feasible and that these standards may be implemented on a stepped or incremental basis. This prioritization can assist in deciding which of these to implement. The results of the prioritization exercise are as follows:
- All transitions must include a transition record.
- Transition responsibility.
- Coordinating clinicians.
- Patient and family involvement and ownership of the transition record.
- Communication infrastructure.
- Timeliness.
- Community standards.
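The summary-score procedure used in the prioritization exercise can be sketched in a few lines. The ballots below are invented for illustration and do not reflect the actual TOCCC votes; the sketch also simplifies by omitting standards that receive no votes rather than scoring them.

```python
from collections import defaultdict

def rank_standards(ballots):
    """Each ballot lists a participant's top 3 standards in priority
    order; position 1 contributes 1 point, position 2 contributes 2,
    and position 3 contributes 3. Standards are then ordered from the
    lowest summary score to the highest."""
    scores = defaultdict(int)
    for ballot in ballots:
        for position, standard in enumerate(ballot, start=1):
            scores[standard] += position
    return sorted(scores, key=scores.get)

# Hypothetical ballots from 2 participants:
ballots = [
    ["transition record", "transition responsibility", "coordinating clinicians"],
    ["transition record", "coordinating clinicians", "timeliness"],
]
# "transition record" sums to 1 + 1 = 2, the lowest score, so it ranks first.
```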
Future Challenges
In addition to its work on the principles and standards, the TOCCC identified 6 further challenges, which are described below.
Electronic Health Record
There was disagreement in the group concerning the extent to which electronic health records would resolve the existing issues involved in poor transfers of care. However, the group did concur on 2 points: established transition standards should not be contingent on the existence of an electronic health record, and some universally, nationally defined set of core transfer information should be the short‐term target of efforts to establish electronic transfers of information.
Use of a Transition Record
There should be a core data set (much smaller than a complete health record or discharge summary) that goes to the patient and the receiving provider, and this data set should include items in the core record described previously.
Medical Home
There was considerable discussion about the benefits and challenges of establishing a medical home and inculcating the concept into delivery and payment structures. The group was favorable to the concept; however, because the medical home is not yet a nationally defined standard, care transition standards should not be contingent on the existence of a medical home. Wording of future standards should use a general term for the clinician coordinating care across sites in addition to the term medical home. Using both terms will acknowledge the movement toward the medical home without requiring the adoption of medical home practices to refine and implement quality measures for care transitions.
Pay for Performance
The group strongly agreed that behaviors and clinical practices are influenced by payment structures. Therefore, they agreed that a new principle should be established to advocate for changes in reimbursement practices to reward safe, complete transfers of information and care. However, the development of standards and measures should move forward on the basis of the current reimbursement practices and without assumptions of future changes.
Underserved/Disadvantaged Populations
Care transition standards and measures should be the same for all economic groups with careful attention that lower socioeconomic groups are not forgotten or unintentionally disadvantaged, including the potential for cherry‐picking. It should be noted that underserved populations may not always have a medical home because of their disadvantaged access to the health system and providers. Moreover, clinicians who care for underserved/disadvantaged populations should not be penalized by standards that assume continuous clinical care and ongoing relationships with patients who may access the health system only sporadically.
Need for Patient‐Centered Approaches
The group agreed that across all principles and standards previously established by the SUTTP coalition, greater emphasis is needed on patient‐centered approaches to care including, but not limited to, the inclusion of patient and families in care and transition planning, greater access to medical records, and the need for education at the time of discharge regarding self‐care and core transfer information.
Next Steps for the TOCCC
The TOCCC focuses only on the transitions between the inpatient and outpatient settings and does not address the equally important transitions between many other different care settings, such as the transition from a hospital to a nursing home or rehabilitation facility. The intent of the TOCCC is to provide this document to national measure developers such as the Physician Consortium for Performance Improvement and others in order to guide measure development and ultimately lead to improvements in quality and safety in care transitions.
Appendix
Conference Description
The TOCCC was held over 2 days on July 11 to 12, 2007 at ACP headquarters in Philadelphia, PA. There were 51 participants representing over 30 organizations. Participating organizations included medical specialty societies from internal medicine as well as family medicine and pediatrics, governmental agencies such as AHRQ and CMS, performance measure developers such as the National Committee for Quality Assurance and the American Medical Association Physician Consortium on Performance Improvement, nurse associations such as the Visiting Nurse Associations of America and Home Care and Hospice, pharmacist groups, and patient groups such as the Institute for Family‐Centered Care. The morning of the first day was dedicated to presentations covering the AHRQ Stanford Evidence‐Based Practice Center's evidence report on care coordination, the literature concerning transitions of care, the continuum of measurement from principles to standards to measures, and the SUTTP document of principles. The attendees then split into breakout groups that discussed the principles and standards developed by the SUTTP and refined and/or revised them. All discussions were summarized and agreed on by consensus and were presented by the breakout groups to the full conference attendees. The second day was dedicated to reviewing the work of the breakout groups and further refinement of the principles and standards through a group consensus process. Once this was completed, the attendees then prioritized the standards with a group consensus voting process. Each attendee was given 1 vote and attached a rating of 1 for highest priority and 3 for lowest priority to the standards. Summary scores were then calculated, and the standards were ranked from those scores.
The final activity of the conference was to discuss some of the overarching themes and environmental factors that could influence the acceptance, endorsement, and implementation of the standards developed. The TOCCC adjourned with the tasks of forwarding its conclusions to the SUTTP Alliance and developing a policy document to be reviewed by other stakeholders not well represented at the conference. Two such pivotal organizations were ACEP and SAEM, which were added to the steering committee after the conference. Subsequently, ACP, SGIM, SHM, AGS, ACEP, and SAEM approved the summary document, and they will forward it to the other participating organizations for possible endorsement and to national developers of measures and standards for use in performance measurement development.
Appendix
Conflict of Interest Statements
This is a summary of the conflict of interest statements for faculty, authors, members of the planning committees, and staff (ACP, SHM, and SGIM).
The following members of the steering (or planning) committee and staff of the TOCCC have declared a conflict of interest:
-
Dennis Beck, MD, FACEP (ACEP representative; President and Chief Executive Officer of Beacon Medical Services): 100 units of stock options/holdings in Beacon Hill Medical Services.
-
Tina Budnitz, MPH (SHM staff; Senior Advisor for Quality Initiatives, SHM): employment by SHM.
-
Eric S. Holmboe, MD (ABIM representative; Senior Vice President of Quality Research and Academic Affairs, ABIM): employment by ABIM.
-
Vincenza Snow, MD, FACP (ACP staff; Director of Clinical Programs and Quality of Care, ACP): research grants from the Centers for Disease Control, Atlantic Philanthropies, Novo Nordisk, Bristol Myers Squibb, Boehringer Ingelheim, Pfizer, United Healthcare Foundation, and Sanofi Pasteur.
-
Laurence D. Wellikson, MD, FACP (SHM staff; Chief Executive Officer of SHM): employment by SHM.
-
Mark V. Williams, MD, FACP (cochair and SHM representative; Editor in Chief of the Journal of Hospital Medicine and former President of SHM): membership in SHM.
The following members of the steering (or planning) committee and staff of the TOCCC have declared no conflict of interest:
-
David Atkins, MD, MPH [AHRQ representative; Associate Director of Quality Enhancement Research Initiative, Department of Veteran Affairs, Office of Research and Development, Health Services Research & Development (124)].
-
Doriane C. Miller, MD (cochair and SGIM representative; Associate Division Chief of General Internal Medicine, Stroger Hospital of Cook County).
-
Jane Potter, MD (AGS representative; Professor and Chief of Geriatrics, University of Nebraska Medical Center).
-
Robert L. Wears, MD, FACEP (SAEM representative; Professor of the Department of Emergency Medicine, University of Florida).
-
Kevin B. Weiss, MD, MPH, MS, FACP (chair and ACP representative; Chief Executive Officer of the American Board of Medical Specialties).
- The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med. 2003;138(3):161–167.
- Patient safety concerns arising from test results that return after hospital discharge. Ann Intern Med. 2005;143(2):121–128.
- Posthospital medication discrepancies: prevalence and contributing factors. Arch Intern Med. 2005;165(16):1842–1847.
- Addressing post‐discharge adverse events: a neglected area. Jt Comm J Qual Patient Saf. 2008;34(2):85–97.
- Deficits in communication and information transfer between hospital‐based and primary care physicians: implications for patient safety and continuity of care. JAMA. 2007;297(8):831–841.
- National Hospital Ambulatory Medical Care Survey: 2005 Emergency Department Summary. Hyattsville, MD: National Center for Health Statistics; 2007. Advance Data from Vital and Health Statistics; vol 386.
- Do short breaks increase or decrease anesthetic risk? J Clin Anesth. 1989;1(3):228–231.
- Critical incidents associated with intraoperative exchanges of anesthesia personnel. Anesthesiology. 1982;56(6):456–461.
- Shift changes among emergency physicians: best of times, worst of times. In: Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting. Denver, CO: Human Factors and Ergonomics Society; 2003:1420–1423.
- Transitions in care: signovers in the emergency department. In: Proceedings of the Human Factors and Ergonomics Society 48th Annual Meeting. New Orleans, LA: Human Factors and Ergonomics Society; 2004:1625–1628.
- Conceptual framework for the safety of handovers. In: Henriksen K, ed. Advances in Patient Safety. Rockville, MD: Agency for Healthcare Research and Quality/Department of Defense; 2005:309–321.
- Medical errors and emergency medicine: will the difficult questions be asked, and answered? Acad Emerg Med. 2003;10(8):910–911.
- Lost in transition: challenges and opportunities for improving the quality of transitional care. Ann Intern Med. 2004;141(7):533–536.
- The medical home, access to care, and insurance: a review of evidence. Pediatrics. 2004;113(5 suppl):1493–1498.
- Blue Ribbon Panel of the Society of General Internal Medicine. Redesigning the practice model for general internal medicine. A proposal for coordinated care: a policy monograph of the Society of General Internal Medicine. J Gen Intern Med. 2007;22(3):400–409.
- Medical Home Initiatives for Children with Special Needs Project Advisory Committee. The medical home. Pediatrics. 2002;110(1 pt 1):184–186.
- American College of Physicians. The advanced medical home: a patient‐centered, physician‐guided model of healthcare. A policy monograph. 2006. http://www.acponline.org/advocacy/where_we_stand/policy/adv_med.pdf. Accessed March 13, 2009.
- Development and testing of a measure designed to assess the quality of care transitions. Int J Integr Care. 2002;2:e02.
- Carepartner experiences with hospital care. Med Care. 1999;37(1):33–38.
- Assessing the quality of preparation for post hospital care from the patient's perspective: the care transitions measure. Med Care. 2005;43(3):246–255.
- American Board of Internal Medicine Foundation. Stepping Up to the Plate Alliance. Principles and standards for managing transitions in care (in press). Available at http://www.abimfoundation.org/publications/pdf_issue_brief/F06‐05‐2007_6.pdf. Accessed March 13, 2009.
Studies of the transition of care between inpatient and outpatient settings have shown that there are significant patient safety and quality deficiencies in our current system. The transition from the hospital setting to the outpatient setting has been more extensively studied than the transition from the outpatient setting to the inpatient setting. One prospective cohort study of 400 patients found that 1 in 5 patients discharged from the hospital to home experienced an adverse event, which was defined as an injury resulting from medical management rather than the underlying disease, within 3 weeks of discharge.1 This study also concluded that 66% of these were drug‐related adverse events, many of which could have been avoided or mitigated. Another prospective cross‐sectional study of 2644 patient discharges found that approximately 40% of the patients had pending test results at the time of discharge and that 10% of these required some action, yet the outpatient physicians and patients were unaware of these results.2 Medication discrepancies have also been shown to be prevalent: 1 prospective observational study of 375 patients found that 14% of elderly patients had 1 or more medication discrepancies, and 14% of those patients were rehospitalized within 30 days versus 6% of the patients who did not experience a medication discrepancy.3 A recent review of the literature cited improving transitional care as a key area of opportunity for improving postdischarge care.4
Lack of communication has clearly been shown to adversely affect postdischarge care transitions.5 A recent summary of the literature by a Society of Hospital Medicine (SHM)/Society of General Internal Medicine (SGIM) task force found that direct communication between hospital physicians and primary care physicians occurs infrequently (in 3%‐20% of cases studied), and the availability of a discharge summary at the first postdischarge visit is low (12%‐34%) and does not improve greatly even after 4 weeks (51%‐77%); this affects the quality of care in approximately 25% of follow‐up visits.5 This systematic review of the literature also found that discharge summaries often lack important information such as diagnostic test results, the treatment or hospital course, discharge medications, test results pending at discharge, patient or family counseling, and follow‐up plans.
However, the lack of studies of the communication between ambulatory physicians and hospital physicians prior to admission or during emergency department (ED) visits does not imply that this communication is not equally important and essential to high‐quality care. According to the Centers for Disease Control, the greatest source of hospital admissions in many institutions is the ED. Over 115,000,000 visits were made to the nation's approximately 4828 EDs in 2005, and about 85.2% of ED visits end in discharge.6 The ED is also the point of re‐entry into the system for individuals who may have had an adverse outcome linked to a prior hospitalization.6 Communication between hospital physicians and primary care physicians must be established to create a loop of continuous care and diminish morbidity and mortality at this critical transition point.
Although transitions can be a risky period for patient safety, observational studies suggest that transitions can also have benefits: a new physician may notice something overlooked by the current caregivers.7‐12 Another factor contributing to the challenges of care transitions is the lack of a single clinician or clinical entity taking responsibility for coordination across the continuum of the patient's overall healthcare, regardless of setting.13 Studies indicate that a relationship with a medical home is associated with better health on both the individual and population levels, with lower overall costs of care, and with reductions in disparities in health between socially disadvantaged subpopulations and more socially advantaged populations.14 Several medical societies have addressed this issue, including the American College of Physicians (ACP), SGIM, the American Academy of Family Physicians, and the American Academy of Pediatrics. They have proposed the concept of the medical home or patient‐centered medical home, which calls for clinicians to assume responsibility for coordinating their patients' care across settings and for the healthcare system to value and reimburse clinicians for this patient‐centered and comprehensive method of practice.15‐17
Finally, patients and their families or caregivers have an important role to play in transitions of care. Several observational and cross‐sectional studies have shown that patients and their caregivers and families express significant feelings of anxiety during care transitions. This anxiety can be caused by a lack of understanding of and preparation for their self‐care role in the next care setting, confusion due to conflicting advice from different practitioners, and a sense of abandonment attributable to the inability to contact an appropriate healthcare practitioner for guidance; patients also report an overall disregard for their preferences and input into the design of the care plan.18‐20 Clearly, there is room for improvement in all these areas of the inpatient and outpatient care transition, and the Transitions of Care Consensus Conference (TOCCC) attempted to address these areas by developing standards for the transition of care that also harmonize with the work of the Stepping up to the Plate (SUTTP) Alliance of the American Board of Internal Medicine (ABIM) Foundation.21 In addition, other important stakeholders are addressing this topic and actively working to improve communication and continuity in care, including the Centers for Medicare and Medicaid Services (CMS) and the National Quality Forum (NQF). CMS recently developed the Continuity Assessment Record & Evaluation (CARE) tool, a data collection instrument designed to be a standardized, interoperable, common assessment tool to capture key patient characteristics that will provide information related to resource utilization, clinical outcomes, and postdischarge disposition. NQF held a national forum on care coordination in the spring of 2008.
In summary, it is clear that there are qualitative and quantitative deficiencies in transitions of care between the inpatient and outpatient setting that are affecting patient safety and experience with care. The transition from the inpatient setting to the outpatient setting has been more extensively studied, and this body of literature has underscored for the TOCCC several important areas in need of guidance and improvement. Because of this, the scope of application of this document should initially emphasize inpatient‐to‐outpatient transitions as a first step in learning how to improve these processes. However, the transition from the outpatient setting to the inpatient setting also is a clear priority. Because the needs for transfer of information, authority, and responsibility may be different in these situations, a second phase of additional work to develop principles to guide these transitions should be undertaken as quickly as possible. Experience gained in applying these principles to inpatient‐to‐outpatient transitions might usefully inform such work.
Communication among providers and with the patients and their families arose as a clear priority. Medication discrepancies, pending tests, and unknown diagnostic or treatment plans have an immediate impact on patients' health and outcomes. The TOCCC discussed what elements should be among the standard pieces of information exchanged among providers during these transition points. The dire need for coordination of care or a coordinating clinician/medical home became a clear theme in the deliberations of the TOCCC. Most importantly, the role of the patients and their families/caregivers in their continuing care is apparent, and the TOCCC felt this must be an integral part of any principles or standards for transitions of care.
Methods
In the fall/winter of 2006, the executive committees of ACP, SGIM, and SHM agreed to jointly develop a policy statement on transitions of care. Transitions of care specifically between the inpatient and outpatient settings were selected as an ideal topic for collaboration for the 3 societies, as they represent the continuum of care for internal medicine within these settings. To accomplish this, the 3 organizations decided to convene a consensus conference to develop consensus guidelines and standards concerning transitions between inpatient and outpatient settings through a multi‐stakeholder process. A steering committee was convened with representatives from ACP, SGIM, SHM, the Agency for Healthcare Research and Quality (AHRQ), ABIM, and the American Geriatrics Society (AGS). The steering committee developed the agenda and invitee list for the consensus conference. After the conference was held, the steering committee was expanded to include representation from the American College of Emergency Physicians (ACEP) and the Society for Academic Emergency Medicine (SAEM).
During the planning stages of the TOCCC, the steering committee became aware of the SUTTP Alliance of the ABIM Foundation. The SUTTP Alliance has representation from medical specialties such as internal medicine and its subspecialties, family medicine, and surgery. The alliance was formed in 2006 and has been working on care coordination across multiple settings and specialties. The SUTTP Alliance had developed a set of principles and standards for care transitions and agreed to provide the draft document to the TOCCC for review, input, and further development and refinement.
Recommendations on Principles and Standards for Managing Transitions in Care Between the Inpatient and Outpatient Settings from ACP, SGIM, SHM, AGS, ACEP, and SAEM
The SUTTP Alliance presented a draft document entitled Principles and Standards for Managing Transitions in Care. In this document, the SUTTP Alliance proposes 5 principles and 8 standards for effective care transitions. A key element of the conference was a presentation by NQF on how to move from principles to standards and eventually to measures. This presentation provided the TOCCC with the theoretical underpinnings for the discussion of these principles and standards and how the TOCCC would provide input on them. The presentation outlined the flow from principles to measures: first, there needs to be a framework that provides guiding principles for what we would like to measure and eventually report; from those principles, a set of preferred practices or standards is developed; the standards are more granular and allow for more specificity in describing the desired practice or outcome and its elements. Standards then provide a roadmap for identification and development of performance measures. With this framework in mind, the TOCCC then discussed in detail the SUTTP principles and standards.
The 5 principles for effective care transitions developed by the SUTTP Alliance are as follows:
-
Accountability.
-
Communication: clear and direct communication of treatment plans and follow‐up expectations.
-
Timely feedback and feed‐forward of information.
-
Involvement of the patient and family member, unless inappropriate, in all steps.
-
Respect of the hub of coordination of care.
The TOCCC re‐affirmed these principles and added 4 additional principles to this list. Three of the new principles were statements within the 8 standards developed by the SUTTP, but when taking into consideration the framework for the development of principles into standards, the TOCCC felt that the statements were better represented as principles. They are as follows:
-
All patients and their families/caregivers should have and should be able to identify their medical home or coordinating clinician (ie, practice or practitioner). (This was originally part of the coordinating clinicians standard, and the TOCCC voted to elevate this to a principle).
-
At every point along the transition, the patients and/or their families/caregivers need to know who is responsible for care at that point and who to contact and how.
-
National standards should be established for transitions in care and should be adopted and implemented at the national and community level through public health institutions, national accreditation bodies, medical societies, medical institutions, and so forth in order to improve patient outcomes and patient safety. (This was originally part of the SUTTP community standards standard, and the TOCCC moved to elevate this to a principle).
-
For monitoring and improving transitions, standardized metrics related to these standards should be used in order to lead to continuous quality improvement and accountability. (This was originally part of the measurement standard, and the TOCCC voted to elevate this to a principle).
The SUTTP Alliance proposed the following 8 standards for care transitions:
-
Coordinating clinicians.
-
Care plans.
-
Communication infrastructure.
-
Standard communication formats.
-
Transition responsibility.
-
Timeliness.
-
Community standards.
-
Measurement.
The TOCCC affirmed these standards and through a consensus process added more specificity to most of them and elevated components of some of them to principles, as discussed previously. The TOCCC proposes that the following be merged with the SUTTP standards:
-
Coordinating clinicians. Communication and information exchange between the medical home and the receiving provider should occur in an amount of time that will allow the receiving provider to effectively treat the patient. This communication and information exchange should ideally occur whenever patients are at a transition of care (eg, at discharge from the inpatient setting). The timeliness of this communication should be consistent with the patient's clinical presentation and, in the case of a patient being discharged, the urgency of the follow‐up required. Guidelines will need to be developed that address both the timeliness and means of communication between the discharging physician and the medical home. Communication and information exchange between the medical home and other physicians may be in the form of a call, voice mail, fax, or other secure, private, and accessible means including mutual access to an electronic health record.
The ED represents a unique subset of transitions of care. The potential transition can generally be described as outpatient to outpatient or outpatient to inpatient, depending on whether or not the patient is admitted to the hospital. The outpatient‐to‐outpatient transition can also encompass a number of potential variations. Patients with a medical home may be referred to the ED by the medical home, or they may self‐refer. A significant number of patients do not have a physician and self‐refer to the ED. The disposition from the ED, either outpatient to outpatient or outpatient to inpatient, is similarly represented by a number of variables. Discharged patients may or may not have a medical home, may or may not need a specialist, and may or may not require urgent (<24 hours) follow‐up. Admitted patients may or may not have a medical home and may or may not require specialty care. This variety of variables precludes a single approach to ED transition of care coordination. The determination of which scenarios will be appropriate for the development of standards (coordinating clinicians and transition responsibility) will require further contributions from ACEP and SAEM and review by the steering committee.
-
Care plans/transition record. The TOCCC discussed what components should be included in an ideal transition record and agreed on the following elements:
-
Principal diagnosis and problem list.
-
Medication list (reconciliation) including over‐the‐counter medications/herbals, allergies, and drug interactions.
-
Emergency plan and contact number and person.
-
Treatment and diagnostic plan.
-
Prognosis and goals of care.
-
Test results/pending results.
-
Clear identification of the medical home and/or transferring coordinating physician/institution.
-
Patient's cognitive status.
-
Advance directives, power of attorney, and consent.
-
Planned interventions, durable medical equipment, wound care, and so forth.
-
Assessment of caregiver status.
The TOCCC also agreed that there is a minimal set of data elements that should always be part of the transition record and suggested that this minimal data set be part of an initial implementation of this standard. That list includes the following:
-
Principal diagnosis and problem list.
-
Medication list (reconciliation) including over‐the‐counter medications/herbals, allergies, and drug interactions.
-
Clear identification of the medical home/transferring coordinating physician/institution and the contact information.
-
Patient's cognitive status.
-
Test results/pending results.
The TOCCC also added a new standard under this heading: Patients and/or their families/caregivers must receive, understand, and be encouraged to participate in the development of the transition record, which should take into consideration patients' health literacy and insurance status and be culturally sensitive.
-
Communication infrastructure. All communications between providers and between providers and patients and families/caregivers need to be secure, private, Health Insurance Portability and Accountability Act‐compliant, and accessible to patients and those practitioners who care for them. Communication needs to be 2‐way with an opportunity for clarification and feedback. Each sending provider needs to provide a contact name and number for an individual who can respond to questions or concerns. The content of transferred information needs to include a core standardized data set. This information needs to be transferred as a living database; that is, it is created only once, and each subsequent provider only needs to update, validate, or modify the information. Patient information should be available to the provider prior to the patient's arrival. Information transfer needs to adhere to national data standards. Patients should be provided with a medication list that is accessible (paper or electronic), clear, and dated.
-
Standard communication formats. Communities need to develop standard data transfer forms (templates and transmission protocols). Access to a patient's medical history needs to be on a current and ongoing basis with the ability to modify information as a patient's condition changes. Patients, families, and caregivers should have access to their information (nothing about me without me). A section on the transfer record should be devoted to communicating a patient's preferences, priorities, goals, and values (eg, the patient does not want intubation).
-
Transition responsibility. The sending provider/institution/team at the clinical organization maintains responsibility for the care of the patient until the receiving clinician/location confirms that the transfer and assumption of responsibility is complete (within a reasonable timeframe for the receiving clinician to receive the information; ie, transfers that occur in the middle of the night can be communicated during standard working hours). The sending provider should be available for clarification on issues of care within a reasonable timeframe after the transfer has been completed, and this timeframe should be based on the conditions of the transfer settings. The patient should be able to identify the responsible provider. In the case of patients who do not have an ongoing ambulatory care provider or whose ambulatory care provider has not assumed responsibility, the hospital‐based clinicians will not be required to assume responsibility for the care of these patients once they are discharged.
-
Timeliness. Timeliness of feedback and feed‐forward of information from a sending provider to a receiving provider should be contingent on 4 factors:
-
Transition settings.
-
Patient circumstances.
-
Level of acuity.
-
Clear transition responsibility.
This information should be available at the time of the patient encounter.
-
Community standards. Medical communities/institutions must demonstrate accountability for transitions of care by adopting national standards, and processes should be established to promote effective transitions of care.
-
Measurement. For monitoring and improving transitions, standardized metrics related to these standards should be used. These metrics/measures should be evidence‐based, address documented gaps, and have a demonstrated impact on improving care (complying with performance measure standards) whenever feasible. Results from measurements using standardized metrics must lead to continuous improvement of the transition process. The validity, reliability, cost, and impact, including unintended consequences, of these measures should be assessed and re‐evaluated.
All these standards should be applied with special attention to the various transition settings and should be appropriate to each transition setting. Measure developers will need to take this into account when developing measures based on these proposed standards.
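To make the minimal transition data set and the "living database" idea concrete, they can be sketched together as a simple typed record. This is a hypothetical illustration under stated assumptions: the class and field names are the author's illustrative choices, not part of any adopted standard, and the `update` method only gestures at the standard's intent that the record be created once and then updated, validated, or modified by each subsequent provider.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Medication:
    # Reconciled entry; over-the-counter and herbal products are included.
    name: str
    dose: str
    over_the_counter: bool = False

@dataclass
class TransitionRecord:
    """One field per element of the minimal transition data set (illustrative)."""
    principal_diagnosis: str
    problem_list: List[str]
    medications: List[Medication]
    allergies: List[str]
    coordinating_clinician: str          # medical home / coordinating physician
    coordinating_clinician_contact: str  # contact information
    cognitive_status: str
    test_results: List[str]
    pending_results: List[str]

    def update(self, **changes):
        """Modify fields in place ('living record': created once, then each
        subsequent provider updates, validates, or modifies it)."""
        for key, value in changes.items():
            if not hasattr(self, key):
                raise AttributeError(f"unknown field: {key}")
            setattr(self, key, value)

# Hypothetical record created at discharge.
record = TransitionRecord(
    principal_diagnosis="Community-acquired pneumonia",
    problem_list=["Pneumonia", "Type 2 diabetes"],
    medications=[Medication("amoxicillin", "500 mg three times daily")],
    allergies=["sulfa drugs"],
    coordinating_clinician="Dr. A. Example (medical home)",
    coordinating_clinician_contact="555-0100",
    cognitive_status="Alert and oriented",
    test_results=["Chest radiograph: right lower lobe infiltrate"],
    pending_results=["Blood culture"],
)
# The receiving provider resolves a pending result rather than re-creating the record.
record.update(pending_results=[],
              test_results=record.test_results + ["Blood culture: no growth"])
```

A real implementation would of course need secure, HIPAA-compliant transport and national data standards for serialization, which this sketch does not attempt.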
Underserved/Disadvantaged Populations
Care transition standards and measures should be the same for all economic groups with careful attention that lower socioeconomic groups are not forgotten or unintentionally disadvantaged, including the potential for cherry‐picking. It should be noted that underserved populations may not always have a medical home because of their disadvantaged access to the health system and providers. Moreover, clinicians who care for underserved/disadvantaged populations should not be penalized by standards that assume continuous clinical care and ongoing relationships with patients who may access the health system only sporadically.
Need for Patient‐Centered Approaches
The group agreed that across all principles and standards previously established by the SUTTP coalition, greater emphasis is needed on patient‐centered approaches to care including, but not limited to, the inclusion of patient and families in care and transition planning, greater access to medical records, and the need for education at the time of discharge regarding self‐care and core transfer information.
Next Steps for the TOCCC
The TOCCC focuses only on the transitions between the inpatient and outpatient settings and does not address the equally important transitions between many other care settings, such as the transition from a hospital to a nursing home or rehabilitation facility. The intent of the TOCCC is to provide this document to national measure developers such as the Physician Consortium for Performance Improvement and others in order to guide measure development and ultimately lead to improvements in quality and safety in care transitions.
Appendix
Conference Description
The TOCCC was held over 2 days on July 11 to 12, 2007 at ACP headquarters in Philadelphia, PA. There were 51 participants representing over 30 organizations. Participating organizations included medical specialty societies from internal medicine as well as family medicine and pediatrics, governmental agencies such as AHRQ and CMS, performance measure developers such as the National Committee for Quality Assurance and the American Medical Association Physician Consortium on Performance Improvement, nurse associations such as the Visiting Nurse Associations of America and Home Care and Hospice, pharmacist groups, and patient groups such as the Institute for Family‐Centered Care. The morning of the first day was dedicated to presentations covering the AHRQ Stanford Evidence‐Based Practice Center's evidence report on care coordination, the literature concerning transitions of care, the continuum of measurement from principles to standards to measures, and the SUTTP document of principles. The attendees then split into breakout groups that discussed the principles and standards developed by the SUTTP and refined and/or revised them. All discussions were summarized and agreed on by consensus and were presented by the breakout groups to the full conference attendees. The second day was dedicated to reviewing the work of the breakout groups and further refinement of the principles and standards through a group consensus process. Once this was completed, the attendees then prioritized the standards with a group consensus voting process. Each attendee was given 1 vote, and each attendee attached a rating of 1 for highest priority and 3 for lowest priority to the standards. The summary scores were then calculated, and the standards were then ranked from those summary scores.
The final activity of the conference was to discuss some of the overarching themes and environmental factors that could influence the acceptance, endorsement, and implementation of the standards developed. The TOCCC adjourned with the tasks of forwarding its conclusions to the SUTTP Alliance and developing a policy document to be reviewed by other stakeholders not well represented at the conference. Two such pivotal organizations were ACEP and SAEM, which were added to the steering committee after the conference. Subsequently, ACP, SGIM, SHM, AGS, ACEP, and SAEM approved the summary document, and they will forward it to the other participating organizations for possible endorsement and to national developers of measures and standards for use in performance measurement development.
Appendix
Conflict of Interest Statements
This is a summary of conflict of interest statements for faculty, authors, members of the planning committees, and staff (ACP, SHM, and SGIM).
The following members of the steering (or planning) committee and staff of the TOCCC have declared a conflict of interest:
- Dennis Beck, MD, FACEP (ACEP representative; President and Chief Executive Officer of Beacon Medical Services): 100 units of stock options/holdings in Beacon Hill Medical Services.
- Tina Budnitz, MPH (SHM staff; Senior Advisor for Quality Initiatives, SHM): employment by SHM.
- Eric S. Holmboe, MD (ABIM representative; Senior Vice President of Quality Research and Academic Affairs, ABIM): employment by ABIM.
- Vincenza Snow, MD, FACP (ACP staff; Director of Clinical Programs and Quality of Care, ACP): research grants from the Centers for Disease Control, Atlantic Philanthropies, Novo Nordisk, Bristol Myers Squibb, Boehringer Ingelheim, Pfizer, United Healthcare Foundation, and Sanofi Pasteur.
- Laurence D. Wellikson, MD, FACP (SHM staff; Chief Executive Officer of SHM): employment by SHM.
- Mark V. Williams, MD, FACP (cochair and SHM representative; Editor in Chief of the Journal of Hospital Medicine and former President of SHM): membership in SHM.
The following members of the steering (or planning) committee and staff of the TOCCC have declared no conflict of interest:
- David Atkins, MD, MPH [AHRQ representative; Associate Director of Quality Enhancement Research Initiative, Department of Veteran Affairs, Office of Research and Development, Health Services Research & Development (124)].
- Doriane C. Miller, MD (cochair and SGIM representative; Associate Division Chief of General Internal Medicine, Stroger Hospital of Cook County).
- Jane Potter, MD (AGS representative; Professor and Chief of Geriatrics, University of Nebraska Medical Center).
- Robert L. Wears, MD, FACEP (SAEM representative; Professor of the Department of Emergency Medicine, University of Florida).
- Kevin B. Weiss, MD, MPH, MS, FACP (chair and ACP representative; Chief Executive Officer of the American Board of Medical Specialties).
Studies of the transition of care between inpatient and outpatient settings have shown that there are significant patient safety and quality deficiencies in our current system. The transition from the hospital setting to the outpatient setting has been more extensively studied than the transition from the outpatient setting to the inpatient setting. One prospective cohort study of 400 patients found that 1 in 5 patients discharged from the hospital to home experienced an adverse event, which was defined as an injury resulting from medical management rather than the underlying disease, within 3 weeks of discharge.1 This study also concluded that 66% of these were drug-related adverse events, many of which could have been avoided or mitigated. Another prospective cross-sectional study of 2644 patient discharges found that approximately 40% of the patients had pending test results at the time of discharge and that 10% of these required some action, yet the outpatient physicians and patients were unaware of these results.2 Medication discrepancies have also been shown to be prevalent: in 1 prospective observational study of 375 patients, 14% of elderly patients had 1 or more medication discrepancies, and 14% of those patients with medication discrepancies were rehospitalized within 30 days versus 6% of the patients who did not experience a medication discrepancy.3 A recent review of the literature cited improving transitional care as a key area of opportunity for improving postdischarge care.4
Lack of communication has clearly been shown to adversely affect postdischarge care transitions.5 A recent summary of the literature by a Society of Hospital Medicine (SHM)/Society of General Internal Medicine (SGIM) task force found that direct communication between hospital physicians and primary care physicians occurs infrequently (in 3%‐20% of cases studied), and the availability of a discharge summary at the first postdischarge visit is low (12%‐34%) and does not improve greatly even after 4 weeks (51%‐77%); this affects the quality of care in approximately 25% of follow‐up visits.5 This systematic review of the literature also found that discharge summaries often lack important information such as diagnostic test results, the treatment or hospital course, discharge medications, test results pending at discharge, patient or family counseling, and follow‐up plans.
However, the lack of studies of the communication between ambulatory physicians and hospital physicians prior to admission or during emergency department (ED) visits does not imply that this communication is not equally important and essential to high‐quality care. According to the Centers for Disease Control, the greatest source of hospital admissions in many institutions is the ED. Over 115,000,000 visits were made to the nation's approximately 4828 EDs in 2005, and about 85.2% of ED visits end in discharge.6 The ED is also the point of re‐entry into the system for individuals who may have had an adverse outcome linked to a prior hospitalization.6 Communication between hospital physicians and primary care physicians must be established to create a loop of continuous care and diminish morbidity and mortality at this critical transition point.
Although transitions can be a risky period for patient safety, observational studies suggest that they also offer benefits: a new physician may notice something overlooked by the current caregivers.7-12 Another factor contributing to the challenges of care transitions is the lack of a single clinician or clinical entity taking responsibility for coordination across the continuum of the patient's overall healthcare, regardless of setting.13 Studies indicate that a relationship with a medical home is associated with better health on both the individual and population levels, with lower overall costs of care, and with reductions in disparities in health between socially disadvantaged subpopulations and more socially advantaged populations.14 Several medical societies have addressed this issue, including the American College of Physicians (ACP), SGIM, the American Academy of Family Physicians, and the American Academy of Pediatrics; they have proposed the concept of the medical home or patient-centered medical home, which calls for clinicians to assume responsibility for coordinating their patients' care across settings and for the healthcare system to value and reimburse clinicians for this patient-centered and comprehensive method of practice.15-17
Finally, patients and their families or caregivers have an important role to play in transitions of care. Several observational and cross-sectional studies have shown that patients and their caregivers and families express significant anxiety during care transitions. This anxiety can be caused by a lack of understanding and preparation for their self-care role in the next care setting, confusion due to conflicting advice from different practitioners, a sense of abandonment attributable to the inability to contact an appropriate healthcare practitioner for guidance, and an overall disregard for their preferences and input into the design of the care plan.18-20 Clearly, there is room for improvement in all these areas of the inpatient and outpatient care transition, and the Transitions of Care Consensus Conference (TOCCC) attempted to address these areas by developing standards for the transition of care that also harmonize with the work of the Stepping up to the Plate (SUTTP) Alliance of the American Board of Internal Medicine (ABIM) Foundation.21 In addition, other important stakeholders are actively working to improve communication and continuity in care, including the Centers for Medicare and Medicaid Services (CMS) and the National Quality Forum (NQF). CMS recently developed the Continuity Assessment Record & Evaluation (CARE) tool, a data collection instrument designed to be a standardized, interoperable, common assessment tool that captures key patient characteristics and will provide information related to resource utilization, clinical outcomes, and postdischarge disposition. NQF held a national forum on care coordination in the spring of 2008.
In summary, it is clear that there are qualitative and quantitative deficiencies in transitions of care between the inpatient and outpatient settings that are affecting patient safety and experience with care. The transition from the inpatient setting to the outpatient setting has been more extensively studied, and this body of literature has underscored for the TOCCC several important areas in need of guidance and improvement. Because of this, the scope of application of this document should initially emphasize inpatient-to-outpatient transitions as a first step in learning how to improve these processes. However, the transition from the outpatient setting to the inpatient setting is also a clear priority. Because the needs for transfer of information, authority, and responsibility may be different in these situations, a second phase of additional work to develop principles to guide these transitions should be undertaken as quickly as possible. Experience gained in applying these principles to inpatient-to-outpatient transitions might usefully inform such work.
Communication among providers and with the patients and their families arose as a clear priority. Medication discrepancies, pending tests, and unknown diagnostic or treatment plans have an immediate impact on patients' health and outcomes. The TOCCC discussed what elements should be among the standard pieces of information exchanged among providers during these transition points. The dire need for coordination of care or a coordinating clinician/medical home became a clear theme in the deliberations of the TOCCC. Most importantly, the role of the patients and their families/caregivers in their continuing care is apparent, and the TOCCC felt this must be an integral part of any principles or standards for transitions of care.
Methods
In the fall/winter of 2006, the executive committees of ACP, SGIM, and SHM agreed to jointly develop a policy statement on transitions of care. Transitions of care specifically between the inpatient and outpatient settings were selected as an ideal topic for collaboration for the 3 societies, as together they represent the continuum of care for internal medicine within these settings. To accomplish this, the 3 organizations decided to convene a consensus conference to develop consensus guidelines and standards concerning transitions between inpatient and outpatient settings through a multi-stakeholder process. A steering committee was convened with representatives from ACP, SGIM, SHM, the Agency for Healthcare Research and Quality (AHRQ), ABIM, and the American Geriatrics Society (AGS). The steering committee developed the agenda and invitee list for the consensus conference. After the conference was held, the steering committee was expanded to include representation from the American College of Emergency Physicians (ACEP) and the Society for Academic Emergency Medicine (SAEM).
During the planning stages of the TOCCC, the steering committee became aware of the SUTTP Alliance of the ABIM Foundation. The SUTTP Alliance has representation from medical specialties such as internal medicine and its subspecialties, family medicine, and surgery. The alliance was formed in 2006 and has been working on care coordination across multiple settings and specialties. The SUTTP Alliance had developed a set of principles and standards for care transitions and agreed to provide the draft document to the TOCCC for review, input, and further development and refinement.
Recommendations on Principles and Standards for Managing Transitions in Care Between the Inpatient and Outpatient Settings from ACP, SGIM, SHM, AGS, ACEP, and SAEM
The SUTTP Alliance presented a draft document entitled "Principles and Standards for Managing Transitions in Care." In this document, the SUTTP Alliance proposes 5 principles and 8 standards for effective care transitions. A key element of the conference was a presentation by NQF on how to move from principles to standards and eventually to measures. This presentation provided the TOCCC with the theoretical underpinnings for the discussion of these principles and standards and how the TOCCC would provide input on them. The presentation outlined the flow from principles to measures. First, there needs to be a framework that provides guiding principles for what we would like to measure and eventually report. From those principles, a set of preferred practices or standards is developed; the standards are more granular and allow for more specificity in describing the desired practice or outcome and its elements. Standards then provide a roadmap for identification and development of performance measures. With this framework in mind, the TOCCC then discussed in detail the SUTTP principles and standards.
The 5 principles for effective care transitions developed by the SUTTP Alliance are as follows:
- Accountability.
- Communication: clear and direct communication of treatment plans and follow-up expectations.
- Timely feedback and feed-forward of information.
- Involvement of the patient and family member, unless inappropriate, in all steps.
- Respect of the hub of coordination of care.
The TOCCC re‐affirmed these principles and added 4 additional principles to this list. Three of the new principles were statements within the 8 standards developed by the SUTTP, but when taking into consideration the framework for the development of principles into standards, the TOCCC felt that the statements were better represented as principles. They are as follows:
- All patients and their families/caregivers should have and should be able to identify their medical home or coordinating clinician (ie, practice or practitioner). (This was originally part of the coordinating clinicians standard, and the TOCCC voted to elevate it to a principle.)
- At every point along the transition, the patients and/or their families/caregivers need to know who is responsible for their care at that point and whom to contact and how.
- National standards should be established for transitions in care and should be adopted and implemented at the national and community level through public health institutions, national accreditation bodies, medical societies, medical institutions, and so forth in order to improve patient outcomes and patient safety. (This was originally part of the SUTTP community standards standard, and the TOCCC moved to elevate it to a principle.)
- For monitoring and improving transitions, standardized metrics related to these standards should be used in order to lead to continuous quality improvement and accountability. (This was originally part of the measurement standard, and the TOCCC voted to elevate it to a principle.)
The SUTTP Alliance proposed the following 8 standards for care transitions:
- Coordinating clinicians.
- Care plans.
- Communication infrastructure.
- Standard communication formats.
- Transition responsibility.
- Timeliness.
- Community standards.
- Measurement.
The TOCCC affirmed these standards and through a consensus process added more specificity to most of them and elevated components of some of them to principles, as discussed previously. The TOCCC proposes that the following be merged with the SUTTP standards:
- Coordinating clinicians. Communication and information exchange between the medical home and the receiving provider should occur in an amount of time that will allow the receiving provider to effectively treat the patient. This communication and information exchange should ideally occur whenever patients are at a transition of care (eg, at discharge from the inpatient setting). The timeliness of this communication should be consistent with the patient's clinical presentation and, in the case of a patient being discharged, the urgency of the follow-up required. Guidelines will need to be developed that address both the timeliness and means of communication between the discharging physician and the medical home. Communication and information exchange between the medical home and other physicians may be in the form of a call, voice mail, fax, or other secure, private, and accessible means, including mutual access to an electronic health record.
The ED represents a unique subset of transitions of care. The potential transition can generally be described as outpatient to outpatient or outpatient to inpatient, depending on whether or not the patient is admitted to the hospital. The outpatient‐to‐outpatient transition can also encompass a number of potential variations. Patients with a medical home may be referred to the ED by the medical home, or they may self‐refer. A significant number of patients do not have a physician and self‐refer to the ED. The disposition from the ED, either outpatient to outpatient or outpatient to inpatient, is similarly represented by a number of variables. Discharged patients may or may not have a medical home, may or may not need a specialist, and may or may not require urgent (<24 hours) follow‐up. Admitted patients may or may not have a medical home and may or may not require specialty care. This variety of variables precludes a single approach to ED transition of care coordination. The determination of which scenarios will be appropriate for the development of standards (coordinating clinicians and transition responsibility) will require further contributions from ACEP and SAEM and review by the steering committee.
- Care plans/transition record. The TOCCC discussed what components should be included in an ideal transition record and agreed on the following elements:
  - Principal diagnosis and problem list.
  - Medication list (reconciliation) including over-the-counter medications/herbals, allergies, and drug interactions.
  - Emergency plan and contact number and person.
  - Treatment and diagnostic plan.
  - Prognosis and goals of care.
  - Test results/pending results.
  - Clear identification of the medical home and/or transferring coordinating physician/institution.
  - Patient's cognitive status.
  - Advance directives, power of attorney, and consent.
  - Planned interventions, durable medical equipment, wound care, and so forth.
  - Assessment of caregiver status.
The TOCCC also agreed that there is a minimal set of data elements that should always be part of the transition record and suggested that this minimal data set be part of an initial implementation of this standard. That list includes the following:
  - Principal diagnosis and problem list.
  - Medication list (reconciliation) including over-the-counter medications/herbals, allergies, and drug interactions.
  - Clear identification of the medical home/transferring coordinating physician/institution and the contact information.
  - Patient's cognitive status.
  - Test results/pending results.
The TOCCC also added a new standard under this heading: Patients and/or their families/caregivers must receive, understand, and be encouraged to participate in the development of the transition record, which should take into consideration patients' health literacy and insurance status and be culturally sensitive.
- Communication infrastructure. All communications between providers and between providers and patients and families/caregivers need to be secure, private, Health Insurance Portability and Accountability Act (HIPAA) compliant, and accessible to patients and the practitioners who care for them. Communication needs to be 2-way, with an opportunity for clarification and feedback. Each sending provider needs to provide a contact name and number for an individual who can respond to questions or concerns. The content of transferred information needs to include a core standardized data set. This information needs to be transferred as a living database; that is, it is created only once, and each subsequent provider then only needs to update, validate, or modify the information. Patient information should be available to the provider prior to the patient's arrival. Information transfer needs to adhere to national data standards. Patients should be provided with a medication list that is accessible (paper or electronic), clear, and dated.
- Standard communication formats. Communities need to develop standard data transfer forms (templates and transmission protocols). Access to a patient's medical history needs to be available on a current and ongoing basis, with the ability to modify information as a patient's condition changes. Patients, families, and caregivers should have access to their information ("nothing about me without me"). A section of the transfer record should be devoted to communicating a patient's preferences, priorities, goals, and values (eg, the patient does not want intubation).
- Transition responsibility. The sending provider/institution/team at the clinical organization maintains responsibility for the care of the patient until the receiving clinician/location confirms that the transfer and assumption of responsibility are complete (within a reasonable timeframe for the receiving clinician to receive the information; ie, transfers that occur in the middle of the night can be communicated during standard working hours). The sending provider should be available for clarification of issues of care within a reasonable timeframe after the transfer has been completed, and this timeframe should be based on the conditions of the transfer settings. The patient should be able to identify the responsible provider. In the case of patients who do not have an ongoing ambulatory care provider or whose ambulatory care provider has not assumed responsibility, hospital-based clinicians will not be required to assume responsibility for the care of these patients once they are discharged.
- Timeliness. Timeliness of feedback and feed-forward of information from a sending provider to a receiving provider should be contingent on 4 factors:
  - Transition settings.
  - Patient circumstances.
  - Level of acuity.
  - Clear transition responsibility.
This information should be available at the time of the patient encounter.
- Community standards. Medical communities/institutions must demonstrate accountability for transitions of care by adopting national standards, and processes should be established to promote effective transitions of care.
- Measurement. For monitoring and improving transitions, standardized metrics related to these standards should be used. These metrics/measures should be evidence-based, address documented gaps, and have a demonstrated impact on improving care (complying with performance measure standards) whenever feasible. Results from measurements using standardized metrics must lead to continuous improvement of the transition process. The validity, reliability, cost, and impact, including unintended consequences, of these measures should be assessed and re-evaluated.
All these standards should be applied with special attention to the various transition settings and should be appropriate to each transition setting. Measure developers will need to take this into account when developing measures based on these proposed standards.
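As a purely illustrative aside (not part of the consensus document), the minimal transition record data set proposed under the care plans/transition record standard can be pictured as a structured record with a simple completeness check. All class and field names below are hypothetical assumptions chosen for the sketch, not a published schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the TOCCC minimal transition record data set.
# Field names are illustrative assumptions, not a standardized format.
@dataclass
class MinimalTransitionRecord:
    principal_diagnosis: str
    problem_list: list[str]
    medications: list[str]        # reconciled list, incl. OTC/herbals
    allergies: list[str]          # may legitimately be empty
    medical_home_contact: str     # coordinating clinician/institution + contact info
    cognitive_status: str
    test_results: list[str] = field(default_factory=list)
    pending_results: list[str] = field(default_factory=list)

    def missing_elements(self) -> list[str]:
        """Return the required minimal-data-set elements that are empty."""
        required = {
            "principal_diagnosis": self.principal_diagnosis,
            "problem_list": self.problem_list,
            "medications": self.medications,
            "medical_home_contact": self.medical_home_contact,
            "cognitive_status": self.cognitive_status,
        }
        return [name for name, value in required.items() if not value]
```

A receiving provider (or a measure developer) could use `missing_elements()` as a crude completeness metric for a transition record, for example flagging a record that omits the coordinating clinician's contact information.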
The TOCCC also went through a consensus prioritization exercise to rank‐order the consensus standards. All meeting participants were asked to rank their top 3 priorities of the 7 standards, giving a numeric score of 1 for their highest priority, a score of 2 for their second highest priority, and a score of 3 for their third highest priority. Summary scores were calculated, and the standards were rank‐ordered from the lowest summary score to the highest. The TOCCC recognizes that full implementation of all of these standards may not be feasible and that these standards may be implemented on a stepped or incremental basis. This prioritization can assist in deciding which of these to implement. The results of the prioritization exercise are as follows:
1. All transitions must include a transition record.
2. Transition responsibility.
3. Coordinating clinicians.
4. Patient and family involvement and ownership of the transition record.
5. Communication infrastructure.
6. Timeliness.
7. Community standards.
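The scoring procedure described above (each participant scores 3 of the 7 standards, with 1 for the highest priority; summary scores are summed across ballots and ranked from lowest to highest) amounts to a simple point tally. A minimal sketch follows, with hypothetical ballots; the source text does not specify how ties or unvoted standards were handled, so this sketch simply ranks whatever received votes:

```python
from collections import defaultdict

def rank_standards(ballots):
    """Rank standards from lowest summary score (most preferred) to highest.

    Each ballot maps a standard to a priority score: 1 for the voter's
    highest priority, 2 for the second, 3 for the third. Tie-breaking and
    unvoted standards are not specified in the source, so ties here fall
    back to insertion order.
    """
    totals = defaultdict(int)
    for ballot in ballots:
        for standard, score in ballot.items():
            totals[standard] += score
    return sorted(totals, key=lambda s: totals[s])

# Hypothetical ballots from 3 participants:
ballots = [
    {"transition record": 1, "transition responsibility": 2, "coordinating clinicians": 3},
    {"transition record": 1, "coordinating clinicians": 2, "transition responsibility": 3},
    {"transition responsibility": 1, "transition record": 2, "coordinating clinicians": 3},
]
# Summary scores: transition record = 4, transition responsibility = 6,
# coordinating clinicians = 8, so "transition record" ranks first.
```

The same tally logic applies regardless of how many standards appear on each ballot, which is why the conference could compute a single ranked list from 51 partial ballots.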
Future Challenges
In addition to the work on the principles and standards, the TOCCC identified 6 additional challenges, which are described below.
Electronic Health Record
There was disagreement in the group concerning the extent to which electronic health records would resolve the existing problems with poor transfers of care. However, the group did concur on 2 points: established transition standards should not be contingent on the existence of an electronic health record, and a universally, nationally defined set of core transfer information should be the short-term target of efforts to establish electronic transfers of information.
Use of a Transition Record
There should be a core data set (much smaller than a complete health record or discharge summary) that goes to the patient and the receiving provider, and this data set should include items in the core record described previously.
Medical Home
There was extensive discussion of the benefits and challenges of establishing a medical home and inculcating the concept into delivery and payment structures. The group was favorable to the concept; however, because the medical home is not yet a nationally defined standard, care transition standards should not be contingent on the existence of a medical home. Wording of future standards should use a general term for the clinician coordinating care across sites in addition to the term medical home. Using both terms will acknowledge the movement toward the medical home without requiring adoption of medical home practices to refine and implement quality measures for care transitions.
Pay for Performance
The group strongly agreed that behaviors and clinical practices are influenced by payment structures. Therefore, they agreed that a new principle should be established to advocate for changes in reimbursement practices to reward safe, complete transfers of information and care. However, the development of standards and measures should move forward on the basis of the current reimbursement practices and without assumptions of future changes.
Underserved/Disadvantaged Populations
Care transition standards and measures should be the same for all economic groups, with careful attention that lower socioeconomic groups are not forgotten or unintentionally disadvantaged, including through cherry‐picking of patients. It should be noted that underserved populations may not always have a medical home because of their limited access to the health system and providers. Moreover, clinicians who care for underserved/disadvantaged populations should not be penalized by standards that assume continuous clinical care and ongoing relationships with patients who may access the health system only sporadically.
Need for Patient‐Centered Approaches
The group agreed that across all principles and standards previously established by the SUTTP coalition, greater emphasis is needed on patient‐centered approaches to care including, but not limited to, the inclusion of patient and families in care and transition planning, greater access to medical records, and the need for education at the time of discharge regarding self‐care and core transfer information.
Next Steps for the TOCCC
The TOCCC focuses only on the transitions between the inpatient and outpatient settings and does not address the equally important transitions between other care settings, such as the transition from a hospital to a nursing home or rehabilitation facility. The intent of the TOCCC is to provide this document to national measure developers, such as the Physician Consortium for Performance Improvement and others, in order to guide measure development and ultimately lead to improvements in the quality and safety of care transitions.
Appendix
Conference Description
The TOCCC was held over 2 days on July 11 to 12, 2007 at ACP headquarters in Philadelphia, PA. There were 51 participants representing over 30 organizations. Participating organizations included medical specialty societies from internal medicine as well as family medicine and pediatrics, governmental agencies such as AHRQ and CMS, performance measure developers such as the National Committee for Quality Assurance and the American Medical Association Physician Consortium on Performance Improvement, nurse associations such as the Visiting Nurse Associations of America and Home Care and Hospice, pharmacist groups, and patient groups such as the Institute for Family‐Centered Care. The morning of the first day was dedicated to presentations covering the AHRQ Stanford Evidence‐Based Practice Center's evidence report on care coordination, the literature concerning transitions of care, the continuum of measurement from principles to standards to measures, and the SUTTP document of principles. The attendees then split into breakout groups that discussed the principles and standards developed by the SUTTP and refined and/or revised them. All discussions were summarized and agreed on by consensus and were presented by the breakout groups to the full conference attendees. The second day was dedicated to reviewing the work of the breakout groups and further refinement of the principles and standards through a group consensus process. Once this was completed, the attendees then prioritized the standards with a group consensus voting process. Each attendee was given 1 vote, and each attendee attached a rating of 1 for highest priority and 3 for lowest priority to the standards. The summary scores were then calculated, and the standards were then ranked from those summary scores.
The final activity of the conference was to discuss some of the overarching themes and environmental factors that could influence the acceptance, endorsement, and implementation of the standards developed. The TOCCC adjourned with the tasks of forwarding its conclusions to the SUTTP Alliance and developing a policy document to be reviewed by other stakeholders not well represented at the conference. Two such pivotal organizations were ACEP and SAEM, which were added to the steering committee after the conference. Subsequently, ACP, SGIM, SHM, AGS, ACEP, and SAEM approved the summary document, and they will forward it to the other participating organizations for possible endorsement and to national developers of measures and standards for use in performance measurement development.
Appendix
Conflict of Interest Statements
This is a summary of conflict of interest statements for faculty, authors, members of the planning committees, and staff (ACP, SHM, and SGIM).
The following members of the steering (or planning) committee and staff of the TOCCC have declared a conflict of interest:
-
Dennis Beck, MD, FACEP (ACEP representative; President and Chief Executive Officer of Beacon Medical Services): 100 units of stock options/holdings in Beacon Hill Medical Services.
-
Tina Budnitz, MPH (SHM staff; Senior Advisor for Quality Initiatives, SHM): employment by SHM.
-
Eric S. Holmboe, MD (ABIM representative; Senior Vice President of Quality Research and Academic Affairs, ABIM): employment by ABIM.
-
Vincenza Snow, MD, FACP (ACP staff; Director of Clinical Programs and Quality of Care, ACP): research grants from the Centers for Disease Control, Atlantic Philanthropies, Novo Nordisk, Bristol Myers Squibb, Boehringer Ingelheim, Pfizer, United Healthcare Foundation, and Sanofi Pasteur.
-
Laurence D. Wellikson, MD, FACP (SHM staff; Chief Executive Officer of SHM): employment by SHM.
-
Mark V. Williams, MD, FACP (cochair and SHM representative; Editor in Chief of the Journal of Hospital Medicine and former President of SHM): membership in SHM.
The following members of the steering (or planning) committee and staff of the TOCCC have declared no conflict of interest:
-
David Atkins, MD, MPH [AHRQ representative; Associate Director of Quality Enhancement Research Initiative, Department of Veteran Affairs, Office of Research and Development, Health Services Research & Development (124)].
-
Doriane C. Miller, MD (cochair and SGIM representative; Associate Division Chief of General Internal Medicine, Stroger Hospital of Cook County).
-
Jane Potter, MD (AGS representative; Professor and Chief of Geriatrics, University of Nebraska Medical Center).
-
Robert L. Wears, MD, FACEP (SAEM representative; Professor of the Department of Emergency Medicine, University of Florida).
-
Kevin B. Weiss, MD, MPH, MS, FACP (chair and ACP representative; Chief Executive Officer of the American Board of Medical Specialties).
SHM Workshops on Health Care–Associated Infections and Antimicrobial Resistance / Bush‐Knapp et al.
In the United States, hospitalized patients are at risk of acquiring health care–associated infections that increase morbidity, mortality, length of hospital stay, and cost of care.1 If a health care–associated infection is caused by an antimicrobial‐resistant pathogen, treatment efforts may be further complicated.2,3 With the decreasing effectiveness of antimicrobials and suboptimal adherence to certain infection control measures, new and multifaceted prevention strategies are necessary to address the problem of health care–associated infections and antimicrobial resistance.4–10
One strategy that hospitals can use to reduce the incidence of health care–associated infections and antimicrobial resistance is implementation of quality improvement programs. These programs require clinicians to employ techniques such as root cause analysis (RCA), which investigates the factors contributing to an event in order to prevent its recurrence, and healthcare failure mode and effects analysis (HFMEA), which applies a systematic method of identifying and preventing problems before they occur.11–13 Programs and strategies such as these require leadership and adoption within the hospital. Because of their availability and specialized role in the hospital setting, hospitalists are in a unique position to promote and uphold quality improvement efforts.14–17 Professional societies, health care organizations, and governmental agencies can play a role in engaging this group of physicians in improving the quality of patient care in hospitals by providing educational programs and materials.18
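As a rough illustration of the prospective logic behind HFMEA (a sketch under common assumptions about hazard scoring, not code taken from the article or the tool kit): candidate failure modes in a care process are scored for severity and probability, and the product of the two prioritizes which steps to redesign first. The failure modes and the 1–4 scales below are hypothetical examples:

```python
# Illustrative HFMEA-style hazard scoring: rank failure modes prospectively
# by hazard score = severity * probability, highest-risk first.
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str
    severity: int      # 1 (minor) .. 4 (catastrophic) -- assumed scale
    probability: int   # 1 (remote) .. 4 (frequent)    -- assumed scale

    @property
    def hazard_score(self) -> int:
        return self.severity * self.probability

# Hypothetical failure modes for central line care:
modes = [
    FailureMode("Catheter inserted without full barrier precautions", 4, 3),
    FailureMode("Hand hygiene skipped before line access", 3, 4),
    FailureMode("Dressing change overdue", 2, 2),
]
for m in sorted(modes, key=lambda m: m.hazard_score, reverse=True):
    print(m.hazard_score, m.step)
```

In practice, HFMEA teams then apply decision rules (e.g., single‐point weakness, detectability) before selecting actions; the score alone is only the triage step.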
In 2004, the Society of Hospital Medicine (SHM) collaborated with the Centers for Disease Control and Prevention (CDC) to develop a quality improvement tool kit to reduce antimicrobial resistance and health care–associated infections. The tool kit was based on the CDC's Campaign to Prevent Antimicrobial Resistance in Healthcare Settings (Campaign), an educational program targeted at clinicians.19 The SHM/CDC tool kit contained Campaign materials, a set of slides about quality improvement, worksheets, and additional materials such as infection control policies and guidelines to supplement a 90‐minute workshop consisting of didactic lectures about antimicrobial resistance, quality improvement initiatives, RCA, and HFMEA; a lecture and case study about intravascular catheter‐related infections; and small‐group activity and discussion. The complete tool kit is now available online via the SHM Antimicrobial Resistance Resource Room at
The purpose of the workshop was to present the tool kit and increase hospitalists' knowledge and awareness of antimicrobial resistance, health care–associated infections, and quality improvement programs. We assessed the workshop participants' familiarity with the Campaign prior to the workshop, perceptions of antimicrobial resistance, knowledge gained as a result of the workshop, and opinions about the usefulness of the workshop.
METHODS
Data were collected from pretests and posttests administered to participants of one of the SHM workshops in May, June, or July 2005 in Denver, Colorado; Boston, Massachusetts; or Portland, Oregon. One SHM physician leader (D.D.D., coauthor of this article) presented all 3 workshops. The workshops were advertised by SHM using E‐mail to local chapter members. Individual sites used a variety of methods to encourage their hospitalists to attend, and participants were provided a complimentary dinner.
Prior to each workshop, participants completed a 10‐question pretest that had been pilot‐tested by hospitalists in other cities. The pretest assessed demographics; perceptions of the problem of antimicrobial resistance using a Likert scale; familiarity with the Campaign; and knowledge of common infection sites, RCA, HFMEA, and antimicrobial resistance prevention measures.
Immediately following each workshop, a 13‐question posttest was administered to participants. This posttest evaluated the workshop and materials using Likert scales, asked for suggestions for future programming using open‐ended questions, and repeated pretest questions to assess changes in perceptions and knowledge.
Data were entered into an Excel spreadsheet and analyzed using descriptive statistics and t tests to compare pre‐ and posttest changes in knowledge. Likert data assessing perceptions were dichotomized into strongly agree versus all other scale responses. Qualitative open‐ended responses were categorized by theme.
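The two analytic steps described above, dichotomizing Likert responses into "strongly agree" versus all other categories and comparing paired pre/post scores with a t test, can be sketched with standard‐library Python. The data below are illustrative, not the study's:

```python
# Sketch of the analysis described above (hypothetical data):
# 1) collapse Likert items to strongly-agree vs. everything else;
# 2) compute a paired t statistic on pre/post knowledge scores.
import math
from statistics import mean, stdev

def dichotomize(responses, top="strongly agree"):
    """Collapse a Likert item into 1 (strongly agree) vs. 0 (all else)."""
    return [1 if r == top else 0 for r in responses]

def paired_t(pre, post):
    """Paired t statistic for pre/post scores from the same participants."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

likert = ["strongly agree", "agree", "strongly agree", "neutral"]
print(dichotomize(likert))            # [1, 0, 1, 0]

pre = [40, 50, 60, 40, 50]            # hypothetical % correct, pretest
post = [55, 60, 70, 50, 65]           # same participants, posttest
print(round(paired_t(pre, post), 2))  # -> 9.8
```

The resulting t statistic would be referred to a t distribution with n − 1 degrees of freedom for a P value, as a statistics package (or Excel's T.TEST) would do.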
RESULTS
A total of 69 SHM members attended the workshops. Of the 69 participants, 65 completed the pretest, 53 completed the posttest, and 50 completed both the pre‐ and the posttests. Only participants who completed both the pretest and the posttest were included in the analyses (n = 21, Denver; n = 11, Boston; n = 18, Portland). Of the 50 participants who completed both the pre‐ and posttests, 44 (88%) classified themselves as hospitalists in practices ranging from 2 to more than 25 physicians. Participants averaged 9.2 years (range = 1‐27 years) in practice and 4.9 years (range = 1‐10 years) as practicing hospitalists, with no significant differences between the 3 groups. Only 17 participants (34%) were familiar with the Campaign prior to the workshop, and there was no significant variation between the 3 workshops. Those familiar with the Campaign had heard about or received the educational materials from colleagues (n = 5), their facilities (n = 4), professional journals (n = 4), medical conferences (n = 4), or the CDC or SHM websites (n = 4).
Overall, most participants strongly agreed with the statement that antimicrobial resistance was a problem nationally, institutionally, and within their individual practices (Table 1). These perceptions did not significantly differ between the pretest and the posttest. However, statistically significant differences were found when comparing perceptions of the problem of antimicrobial resistance at the national, institutional, and practice levels; more participants strongly agreed that antimicrobial resistance was a problem nationally than within their institutions (pretest, P = .01; posttest, P = .04) or within their practices (pretest, P < .0001; posttest, P = .01).
| Site | Nationally, pretest | Nationally, posttest | Institutionally, pretest | Institutionally, posttest | Within own practice, pretest | Within own practice, posttest |
|---|---|---|---|---|---|---|
| Denver (n = 21) | 100% | 100% | 86% | 95% | 67% | 86% |
| Portland (n = 18) | 83% | 94% | 67% | 78% | 67% | 78% |
| Boston (n = 11) | 91% | 82% | 91% | 82% | 91% | 82% |
| Average | 91% | 94% | 81% | 85% | 72% | 82% |
| P value (pretest vs. posttest) | .28 | | .18 | | .06 | |
On the knowledge‐based questions, the overall average test score was 48% on the pretest and 63% on the posttest (P < .0001), with scores varying by question (Table 2). For example, knowledge of quality improvement initiatives/HFMEA was low (an average of 10% correct on the pretest, 48% on the posttest) compared with knowledge about the key prevention strategies from the Campaign to Prevent Antimicrobial Resistance (average of 94% correct on the pretest, 98% on the posttest). Furthermore, scores also varied by workshop location. On the pretest, participants in Boston and Portland scored higher (both 53%) than Denver participants (40%). On the posttest, Portland participants scored the highest (78%) followed by Boston participants (64%) and then Denver participants (50%). Boston and Denver participants differed significantly on pretest knowledge score (P = .04) and Portland and Denver participants differed significantly on posttest knowledge score (P < .0001).
| Question topic | Pretest average | Posttest average | Percent difference (P value)* |
|---|---|---|---|
| Quality improvement initiatives/HFMEA: Which quality improvement initiative(s) must be performed yearly by all hospitals (JCAHO accreditation requirement)? | 10% | 48% | 38% (P < .0001) |
| Prevention of central venous catheter‐associated bloodstream infections: Key prevention steps for preventing central venous catheter‐associated bloodstream infections include all of the following except: | 62% | 88% | 26% (P = .0001) |
| RCA: Which of the following is NOT true about root cause analysis? | 20% | 38% | 18% (P = .01) |
| Campaign to Prevent Antimicrobial Resistance: The key prevention strategies from the Campaign to Prevent Antimicrobial Resistance include all of the following except: | 94% | 98% | 4% (P = .32) |
| Common body sites for healthcare‐associated infection: The most common site of hospital‐acquired (nosocomial) infection is: | 52% | 44% | −8% (P = .29) |
| Overall average | 48% | 63% | 15% (P < .0001) |
Overall, 43 participants (85%) rated the workshop as either very good or excellent. All but 1 participant (n = 49, 98%) would encourage a colleague to attend the workshop, giving reasons such as that the workshop "outlined a major program in delivering good and safe care," offered "great information on antimicrobial resistance and methods of quality improvement systems implementation," assisted in "find[ing] new tools for improving hospital practice," and addressed "a significant factor in hospitals related to morbidity [and] mortality." When asked for general comments about the workshop and suggestions for future improvements, participants requested more direction, more detail, more discussion, specific examples of antimicrobial resistance, and protocols and processes for implementing quality improvement programs. On a scale from 1 (not useful) to 5 (essential), participants rated the usefulness of each workshop segment as follows: intravascular catheter‐related infections lecture and case study (x̄ = 4.3, range = 3–5), quality improvement initiatives lecture (x̄ = 4.1, range = 2–5), background on antimicrobial resistance (x̄ = 3.9, range = 2–5), RCA lecture (x̄ = 3.9, range = 2–5), HFMEA lecture (x̄ = 3.8, range = 2–5), and small‐group discussion (x̄ = 3.4, range = 2–5). These ratings did not vary significantly between the 3 groups.
CONCLUSIONS
To address antimicrobial resistance and health care–associated infections in the hospital setting, the SHM and CDC developed a tool kit and presented a quality improvement workshop to hospitalists in 3 U.S. cities. Overall, participants scored significantly higher on the knowledge‐based questions on the posttest than on the pretest, indicating that knowledge improved as a result of the workshop. By providing a format that combined didactic lectures with case‐based education, small‐group activities, and discussion, the SHM workshop may have optimized its ability to increase knowledge, consistent with previous research.20,21
There were no significant differences between the 3 groups in years of practice, perceptions of the problem, or overall evaluation of the workshop. However, differences were found in knowledge gained as a result of the workshop. For example, the Denver group scored lower on the knowledge‐based questions than the Boston group on the pretest and the Portland group on the posttest, suggesting that knowledge and learning styles may differ by location. These differences may be attributable to variations in hospital environments, hospital‐based educational programs, or medical school and residency training. Differences like these may affect the effectiveness of a program and should be considered during program development, especially when a program is national in scope, like the CDC's Campaign to Prevent Antimicrobial Resistance in Healthcare Settings. In addition, more than 90% of participants correctly identified key prevention strategies of the Campaign, whereas only 34% were familiar with the Campaign itself prior to the workshop. This may be because the Campaign's key prevention strategies are derived from well‐established, widely recognized evidence‐based best practices for patient safety and care.
Although knowledge changed as a result of the workshop, overall perceptions of the problem of antimicrobial resistance did not change significantly from pretest to posttest. It is possible this is because changes in perception require a different or more intensive educational approach. This result also may reflect the initial levels of agreement on the pretest, the measurement instrument itself, and/or the inability to detect differences because of the small number of participants.
Differences did exist in perceptions of the problem of antimicrobial resistance at the national, institutional, and practice levels: antimicrobial resistance was perceived to be a greater problem at the national level than at the institutional and practice levels. Other studies also have found that clinicians agree more strongly that antimicrobial resistance is a problem nationally than within their institutions and practices.22–24 When antimicrobial resistance is not perceived as a problem within institutions and practices, physicians may be less likely to overcome the barriers to following recommended infection prevention guidelines or to implementing quality improvement projects.4 Therefore, educational and intervention efforts like this workshop should address hospitalists' perceptions of the problem of antimicrobial resistance on the individual level as a first step in motivating them to engage in quality improvement.
Although participants' knowledge scores increased from pretest to posttest, gaps in knowledge remained, as indicated by the significantly improved but low overall posttest scores related to RCA and HFMEA. As hospitalists are in a unique position to promote quality improvement programs, these topic areas should be given more attention in future workshops and in training. Furthermore, by adding more specific questions related to each section of the workshop, associations among presentation style, knowledge gained, and perceived usefulness of each section could be evaluated. For example, the participants significantly increased their scores from pretest to posttest on the catheter‐related knowledge‐based question and rated the lecture and case study on intravascular catheter‐related infections as the most useful sections. Future research may explore these possible relationships to better guide selection of presentation styles and topics to ensure that participants gain knowledge and perceive the sections as useful. In addition, by addressing the feedback from participants, such as offering more detail, examples, and discussion, future workshops may have greater perceived usefulness and be better able to increase the knowledge and awareness of quality improvement programs for the prevention of health care–associated infections and antimicrobial resistance.
Although 3 workshops were conducted in 3 areas across the United States, the sample size at each site was small, and the results may not be representative of hospitalists at large. In addition, power calculations should be considered in future studies to better detect differences between and within groups. Another limitation of this study was that the limited data available and participant anonymity made it impossible to follow up with participants after the workshop to evaluate whether the knowledge they gained was sustained and/or whether they changed their practice. However, possession of the knowledge and skills to inform practice does not mean that practice will change; therefore, follow‐up is necessary to determine whether this workshop was effective in changing behaviors in the long term.25 Although the SHM workshop improved knowledge, more intensive educational strategies may be necessary to affect perceptions and to build the leadership skills required to implement quality improvement programs at an institutional level.
Overall, the SHM workshop was found to be a useful tool for increasing knowledge and outlining methods by which hospitalists can lead, coordinate, or participate in measures to prevent infections and improve patient safety. In addition, through the workshop, the SHM and the CDC have provided an example of how professional societies and government agencies can collaborate to address emerging issues in the health care setting.
- Impact of nosocomial infection on cost of illness and length of stay in intensive care units. Infect Control Hosp Epidemiol. 2005;26:281–287.
- Implementation of strategies to control antimicrobial resistance. Chest. 2001;119:405S–411S.
- Society for Healthcare Epidemiology of America and Infectious Diseases Society of America Joint Committee on the Prevention of Antimicrobial Resistance: guidelines for the prevention of antimicrobial resistance in hospitals. Clin Infect Dis. 1997;25:584–599.
- Strategies to prevent and control the emergence and spread of antimicrobial‐resistant microorganisms in hospitals: a challenge to hospital leadership. JAMA. 1996;275:234–240.
- Centers for Disease Control and Prevention. Guidelines for hand hygiene in health‐care settings: recommendations of the Healthcare Infection Control Practices Advisory Committee and the HICPAC/SHEA/APIC/IDSA Hand Hygiene Task Force. MMWR Recomm Rep. 2002;51:1–44.
- Hospital Infection Control Practices Advisory Committee. Guideline for isolation precautions in hospitals. Infect Control Hosp Epidemiol. 1996;17:53–80.
- SHEA guideline for preventing nosocomial transmission of multidrug‐resistant strains of Staphylococcus aureus and Enterococcus. Infect Control Hosp Epidemiol. 2003;24:362–386.
- Improving adherence to hand hygiene practice: a multidisciplinary approach. Emerg Infect Dis. 2001;7:234–240.
- Alcohol‐based handrub improves compliance with hand hygiene in intensive care units. Arch Intern Med. 2002;162:1037–1043.
- An organizational climate intervention associated with increased handwashing and decreased nosocomial infections. Behav Med. 2000;26:14–22.
- Getting to the root of the matter. AHRQ Web M 29:319–330.
- The Basics of FMEA. New York: Quality Resources; 1996.
- The hospitalist model of care: a positive influence on efficiency, quality of care, and outcomes. Crit Path Cardiol. 2004;3:S5–S7.
- An introduction to the hospitalist model. Ann Intern Med. 1999;130:338–342.
- The impact of hospitalists on medical education and the academic health systems. Ann Intern Med. 1999;130:364–367.
- Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111:247–254.
- Preventing the emergence of antimicrobial resistance: a call for action by clinicians, public health officials and patients. JAMA. 1997;278:944–945.
- Centers for Disease Control and Prevention. Campaign to Prevent Antimicrobial Resistance in Healthcare Settings. 2005. Available at: http://www.cdc.gov/drugresistance/healthcare/default.htm. Accessed November 8, 2005.
- Impact of formal continuing medical education: do conferences, workshops, rounds and other traditional continuing education activities change physician behavior or health care outcomes? JAMA. 1999;282:867–874.
- Physician preferences for continuing medical education with a focus on the topic of antimicrobial resistance: Society for Healthcare Epidemiology of America. Infect Control Hosp Epidemiol. 2001;22:656–660.
- Clinicians' perceptions of the problem of antimicrobial resistance in health care facilities. Arch Intern Med. 2004;164:1662–1668.
- Antibiotic resistance: a survey of physician perceptions. Arch Intern Med. 2002;162:2210–2216.
- Assessing motivation for physicians to prevent antimicrobial resistance in hospitalized children using the health belief model as a framework. Am J Infect Control. 2004;33:175–181.
- Educational theory into practice: development of an infection control link nurse programme. Nurs Ed Pract. 2001;1:35–41.
In the United States, hospitalized patients are at risk of acquiring health care–associated infections, which increase morbidity, mortality, length of hospital stay, and cost of care.1 If a health care–associated infection is caused by an antimicrobial‐resistant pathogen, treatment may be further complicated.2,3 Given the decreasing effectiveness of antimicrobials and suboptimal adherence to certain infection control measures, new and multifaceted prevention strategies are needed to address health care–associated infections and antimicrobial resistance.4–10
One strategy hospitals can use to reduce the incidence of health care–associated infections and antimicrobial resistance is implementation of quality improvement programs. Such programs require clinicians to employ techniques such as root cause analysis (RCA), which investigates the factors contributing to an event in order to prevent recurrence, and healthcare failure mode and effects analysis (HFMEA), a systematic method of identifying and preventing problems before they occur.11–13 Programs and strategies such as these require leadership and adoption within the hospital. Because of their availability and specialized role in the hospital setting, hospitalists are in a unique position to promote and sustain quality improvement efforts.14–17 Professional societies, health care organizations, and governmental agencies can help engage this group of physicians in improving the quality of inpatient care by providing educational programs and materials.18
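As a toy illustration of the HFMEA scoring step mentioned above, potential failure modes can be rated for severity and probability, and the product of the two ratings used to rank which failure modes to address first. The failure modes, the 1–4 scales, and the scores below are hypothetical illustrations, not materials from the study or from any official HFMEA worksheet:

```python
# Hypothetical HFMEA-style hazard scoring: rank failure modes by
# severity x probability so the highest-risk problems are addressed first.
failure_modes = [
    # (description, severity 1-4, probability 1-4) -- illustrative ratings only
    ("central line inserted without full barrier precautions", 4, 3),
    ("hand hygiene skipped between patient contacts",          3, 3),
    ("catheter left in place longer than needed",              3, 2),
]

def hazard_score(severity, probability):
    """Product of the two ratings; higher means higher priority."""
    return severity * probability

# Sort failure modes from highest to lowest hazard score.
ranked = sorted(failure_modes, key=lambda fm: hazard_score(fm[1], fm[2]), reverse=True)
for desc, sev, prob in ranked:
    print(f"{hazard_score(sev, prob):>2}  {desc}")
```

The ranking, rather than the raw scores, drives which prevention efforts a team tackles first.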
In 2004, the Society of Hospital Medicine (SHM) collaborated with the Centers for Disease Control and Prevention (CDC) to develop a quality improvement tool kit for reducing antimicrobial resistance and health care–associated infections. The tool kit was based on the CDC's Campaign to Prevent Antimicrobial Resistance in Healthcare Settings (Campaign), an educational program targeted at clinicians.19 The SHM/CDC tool kit contained Campaign materials, a set of slides about quality improvement, worksheets, and additional materials such as infection control policies and guidelines. It supplemented a 90‐minute workshop consisting of didactic lectures on antimicrobial resistance, quality improvement initiatives, RCA, and HFMEA; a lecture and case study on intravascular catheter‐related infections; and small‐group activity and discussion. The complete tool kit is now available online via the SHM Antimicrobial Resistance Resource Room.
The purpose of the workshop was to present the tool kit and increase hospitalists' knowledge and awareness of antimicrobial resistance, health care–associated infections, and quality improvement programs. We assessed participants' familiarity with the Campaign prior to the workshop, their perceptions of antimicrobial resistance, the knowledge they gained as a result of the workshop, and their opinions about the workshop's usefulness.
METHODS
Data were collected from pretests and posttests administered to participants of one of the SHM workshops in May, June, or July 2005 in Denver, Colorado; Boston, Massachusetts; or Portland, Oregon. One SHM physician leader (D.D.D., coauthor of this article) presented all 3 workshops. The workshops were advertised by SHM via e‐mail to local chapter members. Individual sites used a variety of methods to encourage their hospitalists to attend, and participants were provided a complimentary dinner.
Prior to each workshop, participants completed a 10‐question pretest that had been pilot‐tested by hospitalists in other cities. The pretest assessed demographics; perceptions of the problem of antimicrobial resistance using a Likert scale; familiarity with the Campaign; and knowledge of common infection sites, RCA, HFMEA, and antimicrobial resistance prevention measures.
Immediately following each workshop, a 13‐question posttest was administered to participants. This posttest evaluated the workshop and materials using Likert scales, asked for suggestions for future programming using open‐ended questions, and repeated pretest questions to assess changes in perceptions and knowledge.
Data were entered into an Excel spreadsheet and analyzed using descriptive statistics and t tests to compare pre‐ and posttest changes in knowledge. Likert data assessing perceptions were dichotomized into strongly agree versus all other scale responses. Qualitative open‐ended responses were categorized by theme.
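The analysis described above (dichotomizing Likert items into strongly agree versus all other responses, and comparing pre/post knowledge with a paired t test) can be sketched in Python. The participant scores and the 5‐point scale below are hypothetical, since the study's data are not reproduced here, and the study itself used Excel:

```python
# Sketch of the pre/post analysis with hypothetical data.
import math
from statistics import mean, stdev

def dichotomize(likert_responses, strongly_agree=5):
    """Collapse a 5-point Likert item into strongly-agree (1) vs. all else (0)."""
    return [1 if r == strongly_agree else 0 for r in likert_responses]

def paired_t(pre, post):
    """Paired t statistic for per-participant pre/post score differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical per-participant knowledge scores (fraction of questions correct).
pre  = [0.4, 0.6, 0.4, 0.2, 0.6, 0.4, 0.8, 0.4]
post = [0.6, 0.8, 0.6, 0.4, 0.6, 0.6, 0.8, 0.6]
print(round(paired_t(pre, post), 2))
```

The resulting t statistic would then be compared against the t distribution with n − 1 degrees of freedom to obtain the P values reported below.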
RESULTS
A total of 69 SHM members attended the workshops. Of the 69 participants, 65 completed the pretest, 53 completed the posttest, and 50 completed both the pre‐ and the posttests. Only participants who completed both the pretest and the posttest were included in the analyses (n = 21, Denver; n = 11, Boston; n = 18, Portland). Of the 50 participants who completed both the pre‐ and posttests, 44 (88%) classified themselves as hospitalists in practices ranging from 2 to more than 25 physicians. Participants averaged 9.2 years (range = 1‐27 years) in practice and 4.9 years (range = 1‐10 years) as practicing hospitalists, with no significant differences between the 3 groups. Only 17 participants (34%) were familiar with the Campaign prior to the workshop, and there was no significant variation between the 3 workshops. Those familiar with the Campaign had heard about or received the educational materials from colleagues (n = 5), their facilities (n = 4), professional journals (n = 4), medical conferences (n = 4), or the CDC or SHM websites (n = 4).
Overall, most participants strongly agreed with the statement that antimicrobial resistance was a problem nationally, institutionally, and within their individual practices (Table 1). These perceptions did not significantly differ between the pretest and the posttest. However, statistically significant differences were found when comparing perceptions of the problem of antimicrobial resistance at the national, institutional, and practice levels; more participants strongly agreed that antimicrobial resistance was a problem nationally than within their institutions (pretest, P = .01; posttest, P = .04) or within their practices (pretest, P < .0001; posttest, P = .01).
Table 1. Percentage of participants who strongly agreed that antimicrobial resistance is a problem

| Site | Nationally, pretest | Nationally, posttest | Institutionally, pretest | Institutionally, posttest | Within own practice, pretest | Within own practice, posttest |
|---|---|---|---|---|---|---|
| Denver (n = 21) | 100% | 100% | 86% | 95% | 67% | 86% |
| Portland (n = 18) | 83% | 94% | 67% | 78% | 67% | 78% |
| Boston (n = 11) | 91% | 82% | 91% | 82% | 91% | 82% |
| Average | 91% | 94% | 81% | 85% | 72% | 82% |
| P value, pretest vs. posttest | .28 | | .18 | | .06 | |
On the knowledge‐based questions, the overall average score was 48% on the pretest and 63% on the posttest (P < .0001), with scores varying by question (Table 2). For example, knowledge of quality improvement initiatives/HFMEA was low (an average of 10% correct on the pretest and 48% on the posttest) compared with knowledge of the key prevention strategies from the Campaign to Prevent Antimicrobial Resistance (94% correct on the pretest, 98% on the posttest). Scores also varied by workshop location. On the pretest, participants in Boston and Portland scored higher (both 53%) than Denver participants (40%). On the posttest, Portland participants scored highest (78%), followed by Boston participants (64%) and Denver participants (50%). Boston and Denver participants differed significantly in pretest knowledge scores (P = .04), and Portland and Denver participants differed significantly in posttest knowledge scores (P < .0001).
Table 2. Knowledge‐based question scores, pretest versus posttest

| Question topic | Pretest average | Posttest average | Difference (P value) |
|---|---|---|---|
| Quality improvement initiatives/HFMEA: Which quality improvement initiative(s) must be performed yearly by all hospitals (JCAHO accreditation requirement)? | 10% | 48% | +38% (P < .0001) |
| Prevention of central venous catheter‐associated bloodstream infections: Key prevention steps for preventing central venous catheter‐associated bloodstream infections include all of the following except: | 62% | 88% | +26% (P = .0001) |
| RCA: Which of the following is NOT true about root cause analysis? | 20% | 38% | +18% (P = .01) |
| Campaign to Prevent Antimicrobial Resistance: The key prevention strategies from the Campaign to Prevent Antimicrobial Resistance include all of the following except: | 94% | 98% | +4% (P = .32) |
| Common body sites for health care–associated infection: The most common site of hospital‐acquired (nosocomial) infection is: | 52% | 44% | −8% (P = .29) |
| Overall average | 48% | 63% | +15% (P < .0001) |
Overall, 43 participants (85%) rated the workshop as very good or excellent. All but 1 participant (n = 49, 98%) would encourage a colleague to attend the workshop, giving reasons such as that the workshop "outlined a major program in delivering good and safe care," offered "great information on antimicrobial resistance and methods of quality improvement systems implementation," assisted in "find[ing] new tools for improving hospital practice," and addressed "a significant factor in hospitals related to morbidity [and] mortality." When asked for general comments and suggestions for future improvements, participants requested more direction, more detail, more discussion, specific examples of antimicrobial resistance, and protocols and processes for implementing quality improvement programs. On a scale from 1 (not useful) to 5 (essential), participants rated the usefulness of each workshop segment: intravascular catheter‐related infections lecture and case study (x̄ = 4.3, range = 3‐5), quality improvement initiatives lecture (x̄ = 4.1, range = 2‐5), background on antimicrobial resistance (x̄ = 3.9, range = 2‐5), RCA lecture (x̄ = 3.9, range = 2‐5), HFMEA lecture (x̄ = 3.8, range = 2‐5), and small‐group discussion (x̄ = 3.4, range = 2‐5). These ratings did not vary significantly among the 3 groups.
CONCLUSIONS
To address antimicrobial resistance and health care–associated infections in the hospital setting, the SHM and the CDC developed a tool kit and presented a quality improvement workshop to hospitalists in 3 U.S. cities. Overall, participants scored significantly higher on the knowledge‐based questions on the posttest than on the pretest, indicating that knowledge improved as a result of the workshop. By combining didactic lectures with case‐based education, small‐group activities, and discussion, the SHM workshop may have optimized its ability to increase knowledge, consistent with previous research.20,21
There were no significant differences among the 3 groups in years of practice, perceptions of the problem, or overall evaluation of the workshop. However, differences were found in knowledge gained as a result of the workshop. For example, the Denver group scored lower on the knowledge‐based questions than the Boston group on the pretest and the Portland group on the posttest, suggesting that knowledge and learning styles may differ by location. These differences may be attributable to variations in hospital environments, hospital‐based educational programs, or medical school and residency training. Such differences may affect a program's effectiveness and should be considered during program development, especially when a program is national in scope, like the CDC's Campaign to Prevent Antimicrobial Resistance in Healthcare Settings. In addition, more than 90% of participants correctly identified key prevention strategies of the Campaign, whereas only 34% were familiar with the Campaign itself prior to the workshop. This may be because the Campaign's key prevention strategies are derived from well‐established, widely recognized evidence‐based best practices for patient safety and care.
Although knowledge changed as a result of the workshop, overall perceptions of the problem of antimicrobial resistance did not change significantly from pretest to posttest. It is possible this is because changes in perception require a different or more intensive educational approach. This result also may reflect the initial levels of agreement on the pretest, the measurement instrument itself, and/or the inability to detect differences because of the small number of participants.
Differences did exist in perceptions of the problem of antimicrobial resistance at the national, institutional, and practice levels: antimicrobial resistance was perceived to be a greater problem nationally than at the institutional and practice levels. Other studies have also found that clinicians agree more strongly that antimicrobial resistance is a problem nationally than within their own institutions and practices.22–24 When antimicrobial resistance is not perceived as a problem within institutions and practices, physicians may be less likely to overcome barriers to following recommended infection prevention guidelines or to implementing quality improvement projects.4 Therefore, educational and intervention efforts like this workshop should address hospitalists' perceptions of the problem of antimicrobial resistance at the individual level as a first step in motivating them to engage in quality improvement.
Although participants' knowledge scores increased from pretest to posttest, gaps remained, as indicated by the significantly improved but still low posttest scores on RCA and HFMEA. Because hospitalists are in a unique position to promote quality improvement programs, these topics should receive more attention in future workshops and training. Furthermore, adding more specific questions for each section of the workshop would allow evaluation of the associations among presentation style, knowledge gained, and perceived usefulness of each section. For example, participants significantly increased their scores from pretest to posttest on the catheter‐related knowledge question and rated the lecture and case study on intravascular catheter‐related infections as the most useful sections. Future research may explore these possible relationships to better guide the selection of presentation styles and topics, ensuring that participants gain knowledge and perceive the sections as useful. In addition, by addressing participant feedback, such as requests for more detail, examples, and discussion, future workshops may be perceived as more useful and better able to increase knowledge and awareness of quality improvement programs for the prevention of health care–associated infections and antimicrobial resistance.
Although 3 workshops were conducted in 3 areas across the United States, the sample size at each site was small, and the results may not be representative of hospitalists at large. Power calculations should be considered in future studies to improve the ability to detect differences between and within groups. Another limitation was that limited data availability and participant anonymity made it impossible to follow up with participants after the workshop to evaluate whether the knowledge they gained was sustained or whether they changed their practice. Possession of knowledge and skills to inform practice does not mean that practice will change; follow‐up is therefore necessary to determine whether this workshop was effective in changing behaviors in the long term.25 Although the SHM workshop improved knowledge, more intensive educational strategies may be necessary to affect perceptions and build the leadership skills required to implement quality improvement programs at an institutional level.
Overall, the SHM workshop was found to be a useful tool for increasing knowledge and outlining methods by which hospitalists can lead, coordinate, or participate in measures to prevent infections and improve patient safety. In addition, through the workshop, the SHM and the CDC have provided an example of how professional societies and government agencies can collaborate to address emerging issues in the health care setting.
In the United States, hospitalized patients are at risk of acquiring health careassociated infections that increase morbidity, mortality, length of hospital stay, and cost of care.1 If a health careassociated infection is caused by an antimicrobial‐resistant pathogen, treatment efforts may be further complicated.2, 3 With the decreasing effectiveness of antimicrobials and suboptimal adherence to certain infection control measures, new and multifaceted prevention strategies are necessary to address the problem of health careassociated infections and antimicrobial resistance.410
One strategy that hospitals can use to reduce the incidence of health careassociated infections and antimicrobial resistance is implementation of quality improvement programs. These programs require clinicians to employ techniques, such as root cause analysis (RCA), which investigates contributing factors to an event to prevent reoccurrence, and healthcare failure mode effects analysis (HFMEA), which applies a systematic method of identifying and preventing problems before they occur.1113 Programs and strategies such as these require leadership and adoption within the hospital. Because of their availability and specialized role in the hospital setting, hospitalists are in a unique position to promote and uphold quality improvement efforts.1417 Professional societies, health care organizations, and governmental agencies can play a role in engaging this group of physicians in improving the quality of patient care in hospitals by providing educational programs and materials.18
In 2004, the Society of Hospital Medicine (SHM) collaborated with the Centers for Disease Control and Prevention (CDC) to develop a quality improvement tool kit to reduce antimicrobial resistance and health careassociated infections. The tool kit was based on the CDC's Campaign to Prevent Antimicrobial Resistance in Healthcare Settings (Campaign), an educational program targeted at clinicians.19 The SHM/CDC tool kit contained campaign materials, a set of slides about quality improvement, worksheets, and additional materials such as infection control policies and guidelines to supplement a 90‐minute workshop consisting of didactic lectures about antimicrobial resistance, quality improvement initiatives, RCA, and HFMEA; a lecture and case study about intravascular catheter‐related infections; and small‐group activity and discussion. The complete toolkit is now available online via the SHM Antimicrobial Resistance Resource Room at
The purpose of the workshop was to present the tool kit and increase hospitalists' knowledge and awareness about antimicrobial resistance, health careassociated infections, and quality improvement programs. We assessed the workshop participants' familiarity with the Campaign prior to the workshop, perceptions of antimicrobial resistance, knowledge gained as a result of the workshop, and opinions about the usefulness of the workshop.
METHODS
Data were collected from pretests and posttests administered to participants of one of the SHM workshops in May, June, or July 2005 in Denver, Colorado; Boston, Massachusetts; or Portland, Oregon. One SHM physician leader (D.D.D., coauthor of this article) presented all 3 workshops. The workshops were advertised by SHM using E‐mail to local chapter members. Individual sites used a variety of methods to encourage their hospitalists to attend, and participants were provided a complimentary dinner.
Prior to each workshop, participants completed a 10‐question pretest that had been pilot‐tested by hospitalists in other cities. The pretest assessed demographics; perceptions of the problem of antimicrobial resistance using a Likert scale; familiarity with the Campaign; and knowledge of common infection sites, RCA, HFMEA, and antimicrobial resistance prevention measures.
Immediately following each workshop, a 13‐question posttest was administered to participants. This posttest evaluated the workshop and materials using Likert scales, asked for suggestions for future programming using open‐ended questions, and repeated pretest questions to assess changes in perceptions and knowledge.
Data were entered into an Excel spreadsheet and analyzed using descriptive statistics and t tests to compare pre‐ and posttest changes in knowledge. Likert data assessing perceptions were dichotomized into strongly agree versus all other scale responses. Qualitative open‐ended responses were categorized by theme.
RESULTS
A total of 69 SHM members attended the workshops. Of the 69 participants, 65 completed the pretest, 53 completed the posttest, and 50 completed both the pre‐ and the posttests. Only participants who completed both the pretest and the posttest were included in the analyses (n = 21, Denver; n = 11, Boston; n = 18, Portland). Of the 50 participants who completed both the pre‐ and posttests, 44 (88%) classified themselves as hospitalists in practices ranging from 2 to more than 25 physicians. Participants averaged 9.2 years (range = 1‐27 years) in practice and 4.9 years (range = 1‐10 years) as practicing hospitalists, with no significant differences between the 3 groups. Only 17 participants (34%) were familiar with the Campaign prior to the workshop, and there was no significant variation between the 3 workshops. Those familiar with the Campaign had heard about or received the educational materials from colleagues (n = 5), their facilities (n = 4), professional journals (n = 4), medical conferences (n = 4), or the CDC or SHM websites (n = 4).
Overall, most participants strongly agreed with the statement that antimicrobial resistance was a problem nationally, institutionally, and within their individual practices (Table 1). These perceptions did not significantly differ between the pretest and the posttest. However, statistically significant differences were found when comparing perceptions of the problem of antimicrobial resistance at the national, institutional, and practice levels; more participants strongly agreed that antimicrobial resistance was a problem nationally than within their institutions (pretest, P = .01; posttest, P = .04) or within their practices (pretest, P < .0001; posttest, P = .01).
Nationally | Institutionally | Within own practice | ||||
---|---|---|---|---|---|---|
Pretest | Posttest | Pretest | Posttest | Pretest | Posttest | |
| ||||||
Denver (n = 21) | 100% | 100% | 86% | 95% | 67% | 86% |
Portland (n = 18) | 83% | 94% | 67% | 78% | 67% | 78% |
Boston (n = 11) | 91% | 82% | 91% | 82% | 91% | 82% |
Average | 91% | 94% | 81% | 85% | 72% | 82% |
P value | .28 | .18 | .06 |
On the knowledge‐based questions, the overall average test score was 48% on the pretest and 63% on the posttest (P < .0001), with scores varying by question (Table 2). For example, knowledge of quality improvement initiatives/HFMEA was low (an average of 10% correct on the pretest, 48% on the posttest) compared with knowledge about the key prevention strategies from the Campaign to Prevent Antimicrobial Resistance (average of 94% correct on the pretest, 98% on the posttest). Furthermore, scores also varied by workshop location. On the pretest, participants in Boston and Portland scored higher (both 53%) than Denver participants (40%). On the posttest, Portland participants scored the highest (78%) followed by Boston participants (64%) and then Denver participants (50%). Boston and Denver participants differed significantly on pretest knowledge score (P = .04) and Portland and Denver participants differed significantly on posttest knowledge score (P < .0001).
Question Topic | Pretest average | Posttest average | Percent difference (P value)* |
---|---|---|---|
| |||
Quality improvement initiatives/HFMEA Which quality improvement initiative(s) must be performed yearly by all hospitals (JCAHO accreditation requirement)? | 10% | 48% | 38% (P < .0001) |
Prevention of central venous catheter‐associated bloodstream infections: Key prevention steps for preventing central venous catheter‐associated bloodstream infections include all of the following except: | 62% | 88% | 26% (P = .0001) |
RCA Which of the following is NOT true about root cause analysis? | 20% | 38% | 18% (P = .01) |
Campaign to Prevent Antimicrobial Resistance The key prevention strategies from the Campaign to Prevent Antimicrobial Resistance include all of the following except: | 94% | 98% | 4% (P = .32) |
Common body sites for healthcare‐associated infection: The most common site of hospital‐acquired (nosocomial) infection is: | 52% | 44% | 8% (P = .29) |
Overall average | 48% | 63% | 15% (P < .0001) |
Overall, 43 participants (85%) rated the workshop as either very good or excellent. All but 1 participant (n = 49, 98%) would encourage a colleague to attend the workshop, giving reasons such as that the workshop outlined a major program in delivering good and safe care, offered great information on antimicrobial resistance and methods of quality improvement systems implementation, assisted in find[ing] new tools for improving hospital practice, and addressed a significant factor in hospitals related to morbidity [and] mortality. When asked for general comments about the workshop and suggestions for future improvements, participants requested more direction, more detail, more discussion, specific examples of antimicrobial resistance, and protocols and processes for implementing quality improvement programs. On a scale from 1 (not useful) to 5 (essential), participants rated the usefulness of each workshop segment: intravascular catheter‐related infections lecture and case study (x̄ = 4.3, range = 3‐5), quality improvement initiatives lecture (x̄ = 4.1, range = 2‐5), background on antimicrobial resistance (x̄ = 3.9, range = 2‐5), RCA lecture (x̄ = 3.9, range = 2‐5), HFMEA lecture (x̄ = 3.8, range = 2‐5), and small‐group discussion (x̄ = 3.4, range = 2‐5). These ratings did not vary significantly between the 3 groups.
CONCLUSIONS
To address antimicrobial resistance and health careassociated infections in the hospital setting, the SHM and CDC developed a tool kit and presented a quality improvement workshop to hospitalists in 3 U.S. cities. Overall, the participants scored significantly higher on the knowledge‐based questions on the posttest than on the pretest, indicating that knowledge improved as a result of the workshop. By providing a format that combined didactic lectures with case‐based education, small‐group activities, and discussion, the SHM workshop may have optimized its ability to increase knowledge, similar to the findings in previous research.2021
There were no significant differences between the 3 groups in years of practice, perceptions of the problem, and overall evaluation of the workshop. However, differences were found in knowledge gained as a result of the workshop. For example, the Denver group scored lower on the knowledge‐based questions than did the Boston group on the pretest and the Portland group on the posttest, indicating that knowledge and learning styles may differ by location. These differences may be attributed to variations in hospital environments, hospital‐based educational programs, or medical school and residency training. Differences like these may impact the effectiveness of a program and should be a consideration in the program development process, especially when a program is national in scope, like the CDC's Campaign to Prevent Antimicrobial Resistance in Healthcare Settings. In addition, more than 90% of participants correctly identified key prevention strategies of the Campaign, whereas only 34% were familiar with the Campaign itself prior to the workshop. This result may be a result of the key prevention strategies of the Campaign being derived from well‐established and ‐recognized evidence‐based best practices for patient safety and care.
Although knowledge changed as a result of the workshop, overall perceptions of the problem of antimicrobial resistance did not change significantly from pretest to posttest. It is possible this is because changes in perception require a different or more intensive educational approach. This result also may reflect the initial levels of agreement on the pretest, the measurement instrument itself, and/or the inability to detect differences because of the small number of participants.
Difference did exist in perceptions of the problem of antimicrobial resistance at the national, institutional, and practice levels. Antimicrobial resistance was perceived to be a greater problem on the national level than on the institutional and practice levels. Other studies also have found that clinicians more strongly agree that antimicrobial resistance is a problem nationally than within their institutions and practices.2224 When antimicrobial resistance is not perceived as a problem within institutions and practices, physicians may be less likely to overcome the barriers to following recommended infection prevention guidelines or to implementing quality improvement projects.4 Therefore, educational and intervention efforts like this workshop should address hospitalists' perceptions of the problem of antimicrobial resistance on the individual level as a first step in motivating them to engage in quality improvement.
Although participants' knowledge scores increased from pretest to posttest, gaps in knowledge remained, as indicated by the significantly improved but low overall posttest scores related to RCA and HFMEA. As hospitalists are in a unique position to promote quality improvement programs, these topic areas should be given more attention in future workshops and in training. Furthermore, by adding more specific questions related to each section of the workshop, associations among presentation style, knowledge gained, and perceived usefulness of each section could be evaluated. For example, the participants significantly increased their scores from pretest to posttest on the catheter‐related knowledge‐based question and rated the lecture and case study on intravascular catheter‐related infections as the most useful sections. Future research may explore these possible relationships to better guide selection of presentation styles and topics to ensure that participants gain knowledge and perceive the sections as useful. In addition, by addressing the feedback from participants, such as offering more detail, examples, and discussion, future workshops may have greater perceived usefulness and be better able to increase the knowledge and awareness of quality improvement programs for the prevention of health careassociated infections and antimicrobial resistance.
Although 3 workshops were conducted in 3 areas across the United States, the sample size at each site was small, and the results may not be representative of hospitalists at large. In addition, power calculations should be considered in future studies to improve the ability to detect differences between and within groups. Another limitation was that the limited data available and participant anonymity made it impossible to follow up with participants after the workshop to evaluate whether the knowledge they gained was sustained or whether they changed their practice. Possession of the knowledge and skills to inform practice does not mean that practice will change; therefore, follow‐up is necessary to determine whether this workshop was effective in changing behaviors in the long term.25 Although the SHM workshop improved knowledge, more intensive educational strategies may be necessary to affect perceptions and to build the leadership skills required to implement quality improvement programs at an institutional level.
Overall, the SHM workshop was found to be a useful tool for increasing knowledge and outlining methods by which hospitalists can lead, coordinate, or participate in measures to prevent infections and improve patient safety. In addition, through the workshop, the SHM and the CDC have provided an example of how professional societies and government agencies can collaborate to address emerging issues in the health care setting.
- Impact of nosocomial infection on cost of illness and length of stay in intensive care units. Infect Control Hosp Epidemiol. 2005;26:281–287.
- Implementation of strategies to control antimicrobial resistance. Chest. 2001;119:405S–411S.
- Society for Healthcare Epidemiology of America and Infectious Diseases Society of America Joint Committee on the Prevention of Antimicrobial Resistance: guidelines for the prevention of antimicrobial resistance in hospitals. Clin Infect Dis. 1997;25:584–599.
- Strategies to prevent and control the emergence and spread of antimicrobial‐resistant microorganisms in hospitals: a challenge to hospital leadership. JAMA. 1996;275:234–240.
- Centers for Disease Control and Prevention. Guidelines for hand hygiene in health‐care settings: recommendations of the Healthcare Infection Control Practices Advisory Committee and the HICPAC/SHEA/APIC/IDSA Hand Hygiene Task Force. MMWR Recomm Rep. 2002;51:1–44.
- Hospital Infection Control Practices Advisory Committee. Guideline for isolation precautions in hospitals. Infect Control Hosp Epidemiol. 1996;17:53–80.
- SHEA guideline for preventing nosocomial transmission of multidrug‐resistant strains of Staphylococcus aureus and Enterococcus. Infect Control Hosp Epidemiol. 2003;24:362–386.
- Improving adherence to hand hygiene practice: a multidisciplinary approach. Emerg Infect Dis. 2001;7:234–240.
- Alcohol‐based handrub improves compliance with hand hygiene in intensive care units. Arch Intern Med. 2002;162:1037–1043.
- An organizational climate intervention associated with increased handwashing and decreased nosocomial infections. Behav Med. 2000;26:14–22.
- Getting to the root of the matter. AHRQ Web M&M. 29:319–330.
- The Basics of FMEA. New York: Quality Resources; 1996.
- The hospitalist model of care: a positive influence on efficiency, quality of care, and outcomes. Crit Path Cardiol. 2004;3:S5–S7.
- An introduction to the hospitalist model. Ann Intern Med. 1999;130:338–342.
- The impact of hospitalists on medical education and the academic health systems. Ann Intern Med. 1999;130:364–367.
- Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111:247–254.
- Preventing the emergence of antimicrobial resistance: a call for action by clinicians, public health officials and patients. JAMA. 1997;278:944–945.
- Centers for Disease Control and Prevention. Campaign to Prevent Antimicrobial Resistance in Healthcare Settings. 2005. Available at: http://www.cdc.gov/drugresistance/healthcare/default.htm. Accessed November 8, 2005.
- Impact of formal continuing medical education: do conferences, workshops, rounds and other traditional continuing education activities change physician behavior or health care outcomes? JAMA. 1999;282:867–874.
- Physician preferences for continuing medical education with a focus on the topic of antimicrobial resistance: Society for Healthcare Epidemiology of America. Infect Control Hosp Epidemiol. 2001;22:656–660.
- Clinicians' perceptions of the problem of antimicrobial resistance in health care facilities. Arch Intern Med. 2004;164:1662–1668.
- Antibiotic resistance: a survey of physician perceptions. Arch Intern Med. 2002;162:2210–2216.
- Assessing motivation for physicians to prevent antimicrobial resistance in hospitalized children using the health belief model as a framework. Am J Infect Control. 2004;33:175–181.
- Educational theory into practice: development of an infection control link nurse programme. Nurse Educ Pract. 2001;1:35–41.
Copyright © 2007 Society of Hospital Medicine
The Venous Thromboembolism Quality Improvement Resource Room
The goal of this article is to explain how the first in a series of online resource rooms provides trainees and hospitalists with quality improvement tools that can be applied locally to improve inpatient care.1 During the emergence and explosive growth of hospital medicine, the Society of Hospital Medicine (SHM) recognized the need to revise training related to inpatient care and hospital process design to meet the evolving expectations that hospitalists will have their performance measured, actively set quality parameters, and lead multidisciplinary teams to improve hospital performance.2 Armed with the appropriate skill set, hospitalists would be uniquely situated to lead and manage improvements in processes in the hospitals in which they work.
The content of the first Society of Hospital Medicine (SHM) Quality Improvement Resource Room (QI RR) supports hospitalists leading a multidisciplinary team dedicated to improving inpatient outcomes by preventing hospital‐acquired venous thromboembolism (VTE), a common cause of morbidity and mortality in hospitalized patients.3 The SHM developed this educational resource in the context of numerous reports on the incidence of medical errors in US hospitals and calls for action to improve the quality of health care.4–7 Hospital report cards on quality measures are now public record, and hospitals will require uniformity in practice among physicians. Hospitalists are increasingly expected to lead initiatives that will implement national standards in key practices such as VTE prophylaxis.2
The QI RRs of the SHM are a collection of electronic tools accessible through the SHM Web site. They are designed to enhance the readiness of hospitalists and members of the multidisciplinary inpatient team to redesign care at the institutional level. Although all performance improvement ultimately occurs locally, many QI methods and tools transcend hospital geography and disease topic. Leveraging a Web‐based platform, the SHM QI RRs present hospitalists with a general approach to QI, enriched by customizable workbooks that can be downloaded to best meet user needs. This resource is an innovation in practice‐based learning, quality improvement, and systems‐based practice.
METHODS
Development of the first QI RR followed a series of steps described in Curriculum Development for Medical Education8 (for process and timeline, see Table 1). Inadequate VTE prophylaxis was identified as an ongoing widespread problem of health care underutilization despite randomized clinical trials supporting the efficacy of prophylaxis.9, 10 Mirroring the AHRQ's assessment of underutilization of VTE prophylaxis as the single most important safety priority,6 the first QI RR focused on VTE, with plans to cover additional clinical conditions over time. As experts in the care of inpatients, hospitalists should be able to take custody of predictable complications of serious illness, identify and lower barriers to prevention, critically review prophylaxis options, utilize hospital‐specific data, and devise strategies to bridge the gap between knowledge and practice. Already leaders of multidisciplinary care teams, hospitalists are primed to lead multidisciplinary improvement teams as well.
Phase 1 (January 2005–April 2005): executing the educational strategy
- One‐hour conference calls
- Curricular, clinical, technical, and creative aspects of production
- Additional communication between members of the working group between calls
- Development of a questionnaire for the SHM membership, board, education, and hospital quality patient safety (HQPS) committees
- Content freeze: fourth month of development
- Implementation of revisions prior to the April 2005 SHM Annual Meeting

Phase 2 (April 2005–August 2005): revision based on feedback
- Analysis of formative evaluation from Phase 1
- Launch of the VTE QI RR, August 2005

Secondary phases and venues for implementation
- Workshops at hospital medicine educational events
- SHM Quality course
- Formal recognition of the learning, experience, or proficiency acquired by users

The working editorial team for the first resource room
- Dedicated project manager (SHM staff)
- Senior adviser for planning and development (SHM staff)
- Senior adviser for education (SHM staff)
- Content expert
- Education editor
- Hospital quality editor
- Managing editor
Available data on the demographics of hospitalists and feedback from the SHM membership, leadership, and committees indicated that most learners would have minimal previous exposure to QI concepts and only a few years of management experience. Any previous quality improvement initiatives would tend to have been isolated, experimental, or smaller in scale. The resource rooms are designed to facilitate quality improvement learning among hospitalists that is practice‐based and immediately relevant to patient care. Measurable improvement in particular care processes or outcomes should correlate with actual learning.
The educational strategy of the SHM was predicated on ensuring that a quality and patient safety curriculum would retain clinical applicability in the hospital setting. This approach, grounded in adult learning principles and common to medical education, teaches general principles by framing the learning experience as problem centered.11 Several domains were identified as universally important to any quality improvement effort: raising awareness of a local performance gap, applying the best current evidence to practice, tapping the experience of others leading QI efforts, and using measurements derived from rapid‐cycle tests of change. Such a template delineates the components of successful QI planning, implementation, and evaluation and provides users with a familiar RR format applicable to improving any care process, not just VTE.
The Internet was chosen as the mechanism for delivering training on the basis of previous surveys of the SHM membership in which members expressed a preference for electronic and Web‐based forms of educational content delivery. Drawing from the example of other organizations teaching quality improvement, including the Institute for Healthcare Improvement and Intermountain Health Care, the SHM valued the ubiquity of a Web‐based educational resource. To facilitate on‐the‐job training, the first SHM QI RR provides a comprehensive tool kit to guide hospitalists through the process of advocating, developing, implementing, and evaluating a QI initiative for VTE.
Prior to launching the resource room, formative input was collected from SHM leaders, a panel of education and QI experts, and attendees of the society's annual meetings. Such input followed each significant step in the development of the RR curricula. For example, visitors at a kiosk at the 2005 SHM annual meeting completed surveys as they navigated through the VTE QI RR. This focused feedback shaped prelaunch development. The ultimate performance evaluation and feedback for the QI RR curricula will be gauged by user reports of measurable improvement in specific hospital process or outcomes measures. The VTE QI RR was launched in August 2005 and promoted at the SHM Web site.
RESULTS
The content and layout of the VTE QI RR are depicted in Figure 1. The self‐directed learner may navigate through the entire resource room or just select areas for study. Those likely to visit only a single area are individuals looking for guidance to support discrete roles on the improvement team: champion, clinical leader, facilitator of the QI process, or educator of staff or patient audiences (see Figure 2).
Why Should You Act?
The visual center of the QI RR layout presents sobering statistics: although pulmonary embolism from deep vein thrombosis is the most common cause of preventable hospital death, most hospitalized medical patients at risk do not receive appropriate prophylaxis. It then encourages hospitalist‐led action to reduce hospital‐acquired VTE. The role of the hospitalist is drawn from the competencies articulated in the Venous Thromboembolism, Quality Improvement, and Hospitalist as Teacher chapters of The Core Competencies in Hospital Medicine.2
Awareness
In the Awareness area of the VTE QI RR, materials to raise clinician, hospital staff, and patient awareness are suggested and made available. Through the SHM's lead sponsorship of the national DVT Awareness Month campaign, suggested Steps to Action depict exactly how a hospital medicine service can use the campaign's materials to raise institutional support for tackling this preventable problem.
Evidence
The Evidence section aggregates a list of the most pertinent VTE prophylaxis literature to help ground any QI effort firmly in the evidence base. Through an agreement with the American College of Physicians (ACP), VTE prophylaxis articles reviewed in the ACP Journal Club are presented here.12 Although the listed literature focuses on prophylaxis, plans are in place to include references on diagnosis and treatment.
Experience
Resource room visitors interested in tapping into the experience of hospitalists and other leaders of QI efforts can navigate directly to this area. Interactive resources here include downloadable and adaptable protocols for VTE prophylaxis and, most importantly, improvement stories profiling actual QI successes. The Experience section features comments from an author of a seminal trial that studied computer alerts for high‐risk patients not receiving prophylaxis.10 The educational goal of this section of the QI RR is to provide opportunities to learn from successful QI projects, from the composition of the improvement team to the relevant metrics, implementation plan, and next steps.
Ask the Expert
The most interactive part of the resource room, the Ask the Expert forum, provides a hybrid of experience and evidence. A visitor who posts a clinical or improvement question to this discussion community receives a multidisciplinary response. For each question posted, a hospitalist moderator collects and aggregates responses from a panel of VTE experts, QI experts, hospitalist teachers, and pharmacists. The online exchange permitted by this forum promotes wider debate and learning. The questions and responses are archived and thus are available for subsequent users to read.
Improve
This area features the focal point of the entire resource room, the VTE QI workbook, which was written and designed to provide action‐oriented learning in quality improvement. The workbook is a downloadable project outline to guide and document efforts aimed at reducing rates of hospital‐acquired VTE. Hospitalists who complete the workbook should have acquired familiarity with and a working proficiency in leading system‐level efforts to drive better patient care. Users new to the theory and practice of QI can also review key concepts from a slide presentation in this part of the resource room.
Educate
This content area profiles the hospital medicine core competencies that relate to VTE and QI while also offering teaching materials and advice for teachers of VTE or QI. Teaching resources for clinician educators include online CME and an up‐to‐date slide lecture about VTE prophylaxis. The lecture presentation can be downloaded and customized to serve the needs of the speaker and the audience, whether students, residents, or other hospital staff. Clinician educators can also share or review teaching pearls used by hospitalist colleagues who serve as ward attendings.
DISCUSSION
A case example, shown in Figure 3, demonstrates how content accessible through the SHM VTE QI RR may be used to catalyze a local quality improvement effort.
Hospitals will be measured on rates of VTE prophylaxis on medical and surgical services. Failure to standardize prophylaxis among different physician groups may adversely affect overall performance, with implications for both patient care and accreditation. The lack of an agreed‐on gold standard for what constitutes appropriate prophylaxis for a given patient does not absolve an institution of the duty to implement its own standards. The challenge of achieving local consensus on appropriate prophylaxis should not outweigh the urgency of addressing preventable in‐hospital deaths. In caring for increasing numbers of general medical and surgical patients, hospitalists are likely to be asked to develop and implement a protocol for VTE prophylaxis that can be used hospitalwide. In many instances hospitalists will accept this charge in the aftermath of previous hospital failures, in which admission order sets or VTE assessment protocols were launched but never widely implemented. As National Quality Forum measures or JCAHO regulations requiring uniformity among hospitals shift VTE prophylaxis from voluntary to compulsory, hospitalists will need to develop improvement strategies with greater reliability.
Hospitalists with no formal training in either vascular medicine or quality improvement may not be able to immediately cite the most current data about VTE prophylaxis rates and regimens and may not have the time to enroll in a training course on quality improvement. How would hospitalists determine baseline rates of appropriate VTE prophylaxis? How can medical education be used to build consensus and recruit support from other physicians? What should be the scope of the QI initiative, and what patient population should be targeted for intervention?
The goal of the SHM QI RR is to provide the tools and the framework to help hospitalists develop, implement, and manage a VTE prophylaxis quality improvement initiative. Suggested Steps to Action in the Awareness section depict exactly how a hospital medicine service can use the campaign's materials to raise institutional support for tackling this preventable problem. Hospital quality officers can direct the hospital's public relations department to the Awareness section for DVT Awareness Month materials, including public service announcements in audio, visual, and print formats. The hold music at the hospital can be temporarily replaced, television kiosks can be set up to run video loops, and banners can be printed and hung in central locations, all to get out the message simultaneously to patients and medical staff.
The Evidence section of the VTE QI RR references a key benchmark study, the DVT‐Free Prospective Registry.9 This study reported that at 183 sites in North America and Europe, more than twice as many medical patients as surgical patients failed to receive prophylaxis. The Evidence section includes the 7th American College of Chest Physicians Consensus Conference on Antithrombotic and Thrombolytic Therapy and also highlights 3 randomized placebo‐controlled clinical trials (MEDENOX 1999, ARTEMIS 2003, and PREVENT 2004) that reported a significant (50%–60%) reduction in the risk of VTE from pharmacologic prophylaxis in moderate‐risk medical inpatients.13–15 Review of the data helps in determining which patient population to study first, which prophylaxis options a hospital could appropriately deploy, and the expected magnitude of the effect. Because the literature has already been narrowed and is kept current, hospitalists can save time in answering a range of questions, from the most commonly agreed‐on factors for stratifying risk to which populations require alternative interventions.
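The cited risk reductions can be made concrete with simple effect-size arithmetic. A minimal sketch, using illustrative event rates of the same order as those reported in the trials above (the specific numbers here are assumptions for demonstration, not the published figures):

```python
def relative_risk_reduction(control_rate, treated_rate):
    """Fraction of the baseline (control) risk removed by treatment."""
    return (control_rate - treated_rate) / control_rate

def number_needed_to_treat(control_rate, treated_rate):
    """Patients who must receive prophylaxis to prevent one VTE event."""
    return 1 / (control_rate - treated_rate)

# Illustrative rates only: a 14.9% untreated vs. 5.5% treated VTE rate
# yields a relative risk reduction near the 50%-60% range cited above.
rrr = relative_risk_reduction(0.149, 0.055)
nnt = number_needed_to_treat(0.149, 0.055)
print(f"RRR = {rrr:.0%}, NNT = {nnt:.0f}")  # prints "RRR = 63%, NNT = 11"
```

Framing a proposed intervention in these terms (risk reduction and number needed to treat) is one way an improvement team can quantify the expected magnitude of effect for local stakeholders.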
The Experience section references the first clinical trial demonstrating improved patient outcomes from a quality improvement initiative aimed at increasing the use of VTE prophylaxis.10 At the large teaching hospital where the electronic alerts were studied, a preexisting wealth of educational information on the hospital Web site, in the form of multiple seminars and lectures on VTE prophylaxis by opinion leaders and international experts, had had little impact on practice. For this reason, the investigators tested whether physician behavior could be changed by introducing a point‐of‐care intervention, the computer alert. Clinicians prompted by an electronic alert to consider DVT prophylaxis for at‐risk patients prescribed pharmacologic prophylaxis at nearly double the baseline rate and reduced the incidence of DVT or pulmonary embolism (PE) by 41%. This study suggests that a change introduced into the clinical workflow can improve evidence‐based VTE prophylaxis and reduce the incidence of VTE in acutely ill hospitalized patients.
We believe that if hospitalists use the current evidence and experience assembled in the VTE QI RR, they can develop and lead a systematic approach to improving utilization of VTE prophylaxis. Although there is no gold‐standard method for integrating VTE risk assessment into the clinical workflow, the VTE QI RR presents key lessons from both the literature and real‐world experience. The crucial take‐home message is that hospitalists can facilitate implementation of VTE risk assessments if they stress simplicity (ie, the sick, old, surgery benefit), link the risk assessment to a menu of evidence‐based prophylaxis options, and require assessment of VTE risk as part of a regular routine (on admission and at regular intervals). Although many hospitals do not yet have computerized physician order entry, the simple 4‐point VTE risk assessment described by Kucher et al. might be applied in other hospitals.10 The 4‐point system would identify the patients at highest risk, a reasonable starting point for a QI initiative. Whatever the model (CPOE alerts for very high‐risk patients, CPOE‐forced VTE risk assessments, nursing assessments, or paper‐based order sets), regular VTE risk assessment can be incorporated into the daily routine of hospital care.
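The point‐of‐care model described above can be sketched as a simple additive score with an alert threshold. The factor names, weights, and 4‐point threshold below are illustrative assumptions for this sketch, not the published Kucher scoring system:

```python
# Hypothetical weights for common VTE risk factors; an alert fires when
# the cumulative score reaches the threshold and the patient is not
# already receiving prophylaxis. All values here are assumptions.
RISK_WEIGHTS = {
    "cancer": 3,
    "prior_vte": 3,
    "major_surgery": 2,
    "advanced_age": 1,
    "bed_rest": 1,
    "obesity": 1,
}
ALERT_THRESHOLD = 4

def vte_risk_score(risk_factors):
    """Sum the weights of the risk factors present for one patient."""
    return sum(RISK_WEIGHTS[f] for f in risk_factors)

def needs_alert(risk_factors, on_prophylaxis):
    """Alert only for high-risk patients not yet on prophylaxis."""
    return vte_risk_score(risk_factors) >= ALERT_THRESHOLD and not on_prophylaxis

print(needs_alert({"cancer", "bed_rest"}, on_prophylaxis=False))  # True: score 4
```

The same additive logic could drive a CPOE alert, a nursing checklist, or a paper order set; only the trigger mechanism changes.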
The QI workbook sequences the steps of a multidisciplinary improvement team and prompts users to set specific goals, collect practical metrics, and conduct plan‐do‐study‐act (PDSA) cycles of learning and action (Figure 4). Hospitalists and other team members can use the information in the workbook to estimate the prevalence of use of the appropriate VTE prophylaxis and the incidence of hospital‐acquired VTE at their medical centers, develop a suitable VTE risk assessment model, and plan interventions. Starting with all patients admitted to one nurse on one unit, then expanding to an entire nursing unit, an improvement team could implement rapid PDSA cycles to iron out the wrinkles of a risk assessment protocol. After demonstrating a measurable benefit for the patients at highest risk, the team would then be expected to capture more patients at risk for VTE by modifying the risk assessment protocol to identify moderate‐risk patients (hospitalized patients with one risk factor), as in the MEDENOX, ARTEMIS, and PREVENT clinical trials. Within the first several months, the QI intervention could be expanded to more nursing units. An improvement report profiling a clinically important increase in the rate of appropriate VTE prophylaxis would advocate for additional local resources and projects.
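The workbook's two baseline measurements can be sketched in a few lines. The field names and sample records below are hypothetical illustrations of a chart audit, not contents of the SHM workbook:

```python
# Hypothetical chart-audit records: one dict per reviewed patient.
audit = [
    {"at_risk": True,  "appropriate_prophylaxis": True,  "hospital_acquired_vte": False},
    {"at_risk": True,  "appropriate_prophylaxis": False, "hospital_acquired_vte": True},
    {"at_risk": True,  "appropriate_prophylaxis": False, "hospital_acquired_vte": False},
    {"at_risk": False, "appropriate_prophylaxis": False, "hospital_acquired_vte": False},
]

# Prevalence of appropriate prophylaxis among at-risk patients.
at_risk = [p for p in audit if p["at_risk"]]
prophylaxis_rate = sum(p["appropriate_prophylaxis"] for p in at_risk) / len(at_risk)

# Incidence of hospital-acquired VTE across all audited patients.
vte_incidence = sum(p["hospital_acquired_vte"] for p in audit) / len(audit)

print(f"appropriate prophylaxis among at-risk patients: {prophylaxis_rate:.0%}")
print(f"hospital-acquired VTE incidence: {vte_incidence:.0%}")
```

Recomputing these two numbers after each PDSA cycle gives the improvement team the before-and-after comparison the workbook asks for.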
As questions arise in assembling an improvement team, setting useful aims and metrics, choosing interventions, implementing and studying change, or collecting performance data, hospitalists can review answers to questions already posted and post their own questions in the Ask the Expert area. For example, one user asked whether there was a standard risk assessment tool for identifying patients at high risk of VTE. Another asked about the use of unfractionated heparin as a low‐cost alternative to low‐molecular‐weight heparin. Both these questions were answered within 24 hours by the content editor of the VTE QI RR and, for one question, also by 2 pharmacists and an international expert in VTE.
As other hospitalists begin de novo efforts of their own, success stories and strategies posted in the online forums of the VTE QI RR will be an evolving resource for basic know‐how and innovation.
Suggestions from a community of resource room users will be solicited, evaluated, and incorporated into the QI RR in order to improve its educational value and utility. The curricula could also be adapted or refined by others with an interest in systems‐based care or practice‐based learning, such as directors of residency training programs.
CONCLUSIONS
The QI RRs bring QI theory and practice to the hospitalist, when and wherever it is wanted, minimizing time away from patient care. The workbook links theory to practice and can be used to launch, sustain, and document a local VTE‐specific QI initiative. A range of experience is accommodated. Content is provided in a way that enables users to immediately apply and adapt it to their local context: they can access and download the subset of tools that best meets their needs. For practicing hospitalists, this QI resource offers an opportunity to bridge the training gap in systems‐based hospital care and should increase the quality and quantity of, and support for, opportunities to lead successful QI projects.
The Accreditation Council for Graduate Medical Education (ACGME) now requires education in health care systems, a requirement not previously mandated for traditional medical residency programs.17 Because the resource rooms should increase the number of hospitalists competently leading local efforts that achieve measurable gains in hospital outcomes, the potential constituency also includes residency program directors, internal medicine residents, physician assistants and nurse‐practitioners, nurses, hospital quality officers, and hospital medicine practice leaders.
Further research is needed to determine the clinical impact of the VTE QI workbook on outcomes for hospitalized patients. The effectiveness of such an educational method should be evaluated, at least in part, by documenting changes in clinically important process and outcome measures, in this case those specific to hospital‐acquired VTE. Investigation also will need to generate an impact assessment to see if the curricula are effective in meeting the strategic educational goals of the Society of Hospital Medicine. Further investigation will examine whether this resource can help residency training programs achieve ACGME goals for practice‐based learning and systems‐based care.
- Society of Hospital Medicine. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Quality_Improvement_Resource_Rooms.
- Physician practices in the prevention of venous thromboembolism. Arch Intern Med. 1991;151:933–938.
- Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human. Washington, DC: National Academy Press; 2000.
- Institute of Medicine. Available at: http://www.iom.edu/CMS/3718.aspx.
- Shojania KG, Duncan BW, McDonald KM, Wachter RM, eds. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Agency for Healthcare Research and Quality; Publication 01‐E058; 2001.
- Joint Commission on the Accreditation of Health Care Organizations. Public policy initiatives. Available at: http://www.jcaho.org/about+us/public+policy+initiatives/pay_for_performance.htm.
- Curriculum Development for Medical Education: A Six‐Step Approach. Baltimore, Md: Johns Hopkins University Press; 1998.
- DVT FREE Steering Committee. A prospective registry of 5,451 patients with ultrasound‐confirmed deep vein thrombosis. Am J Cardiol. 2004;93:259.
- Electronic alerts to prevent venous thromboembolism among hospitalized patients. N Engl J Med. 2005;352:969.
- Teaching the Case Method. 3rd ed. Cambridge, Mass: Harvard Business School.
- American College of Physicians. ACP Journal Club. Available at: http://www.acpjc.org/?hp.
- MEDENOX trial. N Engl J Med. 1999;341:793–800.
- Fondaparinux versus placebo for the prevention of VTE in acutely ill medical patients (ARTEMIS). J Thromb Haemost. 2003;1(suppl 1):2046.
- PREVENT Medical Thromboprophylaxis Study Group. Circulation. 2004;110:874–879.
- Comparing the costs, risks and benefits of competing strategies for the primary prevention of VTE. Circulation. 2004;110:IV25–IV32.
- Accreditation Council for Graduate Medical Education. Available at: http://www.acgme.org/acWebsite/programDir/pd_index.asp.
The goal of this article is to explain how the first in a series of online resource rooms provides trainees and hospitalists with quality improvement tools that can be applied locally to improve inpatient care.1 During the emergence and explosive growth of hospital medicine, the SHM recognized the need to revise training relating to inpatient care and hospital process design to meet the evolving expectation of hospitalists that their performance will be measured, to actively set quality parameters, and to lead multidisciplinary teams to improve hospital performance.2 Armed with the appropriate skill set, hospitalists would be uniquely situated to lead and manage improvements in processes in the hospitals in which they work.
The content of the first Society of Hospital Medicine (SHM) Quality Improvement Resource Room (QI RR) supports hospitalists leading a multidisciplinary team dedicated to improving inpatient outcomes by preventing hospital‐acquired venous thromboembolism (VTE), a common cause of morbidity and mortality in hospitalized patients.3 The SHM developed this educational resource in the context of numerous reports on the incidence of medical errors in US hospitals and calls for action to improve the quality of health care.4‐7 Hospital report cards on quality measures are now public record, and hospitals will require uniformity in practice among physicians. Hospitalists are increasingly expected to lead initiatives that will implement national standards in key practices such as VTE prophylaxis.2
The QI RRs of the SHM are a collection of electronic tools accessible through the SHM Web site. They are designed to enhance the readiness of hospitalists and members of the multidisciplinary inpatient team to redesign care at the institutional level. Although all performance improvement ultimately occurs locally, many QI methods and tools transcend hospital geography and disease topic. Leveraging a Web‐based platform, the SHM QI RRs present hospitalists with a general approach to QI, enriched by customizable workbooks that can be downloaded to best meet user needs. This resource is an innovation in practice‐based learning, quality improvement, and systems‐based practice.
METHODS
Development of the first QI RR followed a series of steps described in Curriculum Development for Medical Education8 (for process and timeline, see Table 1). Inadequate VTE prophylaxis was identified as an ongoing widespread problem of health care underutilization despite randomized clinical trials supporting the efficacy of prophylaxis.9, 10 Mirroring the AHRQ's assessment of underutilization of VTE prophylaxis as the single most important safety priority,6 the first QI RR focused on VTE, with plans to cover additional clinical conditions over time. As experts in the care of inpatients, hospitalists should be able to take custody of predictable complications of serious illness, identify and lower barriers to prevention, critically review prophylaxis options, utilize hospital‐specific data, and devise strategies to bridge the gap between knowledge and practice. Already leaders of multidisciplinary care teams, hospitalists are primed to lead multidisciplinary improvement teams as well.
Phase 1 (January 2005 to April 2005): executing the educational strategy
- One‐hour conference calls
- Curricular, clinical, technical, and creative aspects of production
- Additional communication between members of the working group between calls
- Development of a questionnaire for the SHM membership, board, education, and hospital quality patient safety (HQPS) committees
- Content freeze: fourth month of development
- Implementation of revisions prior to the April 2005 SHM Annual Meeting

Phase 2 (April 2005 to August 2005): revision based on feedback
- Analysis of formative evaluation from Phase 1
- Launch of the VTE QI RR, August 2005

Secondary phases and venues for implementation
- Workshops at hospital medicine educational events
- SHM Quality course
- Formal recognition of the learning, experience, or proficiency acquired by users

The working editorial team for the first resource room
- Dedicated project manager (SHM staff)
- Senior adviser for planning and development (SHM staff)
- Senior adviser for education (SHM staff)
- Content expert
- Education editor
- Hospital quality editor
- Managing editor
Available data on the demographics of hospitalists and feedback from the SHM membership, leadership, and committees indicated that most learners would have minimal previous exposure to QI concepts and only a few years of management experience. Any previous quality improvement initiatives would tend to have been isolated, experimental, or smaller in scale. The resource rooms are designed to facilitate quality improvement learning among hospitalists that is practice‐based and immediately relevant to patient care. Measurable improvement in particular care processes or outcomes should correlate with actual learning.
The educational strategy of the SHM was predicated on ensuring that a quality and patient safety curriculum would retain clinical applicability in the hospital setting. This approach, grounded in adult learning principles and common to medical education, teaches general principles by framing the learning experience as problem centered.11 Several domains were identified as universally important to any quality improvement effort: raising awareness of a local performance gap, applying the best current evidence to practice, tapping the experience of others leading QI efforts, and using measurements derived from rapid‐cycle tests of change. Such a template delineates the components of successful QI planning, implementation, and evaluation and provides users with a familiar RR format applicable to improving any care process, not just VTE.
The Internet was chosen as the mechanism for delivering training on the basis of previous surveys of the SHM membership in which members expressed a preference for electronic and Web‐based forms of educational content delivery. Drawing from the example of other organizations teaching quality improvement, including the Institute for Healthcare Improvement and Intermountain Health Care, the SHM valued the ubiquity of a Web‐based educational resource. To facilitate on‐the‐job training, the first SHM QI RR provides a comprehensive tool kit to guide hospitalists through the process of advocating, developing, implementing, and evaluating a QI initiative for VTE.
Prior to launching the resource room, formative input was collected from SHM leaders, a panel of education and QI experts, and attendees of the society's annual meetings. Such input followed each significant step in the development of the RR curricula. For example, visitors at a kiosk at the 2005 SHM annual meeting completed surveys as they navigated through the VTE QI RR. This focused feedback shaped prelaunch development. The ultimate performance evaluation and feedback for the QI RR curricula will be gauged by user reports of measurable improvement in specific hospital process or outcomes measures. The VTE QI RR was launched in August 2005 and promoted at the SHM Web site.
RESULTS
The content and layout of the VTE QI RR are depicted in Figure 1. The self‐directed learner may navigate through the entire resource room or just select areas for study. Those likely to visit only a single area are individuals looking for guidance to support discrete roles on the improvement team: champion, clinical leader, facilitator of the QI process, or educator of staff or patient audiences (see Figure 2).
Why Should You Act?
The visual center of the QI RR layout presents sobering statistics (although pulmonary embolism from deep vein thrombosis is the most common cause of preventable hospital death, most hospitalized medical patients at risk do not receive appropriate prophylaxis) and then encourages hospitalist‐led action to reduce hospital‐acquired VTE. The role of the hospitalist is extracted from the competencies articulated in the Venous Thromboembolism, Quality Improvement, and Hospitalist as Teacher chapters of The Core Competencies in Hospital Medicine.2
Awareness
In the Awareness area of the VTE QI RR, materials to raise clinician, hospital staff, and patient awareness are suggested and made available. Through the SHM's lead sponsorship of the national DVT Awareness Month campaign, suggested Steps to Action depict exactly how a hospital medicine service can use the campaign's materials to raise institutional support for tackling this preventable problem.
Evidence
The Evidence section aggregates a list of the most pertinent VTE prophylaxis literature to help ground any QI effort firmly in the evidence base. Through an agreement with the American College of Physicians (ACP), VTE prophylaxis articles reviewed in the ACP Journal Club are presented here.12 Although the listed literature focuses on prophylaxis, plans are in place to include references on diagnosis and treatment.
Experience
Resource room visitors interested in tapping into the experience of hospitalists and other leaders of QI efforts can navigate directly to this area. Interactive resources here include downloadable and adaptable protocols for VTE prophylaxis and, most importantly, improvement stories profiling actual QI successes. The Experience section features comments from an author of a seminal trial that studied computer alerts for high‐risk patients not receiving prophylaxis.10 The educational goal of this section of the QI RR is to provide opportunities to learn from successful QI projects, from the composition of the improvement team to the relevant metrics, implementation plan, and next steps.
Ask the Expert
The most interactive part of the resource room, the Ask the Expert forum, provides a hybrid of experience and evidence. A visitor who posts a clinical or improvement question to this discussion community receives a multidisciplinary response. For each question posted, a hospitalist moderator collects and aggregates responses from a panel of VTE experts, QI experts, hospitalist teachers, and pharmacists. The online exchange permitted by this forum promotes wider debate and learning. The questions and responses are archived and thus are available for subsequent users to read.
Improve
This area features the focal point of the entire resource room, the VTE QI workbook, which was written and designed to provide action‐oriented learning in quality improvement. The workbook is a downloadable project outline to guide and document efforts aimed at reducing rates of hospital‐acquired VTE. Hospitalists who complete the workbook should have acquired familiarity with and a working proficiency in leading system‐level efforts to drive better patient care. Users new to the theory and practice of QI can also review key concepts from a slide presentation in this part of the resource room.
Educate
This content area profiles the hospital medicine core competencies that relate to VTE and QI while also offering teaching materials and advice for teachers of VTE or QI. Teaching resources for clinician educators include online CME and an up‐to‐date slide lecture about VTE prophylaxis. The lecture presentation can be downloaded and customized to serve the needs of the speaker and the audience, whether students, residents, or other hospital staff. Clinician educators can also share or review teaching pearls used by hospitalist colleagues who serve as ward attendings.
DISCUSSION
A case example, shown in Figure 3, demonstrates how content accessible through the SHM VTE QI RR may be used to catalyze a local quality improvement effort.
Hospitals will be measured on rates of VTE prophylaxis on medical and surgical services. Failure to standardize prophylaxis among different physician groups may adversely affect overall performance, with implications for both patient care and accreditation. The lack of an agreed‐on gold standard of what constitutes appropriate prophylaxis for a given patient does not absolve an institution of the duty to implement its own standards. The challenge of achieving local consensus on appropriate prophylaxis should not outweigh the urgency to address preventable in‐hospital deaths. In caring for increasing numbers of general medical and surgical patients, hospitalists are likely to be asked to develop and implement a protocol for VTE prophylaxis that can be used hospitalwide. In many instances hospitalists will accept this charge in the aftermath of previous hospital failures in which admission order sets or VTE assessment protocols were launched but never widely implemented. As National Quality Forum or JCAHO regulations for uniformity among hospitals shift VTE prophylaxis from being voluntary to compulsory, hospitalists will need to develop improvement strategies that have greater reliability.
Hospitalists with no formal training in either vascular medicine or quality improvement may not be able to immediately cite the most current data about VTE prophylaxis rates and regimens and may not have the time to enroll in a training course on quality improvement. How would hospitalists determine baseline rates of appropriate VTE prophylaxis? How can medical education be used to build consensus and recruit support from other physicians? What should be the scope of the QI initiative, and what patient population should be targeted for intervention?
The goal of the SHM QI RR is to provide the tools and the framework to help hospitalists develop, implement, and manage a VTE prophylaxis quality improvement initiative. Suggested Steps to Action in the Awareness section depict exactly how a hospital medicine service can use the campaign's materials to raise institutional support for tackling this preventable problem. Hospital quality officers can direct the hospital's public relations department to the Awareness section for DVT Awareness Month materials, including public service announcements in audio, visual, and print formats. The hold music at the hospital can be temporarily replaced, television kiosks can be set up to run video loops, and banners can be printed and hung in central locations, all to get out the message simultaneously to patients and medical staff.
The Evidence section of the VTE QI RR references a key benchmark study, the DVT‐Free Prospective Registry.9 This study reported that at 183 sites in North America and Europe, more than twice as many medical patients as surgical patients failed to receive prophylaxis. The Evidence section includes the 7th American College of Chest Physicians Consensus Conference on Antithrombotic and Thrombolytic Therapy and also highlights 3 randomized placebo‐controlled clinical trials (MEDENOX 1999, ARTEMIS 2003, and PREVENT 2004) that have reported significant reduction of risk of VTE (50%‐60%) from pharmacologic prophylaxis in moderate‐risk medical inpatients.13‐15 Review of the data helps to determine which patient population to study first, which prophylaxis options a hospital could deploy appropriately, and the expected magnitude of the effect. Because the literature has already been narrowed and is kept current, hospitalists can save time in answering a range of questions, from the most commonly agreed‐on factors to stratify risk to which populations require alternative interventions.
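As a quick worked example of what these effect sizes mean locally: assuming an illustrative 10% baseline VTE risk (a made‐up figure for arithmetic only, not drawn from the trials above), the reported 50%‐60% relative risk reductions translate into absolute risk reductions and numbers needed to treat as follows.

```python
# Back-of-the-envelope effect size using the 50%-60% relative risk
# reduction cited above. The 10% baseline risk is illustrative only;
# substitute a locally measured baseline before drawing conclusions.
baseline_risk = 0.10

for rrr in (0.50, 0.60):
    treated_risk = baseline_risk * (1 - rrr)   # risk with prophylaxis
    arr = baseline_risk - treated_risk         # absolute risk reduction
    nnt = 1 / arr                              # number needed to treat
    print(f"RRR {rrr:.0%}: ARR {arr:.1%}, NNT {nnt:.0f}")
```

At a 10% baseline, a 50% relative reduction prevents 1 VTE event for every 20 patients treated; the lower the true baseline risk, the larger the number needed to treat, which is one reason risk stratification matters.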
The Experience section references the first clinical trial demonstrating improved patient outcomes from a quality improvement initiative aimed at improving utilization of VTE prophylaxis.10 At the large teaching hospital where the electronic alerts were studied, a preexisting wealth of educational information on the hospital Web site, in the form of multiple seminars and lectures on VTE prophylaxis by opinion leaders and international experts, had little impact on practice. For this reason, the investigators implemented a trial of how to change physician behavior by introducing a point‐of‐care intervention, the computer alerts. Clinicians prompted by an electronic alert to consider DVT prophylaxis for at‐risk patients employed nearly double the rate of pharmacologic prophylaxis and reduced the incidence of DVT or pulmonary embolism (PE) by 41%. This study suggests that a change introduced to the clinical workflow can improve evidence‐based VTE prophylaxis and also can reduce the incidence of VTE in acutely ill hospitalized patients.
We believe that if hospitalists use the current evidence and experience assembled in the VTE QI RR, they could develop and lead a systematic approach to improving utilization of VTE prophylaxis. Although there is no gold standard method for integrating VTE risk assessment into clinical workflow, the VTE QI RR presents key lessons both from the literature and real world experiences. The crucial take‐home message is that hospitalists can facilitate implementation of VTE risk assessments if they stress simplicity (ie, the sick, old, surgery benefit), link the risk assessment to a menu of evidence‐based prophylaxis options, and require assessment of VTE risk as part of a regular routine (on admission and at regular intervals). Although many hospitals do not yet have computerized entry of physician orders, the simple 4‐point VTE risk assessment described by Kucher et al might be applied to other hospitals.10 The 4‐point system would identify the patients at highest risk, a reasonable starting point for a QI initiative. Whatever the model (CPOE alerts for very high‐risk patients, CPOE‐forced VTE risk assessments, nursing assessments, or paper‐based order sets), regular VTE risk assessment can be incorporated into the daily routine of hospital care.
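To make the idea of a cumulative point score concrete, a sketch of such a risk assessment is shown below. The factor weights and the 4‐point alert threshold follow our reading of the Kucher et al study; treat them as assumptions to be verified against the paper and validated locally, not as a clinical tool.

```python
# Sketch of a cumulative VTE risk score in the style of Kucher et al.
# Weights and the >= 4 "high risk" threshold are our reading of that
# study (an assumption); any local use requires clinical validation.
RISK_WEIGHTS = {
    "cancer": 3,
    "prior_vte": 3,
    "hypercoagulability": 3,
    "major_surgery": 2,
    "advanced_age": 1,
    "obesity": 1,
    "bed_rest": 1,
    "hormone_therapy": 1,
}

def vte_risk_score(risk_factors: set) -> int:
    """Sum the weights of the patient's documented risk factors."""
    return sum(RISK_WEIGHTS[f] for f in risk_factors)

def is_high_risk(risk_factors: set, threshold: int = 4) -> bool:
    """Flag patients whose cumulative score meets the alert threshold."""
    return vte_risk_score(risk_factors) >= threshold

# Example: an elderly patient with cancer on bed rest scores 3 + 1 + 1 = 5,
# crossing the threshold and triggering a prophylaxis prompt.
patient = {"cancer", "advanced_age", "bed_rest"}
print(vte_risk_score(patient), is_high_risk(patient))  # 5 True
```

The same scoring logic works regardless of the delivery mechanism named in the text: a CPOE alert, a forced assessment at order entry, a nursing checklist, or a paper order set.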
The QI workbook sequences the steps of a multidisciplinary improvement team and prompts users to set specific goals, collect practical metrics, and conduct plan‐do‐study‐act (PDSA) cycles of learning and action (Figure 4). Hospitalists and other team members can use the information in the workbook to estimate the prevalence of use of the appropriate VTE prophylaxis and the incidence of hospital‐acquired VTE at their medical centers, develop a suitable VTE risk assessment model, and plan interventions. Starting with all patients admitted to one nurse on one unit, then expanding to an entire nursing unit, an improvement team could implement rapid PDSA cycles to iron out the wrinkles of a risk assessment protocol. After demonstrating a measurable benefit for the patients at highest risk, the team would then be expected to capture more patients at risk for VTE by modifying the risk assessment protocol to identify moderate‐risk patients (hospitalized patients with one risk factor), as in the MEDENOX, ARTEMIS, and PREVENT clinical trials. Within the first several months, the QI intervention could be expanded to more nursing units. An improvement report profiling a clinically important increase in the rate of appropriate VTE prophylaxis would advocate for additional local resources and projects.
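The two workbook metrics named above (the rate of appropriate prophylaxis and the incidence of hospital‐acquired VTE) can be tracked per PDSA cycle with a simple tally. The field names and figures below are illustrative placeholders, not the workbook's own schema.

```python
# Illustrative per-PDSA-cycle metrics: rate of risk-appropriate
# prophylaxis from chart audits, and hospital-acquired VTE events per
# 1,000 discharges. All names and numbers are hypothetical examples.
from dataclasses import dataclass

@dataclass
class PDSACycle:
    label: str          # e.g., "Cycle 1: one nurse, one unit"
    audited: int        # charts audited this cycle
    appropriate: int    # charts showing risk-appropriate prophylaxis
    ha_vte_events: int  # hospital-acquired VTE events observed
    discharges: int     # discharges in the measurement window

    def prophylaxis_rate(self) -> float:
        return self.appropriate / self.audited

    def vte_per_1000(self) -> float:
        return 1000 * self.ha_vte_events / self.discharges

cycles = [
    PDSACycle("Cycle 1: one nurse, one unit", 20, 9, 2, 180),
    PDSACycle("Cycle 2: whole nursing unit", 60, 41, 1, 195),
]
for c in cycles:
    print(f"{c.label}: {c.prophylaxis_rate():.0%} appropriate, "
          f"{c.vte_per_1000():.1f} VTE per 1,000 discharges")
```

Plotting these two numbers cycle over cycle is the "study" step of each PDSA loop and supplies the measurable benefit an improvement report would profile.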
As questions arise in assembling an improvement team, setting useful aims and metrics, choosing interventions, implementing and studying change, or collecting performance data, hospitalists can review answers to questions already posted and post their own questions in the Ask the Expert area. For example, one user asked whether there was a standard risk assessment tool for identifying patients at high risk of VTE. Another asked about the use of unfractionated heparin as a low‐cost alternative to low‐molecular‐weight heparin. Both these questions were answered within 24 hours by the content editor of the VTE QI RR and, for one question, also by 2 pharmacists and an international expert in VTE.
As other hospitalists begin de novo efforts of their own, success stories and strategies posted in the online forums of the VTE QI RR will be an evolving resource for basic know‐how and innovation.
Suggestions from a community of resource room users will be solicited, evaluated, and incorporated into the QI RR in order to improve its educational value and utility. The curricula could also be adapted or refined by others with an interest in systems‐based care or practice‐based learning, such as directors of residency training programs.
CONCLUSIONS
The QI RRs bring QI theory and practice to the hospitalist, when and wherever it is wanted, minimizing time away from patient care. The workbook links theory to practice and can be used to launch, sustain, and document a local VTE‐specific QI initiative. A range of experience is accommodated. Content is provided in a way that enables the user to immediately apply and adapt it to a local context: users can access and download the subset of tools that best meet their needs. For practicing hospitalists, this QI resource offers an opportunity to bridge the training gap in systems‐based hospital care and should increase the quality of, the quantity of, and the support for opportunities to lead successful QI projects.
The Accreditation Council of Graduate Medical Education (ACGME) now requires education in health care systems, a requirement not previously mandated for traditional medical residency programs.17 Because the resource rooms should increase the number of hospitalists competently leading local efforts that achieve measurable gains in hospital outcomes, a wider potential constituency also includes residency program directors, internal medicine residents, physician assistants and nurse‐practitioners, nurses, hospital quality officers, and hospital medicine practice leaders.
Further research is needed to determine the clinical impact of the VTE QI workbook on outcomes for hospitalized patients. The effectiveness of such an educational method should be evaluated, at least in part, by documenting changes in clinically important process and outcome measures, in this case those specific to hospital‐acquired VTE. Investigation will also need to assess whether the curricula are effective in meeting the strategic educational goals of the Society of Hospital Medicine, and whether this resource can help residency training programs achieve ACGME goals for practice‐based learning and systems‐based care.
DISCUSSION
A case example, shown in Figure 3, demonstrates how content accessible through the SHM VTE QI RR may be used to catalyze a local quality improvement effort.
Hospitals will be measured on rates of VTE prophylaxis on medical and surgical services. Failure to standardize prophylaxis among different physician groups may adversely affect overall performance, with implications for both patient care and accreditation. The lack of an agreed‐on gold standard for what constitutes appropriate prophylaxis for a given patient does not absolve an institution of the duty to implement its own standards, and the challenge of achieving local consensus should not outweigh the urgency of addressing preventable in‐hospital deaths. In caring for increasing numbers of general medical and surgical patients, hospitalists are likely to be asked to develop and implement a hospitalwide protocol for VTE prophylaxis. In many instances, hospitalists will accept this charge in the aftermath of previous hospital failures in which admission order sets or VTE assessment protocols were launched but never widely implemented. As National Quality Forum or JCAHO requirements for uniformity among hospitals shift VTE prophylaxis from voluntary to compulsory, hospitalists will need improvement strategies with greater reliability.
Hospitalists with no formal training in either vascular medicine or quality improvement may not be able to immediately cite the most current data about VTE prophylaxis rates and regimens and may not have the time to enroll in a training course on quality improvement. How would hospitalists determine baseline rates of appropriate VTE prophylaxis? How can medical education be used to build consensus and recruit support from other physicians? What should be the scope of the QI initiative, and what patient population should be targeted for intervention?
The goal of the SHM QI RR is to provide the tools and the framework to help hospitalists develop, implement, and manage a VTE prophylaxis quality improvement initiative. The Suggested Steps to Action in the Awareness section show how to put the campaign's materials to work locally. Hospital quality officers can direct the hospital's public relations department to the Awareness section for DVT Awareness Month materials, including public service announcements in audio, visual, and print formats. The hold music at the hospital can be temporarily replaced, television kiosks can be set up to run video loops, and banners can be printed and hung in central locations, all to get the message out simultaneously to patients and medical staff.
The Evidence section of the VTE QI RR references a key benchmark study, the DVT‐Free Prospective Registry.9 This study reported that at 183 sites in North America and Europe, more than twice as many medical patients as surgical patients failed to receive prophylaxis. The Evidence section includes the 7th American College of Chest Physicians Consensus Conference on Antithrombotic and Thrombolytic Therapy and also highlights 3 randomized placebo‐controlled clinical trials (MEDENOX 1999, ARTEMIS 2003, and PREVENT 2004) that reported significant reductions in the risk of VTE (50%-60%) from pharmacologic prophylaxis in moderate‐risk medical inpatients.13-15 Review of these data helps determine which patient population to study first, which prophylaxis options a hospital could appropriately deploy, and the expected magnitude of the effect. Because the literature has already been narrowed and is kept current, hospitalists can save time in answering a range of questions, from the most commonly agreed‐on factors for stratifying risk to which populations require alternative interventions.
The Experience section references the first clinical trial demonstrating improved patient outcomes from a quality improvement initiative aimed at improving utilization of VTE prophylaxis.10 At the large teaching hospital where the electronic alerts were studied, a preexisting wealth of educational information on the hospital Web site, in the form of multiple seminars and lectures on VTE prophylaxis by opinion leaders and international experts, had little impact on practice. For this reason, the investigators tested a point‐of‐care intervention, the computer alert, as a way to change physician behavior. Clinicians prompted by an electronic alert to consider DVT prophylaxis for at‐risk patients nearly doubled the rate of pharmacologic prophylaxis and reduced the incidence of DVT or pulmonary embolism (PE) by 41%. This study suggests that a change introduced to the clinical workflow can improve evidence‐based VTE prophylaxis and also can reduce the incidence of VTE in acutely ill hospitalized patients.
We believe that if hospitalists use the current evidence and experience assembled in the VTE QI RR, they could develop and lead a systematic approach to improving utilization of VTE prophylaxis. Although there is no gold‐standard method for integrating VTE risk assessment into clinical workflow, the VTE QI RR presents key lessons from both the literature and real‐world experience. The crucial take‐home message is that hospitalists can facilitate implementation of VTE risk assessments if they stress simplicity (ie, the sick, old, surgery benefit), link the risk assessment to a menu of evidence‐based prophylaxis options, and require assessment of VTE risk as part of a regular routine (on admission and at regular intervals). Although many hospitals do not yet have computerized entry of physician orders, the simple 4‐point VTE risk assessment described by Kucher et al might be applied in other hospitals.10 The 4‐point system would identify the patients at highest risk, a reasonable starting point for a QI initiative. Whatever the model (CPOE alerts for very high‐risk patients, CPOE‐forced VTE risk assessments, nursing assessments, or paper‐based order sets), regular VTE risk assessment can be incorporated into the daily routine of hospital care.
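The logic of a point-based screen of this kind is simple enough to sketch. The fragment below is an illustrative sketch only: the risk factors, point weights, and alert threshold are hypothetical placeholders standing in for whatever a local team agrees on, not the published Kucher instrument or any SHM-endorsed tool.

```python
# Hypothetical point-based VTE risk screen. All factors, weights, and the
# threshold below are illustrative assumptions, not a validated instrument.
RISK_POINTS = {
    "cancer": 3,
    "prior_vte": 3,
    "hypercoagulability": 3,
    "major_surgery": 2,
    "advanced_age": 1,
    "obesity": 1,
    "bed_rest": 1,
}

ALERT_THRESHOLD = 4  # illustrative cutoff for a "high risk" alert


def vte_risk_score(risk_factors):
    """Sum the points for each documented risk factor."""
    return sum(RISK_POINTS.get(f, 0) for f in risk_factors)


def screen_patient(risk_factors, on_prophylaxis):
    """Return an alert string for a high-risk patient not on prophylaxis."""
    score = vte_risk_score(risk_factors)
    if score >= ALERT_THRESHOLD and not on_prophylaxis:
        return f"ALERT: VTE risk score {score}; review prophylaxis options"
    return None
```

The same scoring table could back a CPOE alert, a nursing assessment form, or a paper order set; only the delivery mechanism changes.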
The QI workbook sequences the steps of a multidisciplinary improvement team and prompts users to set specific goals, collect practical metrics, and conduct plan‐do‐study‐act (PDSA) cycles of learning and action (Figure 4). Hospitalists and other team members can use the information in the workbook to estimate the prevalence of use of the appropriate VTE prophylaxis and the incidence of hospital‐acquired VTE at their medical centers, develop a suitable VTE risk assessment model, and plan interventions. Starting with all patients admitted to one nurse on one unit, then expanding to an entire nursing unit, an improvement team could implement rapid PDSA cycles to iron out the wrinkles of a risk assessment protocol. After demonstrating a measurable benefit for the patients at highest risk, the team would then be expected to capture more patients at risk for VTE by modifying the risk assessment protocol to identify moderate‐risk patients (hospitalized patients with one risk factor), as in the MEDENOX, ARTEMIS, and PREVENT clinical trials. Within the first several months, the QI intervention could be expanded to more nursing units. An improvement report profiling a clinically important increase in the rate of appropriate VTE prophylaxis would advocate for additional local resources and projects.
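The core metric the workbook asks teams to track each PDSA cycle reduces to simple arithmetic: of the at-risk patients audited, what fraction received appropriate prophylaxis? The sketch below is an illustrative assumption about how a team might tabulate a cycle's audit, not content taken from the workbook itself.

```python
# Illustrative PDSA-cycle arithmetic (hypothetical, not from the QI workbook):
# each audited patient is recorded as (at_risk, received_appropriate_prophylaxis).
def prophylaxis_rate(audits):
    """Fraction of at-risk patients who received appropriate prophylaxis."""
    at_risk = [received for risk, received in audits if risk]
    if not at_risk:
        return None  # no at-risk patients were sampled this cycle
    return sum(at_risk) / len(at_risk)


# One hypothetical cycle: 5 at-risk patients audited, 3 of them prophylaxed.
cycle_1 = [(True, True), (True, False), (True, True), (False, False),
           (True, True), (True, False)]
```

Plotting this rate cycle by cycle gives the run chart an improvement report would present when advocating for additional local resources.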
As questions arise in assembling an improvement team, setting useful aims and metrics, choosing interventions, implementing and studying change, or collecting performance data, hospitalists can review answers to questions already posted and post their own questions in the Ask the Expert area. For example, one user asked whether there was a standard risk assessment tool for identifying patients at high risk of VTE. Another asked about the use of unfractionated heparin as a low‐cost alternative to low‐molecular‐weight heparin. Both these questions were answered within 24 hours by the content editor of the VTE QI RR and, for one question, also by 2 pharmacists and an international expert in VTE.
As other hospitalists begin de novo efforts of their own, success stories and strategies posted in the online forums of the VTE QI RR will be an evolving resource for basic know‐how and innovation.
Suggestions from a community of resource room users will be solicited, evaluated, and incorporated into the QI RR in order to improve its educational value and utility. The curricula could also be adapted or refined by others with an interest in systems‐based care or practice‐based learning, such as directors of residency training programs.
CONCLUSIONS
The QI RRs bring QI theory and practice to the hospitalist, when and wherever it is wanted, minimizing time away from patient care. The workbook links theory to practice and can be used to launch, sustain, and document a local VTE‐specific QI initiative. A range of experience is accommodated. Content is provided in a way that enables users to apply and adapt it immediately to their local context: they can access and download the subset of tools that best meets their needs. For practicing hospitalists, this QI resource offers an opportunity to bridge the training gap in systems‐based hospital care and should increase the quality of, quantity of, and support for opportunities to lead successful QI projects.
The Accreditation Council for Graduate Medical Education (ACGME) now requires education in health care systems, a requirement not previously mandated for traditional medical residency programs.17 Because the resource rooms should increase the number of hospitalists competently leading local efforts that achieve measurable gains in hospital outcomes, the wider potential constituency also includes residency program directors, internal medicine residents, physician assistants and nurse‐practitioners, nurses, hospital quality officers, and hospital medicine practice leaders.
Further research is needed to determine the clinical impact of the VTE QI workbook on outcomes for hospitalized patients. The effectiveness of such an educational method should be evaluated, at least in part, by documenting changes in clinically important process and outcome measures, in this case those specific to hospital‐acquired VTE. An impact assessment will also be needed to determine whether the curricula meet the strategic educational goals of the Society of Hospital Medicine, and further investigation will examine whether this resource can help residency training programs achieve ACGME goals for practice‐based learning and systems‐based care.
- Society of Hospital Medicine. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Quality_Improvement_Resource_Rooms
- Physician practices in the prevention of venous thromboembolism. Arch Intern Med. 1991;151:933-938.
- Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human. Washington, DC: National Academy Press; 2000.
- Institute of Medicine. Available at: http://www.iom.edu/CMS/3718.aspx
- Shojania KG, Duncan BW, McDonald KM, Wachter RM, eds. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Agency for Healthcare Research and Quality; Publication 01-E058; 2001.
- Joint Commission on the Accreditation of Health Care Organizations. Public policy initiatives. Available at: http://www.jcaho.org/about+us/public+policy+initiatives/pay_for_performance.htm
- Curriculum Development for Medical Education: A Six-Step Approach. Baltimore, Md: Johns Hopkins University Press; 1998.
- DVT FREE Steering Committee. A prospective registry of 5,451 patients with ultrasound-confirmed deep vein thrombosis. Am J Cardiol. 2004;93:259.
- Electronic alerts to prevent venous thromboembolism among hospitalized patients. N Engl J Med. 2005;352:969.
- Teaching the Case Method. 3rd ed. Cambridge, Mass: Harvard Business School.
- American College of Physicians. ACP Journal Club. Available at: http://www.acpjc.org/?hp
- MEDENOX trial. N Engl J Med. 1999;341:793-800.
- Fondaparinux versus placebo for the prevention of VTE in acutely ill medical patients (ARTEMIS). J Thromb Haemost. 2003;1(suppl 1):2046.
- PREVENT Medical Thromboprophylaxis Study Group. Circulation. 2004;110:874-879.
- Comparing the costs, risks and benefits of competing strategies for the primary prevention of VTE. Circulation. 2004;110:IV25-IV32.
- Accreditation Council for Graduate Medical Education. Available at: http://www.acgme.org/acWebsite/programDir/pd_index.asp
Copyright © 2006 Society of Hospital Medicine
First-Ever SHM Leadership Academy: A Rousing Success
What would prompt someone to exclaim “fantastic, inspirational, and motivational,” and trigger adults to hug each other when it is time to say goodbye? The first-ever SHM Leadership Academy welcomed 110 hospitalist leaders to the Westin La Paloma Resort in Tucson, AZ on January 10–13, 2005. A resounding success, the Leadership Academy offered instruction in leading change, communicating effectively, handling conflict and negotiation, strategic planning, and interpreting hospital business drivers. Two years in the making, this course combined an outstanding national faculty with small group learning exercises to begin the process of training hospitalists who will lead important initiatives as we shape the hospital of the future.
At the 2003 SHM Annual Meeting in San Diego, a standing-room crowd of about 200 hospitalists at the Leadership Forum expressed their need for advanced leadership training. Responding to obvious demand, SHM developed a successful, sold-out 1-day Leadership Pre-Course held in New Orleans at the 2004 SHM Annual Meeting. Building on this pre-course and again reacting to the requests of SHM members, Co-Directors Russell Holman and Mark Williams designed the Leadership Academy to provide more in-depth training over 4 days. Assisted by Tina Budnitz, SHM Senior Advisor for Planning and Development, the course was developed to address the leadership training needs of hospitalists. One participant's comment captures its success: “Even with 18 years of clinical/administrative experience as well as an MBA, this course was a learning experience and I gained and reinforced critical areas of thinking and actions.”
Credit for this success deservedly should be attributed to the outstanding faculty. SHM’s CEO Larry Wellikson led the first day, eloquently delineating the leadership challenges in hospital medicine. The audience appreciated how hospital medicine is evolving rapidly, still defining itself, and how hospitalists will be developing metrics for success. The remainder of the first day allowed participants to evaluate their own strengths and assess how their unique styles impact interactions with others. Using the Strength Deployment Inventory®, David Javitch, PhD explored how “reds,” “greens,” and “blues” approach situations and communicate with their colleagues. Javitch, an organizational psychologist from Harvard, demonstrated how individual styles of communication and interaction influence success at management and leadership. SHM Member Eric Howell, MD moderated a discussion featuring a movie capturing common hospital-based examples of conflict and led participants through techniques for conflict resolution and negotiation.
On the second day, Michael Guthrie, MD, MBA identified business drivers for hospital survival and success. Guthrie currently serves as a senior executive for a large national health alliance and has experience as a health system CEO, medical director, and consultant on performance improvement. Guthrie finished the morning by helping attendees interpret hospital performance reports and associated metrics and determine how such measures should guide leadership planning and decision making.
The next day was highlighted by sessions led by Jack Silversin, DMD, DrPH from Harvard, using table exercises and real world examples to demonstrate how to lead change. A nationally recognized expert in change management and co-author of “Leading Physicians through Change: How to Achieve and Sustain Results,” Dr. Silversin actively stimulated attendees to appraise their situations at home. He showed participants how to develop shared organizational vision, strengthen leadership, and accelerate implementation of change. Afterwards, Holman and Williams coordinated a series of sessions on strategic planning. They used multiple examples and exercises to aid attendees in developing vision and mission statements, as well as “SMART” goals.
The final day focused on communication. An experienced educator, Kathleen Miner, PhD, MPH, MEd, reviewed communication theory and how it applies to our everyday conversations and interactions. Miner, an Associate Dean for Public Health at Emory University, brought decades of experience to her presentation. The course ended with Holman recapping how to use what we learned to achieve success as a leader.
Overall, the course was structured to facilitate interaction and small group exercises. The interactive sessions provided opportunities for participants to apply concepts.
Use of facilitators greatly augmented the impact of this training. Participants in the course sat ten to a table, and each table was led by an experienced hospitalist leader trained to be a facilitator. We were extraordinarily fortunate to have leaders in hospital medicine as facilitators including: Mary Jo Gorman, Bill Atchley, Pat Cawley, Lisa Kettering, Alpesh Amin, Ron Greeno, Burke Kealey, Eric Siegal, Stacy Goldsholl, and Eric Howell.
The impact of the meeting was powerfully described by a facilitator, “I’ve never before experienced such sustained energy and enthusiasm at a meeting. People literally spent hours after the didactic sessions talking, sharing ideas, and commiserating. Speaks to the pent-up need for this, and the effectiveness of the curriculum in galvanizing the group.”
No meeting can be such a success without tremendous support from SHM staff. Angela Musial and Erica Pearson deserve our sincere thanks for handling all the logistical issues and guaranteeing a terrific time for everyone who attended. They ensured that everything worked without a hitch including two wonderful receptions, which fostered networking and opportunities to share challenges and success stories.
The Society of Hospital Medicine will hold another Leadership Academy this Fall: September 12–15 in Vail, Colorado. The learning objectives for the Leadership Academy highlight the skills hospitalists can gain by attending.
- Evaluate personal leadership strengths and weaknesses and apply them to everyday leadership and management challenges
- Effectively advocate the value of their Hospital Medicine program
- Predict and plan for the near-term challenges affecting the viability of their Hospital Medicine program
- Improve patient outcomes through successful planning, allocation of resources, collaboration, teamwork, and execution
- Create and execute a communication strategy for all key constituencies
- Interpret key hospital drivers
- Examine how hospital performance metrics are derived and how hospital medicine practices can influence and impact these metrics
- Implement methods of effective change through leadership, shared vision, and managing the organizational culture
- Utilize strategic planning to define a vision for their program, prioritize efforts, and achieve designated goals
Registration will again be limited to 100 hospitalist leaders, and we expect it to fill quickly. The first Leadership Academy was sold out months before it was held, and after the rousing success of the January meeting in Tucson, interest in the September 2005 Leadership Academy in Vail is equally strong. If you are interested in attending, registration information can be found on page 19, at the SHM Web site at www.hospitalmedicine.org, or by calling SHM at 800-843-3360. We look forward to seeing you there.
What would prompt someone to exclaim “fantastic, inspirational, and motivational,” and trigger adults to hug each other when it is time to say goodbye? The first-ever SHM Leadership Academy welcomed 110 hospitalist leaders to the Westin La Paloma Resort in Tucson, AZ on January 10–13, 2005. A resounding success, the Leadership Academy offered instruction in leading change, communicating effectively, handling conflict and negotiation, strategic planning, and interpreting hospital business drivers. Two years in the making, this course combined an outstanding national faculty with small group learning exercises to begin the process of training hospitalists who will lead important initiatives as we shape the hospital of the future.
At the 2003 SHM Annual Meeting in San Diego, a standing room crowd of about 200 hospitalists at the Leadership Forum expressed their need for advanced leadership training. Responding to obvious demand, SHM developed a successful, soldout 1-day Leadership Pre-Course held in New Orleans at the 2004 SHM Annual Meeting. Building on this pre-course and again reacting to the requests of SHM members, Co-Directors Russell Holman and Mark Williams designed the Leadership Academy to provide more in-depth training over 4 days. Assisted by Tina Budnitz, SHM Senior Advisor for Planning and Development, this course was developed to address the leadership training needs of hospitalists. As an example of its resounding success, one participant made the following comment. “Even with 18 years of clinical/administrative experience as well as an MBA, this course was a learning experience and I gained and reinforced critical areas of thinking and actions.”
Credit for this success deservedly should be attributed to the outstanding faculty. SHM’s CEO Larry Wellikson led the first day, eloquently delineating the leadership challenges in hospital medicine. The audience appreciated how hospital medicine is evolving rapidly, still defining itself, and how hospitalists will be developing metrics for success. The remainder of the first day allowed participants to evaluate their own strengths and assess how their unique styles impact interactions with others. Using the Strength Deployment Inventory®, David Javitch, PhD explored how “reds,” “greens,” and “blues” approach situations and communicate with their colleagues. Javitch, an organizational psychologist from Harvard, demonstrated how individual styles of communication and interaction influence success at management and leadership. SHM Member Eric Howell, MD moderated a discussion featuring a movie capturing common hospital-based examples of conflict and led participants through techniques for conflict resolution and negotiation.
On the second day, Michael Guthrie, MD, MBA identified business drivers for hospital survival and success. Guthrie currently serves as a senior executive for a large national health alliance and has experience as a health system CEO, medical director, and consultant on performance improvement. Guthrie finished the morning by helping attendees interpret hospital performance reports and associated metrics and determine how such measures should guide leadership planning and decision making.
The next day was highlighted by sessions led by Jack Silversin, DMD, DrPH from Harvard, using table exercises and real world examples to demonstrate how to lead change. A nationally recognized expert in change management and co-author of “Leading Physicians through Change: How to Achieve and Sustain Results,” Dr. Silversin actively stimulated attendees to appraise their situations at home. He showed participants how to develop shared organizational vision, strengthen leadership, and accelerate implementation of change. Afterwards, Holman and Williams coordinated a series of sessions on strategic planning. They used multiple examples and exercises to aid attendees in developing vision and mission statements, as well as “SMART” goals.
The final day focused on communication. An experienced educator, Kathleen Miner, PhD, MPH, MEd, reviewed communication theory and how it applies to our everyday conversations and interactions. Miner, an Associate Dean for Public Health at Emory University, brought decades of experience to her presentation. The course ended with Holman recapping how to use what we learned to achieve success as a leader.
Overall, the course was structured to facilitate interaction and small group exercises. The interactive sessions provided opportunities for participants to apply concepts.
Use of facilitators greatly augmented the impact of this training. Participants in the course sat ten to a table, and each table was led by an experienced hospitalist leader trained to be a facilitator. We were extraordinarily fortunate to have leaders in hospital medicine as facilitators including: Mary Jo Gorman, Bill Atchley, Pat Cawley, Lisa Kettering, Alpesh Amin, Ron Greeno, Burke Kealey, Eric Siegal, Stacy Goldsholl, and Eric Howell.
The impact of the meeting was powerfully described by a facilitator, “I’ve never before experienced such sustained energy and enthusiasm at a meeting. People literally spent hours after the didactic sessions talking, sharing ideas, and commiserating. Speaks to the pent-up need for this, and the effectiveness of the curriculum in galvanizing the group.”
No meeting can be such a success without tremendous support from SHM staff. Angela Musial and Erica Pearson deserve our sincere thanks for handling all the logistical issues and guaranteeing a terrific time for everyone who attended. They ensured that everything worked without a hitch including two wonderful receptions, which fostered networking and opportunities to share challenges and success stories.
The Society of Hospital Medicine will hold another Leadership Academy this Fall: September 12–15 in Vail, Colorado. The learning objectives for the Leadership Academy highlight the skills hospitalists can gain by attending.
- Evaluate personal leadership strengths and weaknesses and apply them to everyday leadership and management challenges
- Effectively advocate the value of their Hospital Medicine program
- Predict and plan for the near-term challenges affecting the viability of their Hospital Medicine program
- Improve patient outcomes through successful planning, allocation of resources, collaboration, teamwork, and execution
- Create and execute a communication strategy for all key constituencies
- Interpret key hospital drivers
- Examine how hospital performance metrics are derived and how hospital medicine practices can influence and impact these metrics
- Implement methods of effective change through leadership, shared vision, and managing the organizational culture
- Utilize strategic planning to define a vision for their program, prioritize efforts, and achieve designated goals
Registration will again be limited to 100 hospitalist leaders and we expect this to fill quickly. The first Leadership Academy was sold out months before it was held, and interest in the September 2005 Leadership Academy in Vail is equally as strong after the rousing success of the January meeting in Tucson. If you are interested in attending, registration information can be found on page 19, at the SHM Web site at www.hospitalmedicine.org, or by calling SHM at 800-843-3360. We look forward to seeing you there.
What would prompt someone to exclaim “fantastic, inspirational, and motivational,” and trigger adults to hug each other when it is time to say goodbye? The first-ever SHM Leadership Academy welcomed 110 hospitalist leaders to the Westin La Paloma Resort in Tucson, AZ on January 10–13, 2005. A resounding success, the Leadership Academy offered instruction in leading change, communicating effectively, handling conflict and negotiation, strategic planning, and interpreting hospital business drivers. Two years in the making, this course combined an outstanding national faculty with small group learning exercises to begin the process of training hospitalists who will lead important initiatives as we shape the hospital of the future.
At the 2003 SHM Annual Meeting in San Diego, a standing-room crowd of about 200 hospitalists at the Leadership Forum expressed their need for advanced leadership training. Responding to obvious demand, SHM developed a successful, sold-out 1-day Leadership Pre-Course held in New Orleans at the 2004 SHM Annual Meeting. Building on this pre-course and again reacting to the requests of SHM members, Co-Directors Russell Holman and Mark Williams designed the Leadership Academy to provide more in-depth training over 4 days. Assisted by Tina Budnitz, SHM Senior Advisor for Planning and Development, this course was developed to address the leadership training needs of hospitalists. One participant's comment exemplifies its success: "Even with 18 years of clinical/administrative experience as well as an MBA, this course was a learning experience and I gained and reinforced critical areas of thinking and actions."
Credit for this success belongs deservedly to the outstanding faculty. SHM's CEO Larry Wellikson led the first day, eloquently delineating the leadership challenges in hospital medicine. The audience appreciated how hospital medicine is evolving rapidly, still defining itself, and how hospitalists will be developing metrics for success. The remainder of the first day allowed participants to evaluate their own strengths and assess how their unique styles affect interactions with others. Using the Strength Deployment Inventory®, David Javitch, PhD explored how "reds," "greens," and "blues" approach situations and communicate with their colleagues. Javitch, an organizational psychologist from Harvard, demonstrated how individual styles of communication and interaction influence success at management and leadership. SHM Member Eric Howell, MD moderated a discussion featuring a movie capturing common hospital-based examples of conflict and led participants through techniques for conflict resolution and negotiation.
On the second day, Michael Guthrie, MD, MBA identified business drivers for hospital survival and success. Guthrie currently serves as a senior executive for a large national health alliance and has experience as a health system CEO, medical director, and consultant on performance improvement. Guthrie finished the morning by helping attendees interpret hospital performance reports and associated metrics and determine how such measures should guide leadership planning and decision making.
The next day was highlighted by sessions led by Jack Silversin, DMD, DrPH from Harvard, using table exercises and real world examples to demonstrate how to lead change. A nationally recognized expert in change management and co-author of “Leading Physicians through Change: How to Achieve and Sustain Results,” Dr. Silversin actively stimulated attendees to appraise their situations at home. He showed participants how to develop shared organizational vision, strengthen leadership, and accelerate implementation of change. Afterwards, Holman and Williams coordinated a series of sessions on strategic planning. They used multiple examples and exercises to aid attendees in developing vision and mission statements, as well as “SMART” goals.
The final day focused on communication. An experienced educator, Kathleen Miner, PhD, MPH, MEd, reviewed communication theory and how it applies to our everyday conversations and interactions. Miner, an Associate Dean for Public Health at Emory University, brought decades of experience to her presentation. The course ended with Holman recapping how to use what we learned to achieve success as a leader.
Overall, the course was structured to facilitate interaction and small group exercises. The interactive sessions provided opportunities for participants to apply concepts.
Use of facilitators greatly augmented the impact of this training. Participants in the course sat ten to a table, and each table was led by an experienced hospitalist leader trained to be a facilitator. We were extraordinarily fortunate to have leaders in hospital medicine as facilitators including: Mary Jo Gorman, Bill Atchley, Pat Cawley, Lisa Kettering, Alpesh Amin, Ron Greeno, Burke Kealey, Eric Siegal, Stacy Goldsholl, and Eric Howell.
The impact of the meeting was powerfully described by a facilitator, “I’ve never before experienced such sustained energy and enthusiasm at a meeting. People literally spent hours after the didactic sessions talking, sharing ideas, and commiserating. Speaks to the pent-up need for this, and the effectiveness of the curriculum in galvanizing the group.”
No meeting can be such a success without tremendous support from SHM staff. Angela Musial and Erica Pearson deserve our sincere thanks for handling all the logistical issues and guaranteeing a terrific time for everyone who attended. They ensured that everything worked without a hitch including two wonderful receptions, which fostered networking and opportunities to share challenges and success stories.
The Society of Hospital Medicine will hold another Leadership Academy this Fall: September 12–15 in Vail, Colorado. The learning objectives for the Leadership Academy highlight the skills hospitalists can gain by attending.
- Evaluate personal leadership strengths and weaknesses and apply them to everyday leadership and management challenges
- Effectively advocate the value of their Hospital Medicine program
- Predict and plan for the near-term challenges affecting the viability of their Hospital Medicine program
- Improve patient outcomes through successful planning, allocation of resources, collaboration, teamwork, and execution
- Create and execute a communication strategy for all key constituencies
- Interpret key hospital drivers
- Examine how hospital performance metrics are derived and how hospital medicine practices can influence and impact these metrics
- Implement methods of effective change through leadership, shared vision, and managing the organizational culture
- Utilize strategic planning to define a vision for their program, prioritize efforts, and achieve designated goals
Registration will again be limited to 100 hospitalist leaders, and we expect this to fill quickly. The first Leadership Academy was sold out months before it was held, and interest in the September 2005 Leadership Academy in Vail is equally strong after the rousing success of the January meeting in Tucson. If you are interested in attending, registration information can be found on page 19, at the SHM Web site at www.hospitalmedicine.org, or by calling SHM at 800-843-3360. We look forward to seeing you there.