Hospital cardiac arrest resuscitation practice in the United States: A nationally representative survey

Benjamin S. Abella, MD, MPhil

An estimated 200,000 adult patients suffer cardiac arrest in US hospitals each year, of which <20% survive to hospital discharge.[1, 2] Patient survival from in‐hospital cardiac arrest (IHCA), however, varies widely across hospitals and may be partly attributable to differences in hospital practices.[3, 4, 5] Although there are data to support specific patient‐level practices in the hospital, such as delivery of electrical shock for ventricular fibrillation within 2 minutes of onset of the lethal rhythm,[6] little is known about in‐hospital systems‐level factors. As with patient‐level practices, some organizational and systems‐level practices are supported by international consensus and guideline recommendations.[7, 8] However, the adoption of these practices is poorly understood. We therefore sought to better understand current US hospital practices with regard to IHCA and resuscitation, with the hope of identifying potential targets for improving quality and outcomes.

METHODS

We conducted a nationally representative mail survey between May 2011 and November 2011, targeting a stratified random sample of 1000 hospitals. We utilized the US Acute‐Care Hospitals (FY2008) database from the American Hospital Association to determine the total population of 3809 community hospitals (ie, nonfederal government, nonpsychiatric, and non‐long‐term care hospitals).[9] This included general medical and surgical, surgical, cancer, heart, orthopedic, and children's hospitals. These hospitals were stratified into tertiles by annual inpatient days and by teaching status (major, minor, nonteaching), from which our sample was randomly selected (Table 1). We identified each hospital's cardiopulmonary resuscitation (CPR) committee (sometimes known as the code committee, code blue committee, or cardiac arrest committee) chair or chief medical/quality officer, to whom the paper‐based survey was addressed, with instructions to forward the survey to the most appropriate person if the addressee was not best suited to complete it. This study was evaluated by the University of Chicago institutional review board and deemed exempt from further review.
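The stratified sampling design described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual code; the equal per‐stratum allocation, field names, and toy population are assumptions (the study's actual per‐stratum counts appear in Table 1).

```python
import random

def stratified_sample(hospitals, per_stratum, seed=0):
    """Draw a fixed-size random sample from each (teaching, tertile) stratum.

    hospitals: list of dicts with hypothetical 'teaching' and 'tertile' keys.
    """
    rng = random.Random(seed)
    strata = {}
    for h in hospitals:
        strata.setdefault((h["teaching"], h["tertile"]), []).append(h)
    sample = []
    for _, members in sorted(strata.items()):
        # If a stratum is smaller than the target, take all of it.
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

# Toy population: 50 hospitals in each of 9 strata (3 teaching levels x 3 tertiles).
population = [
    {"id": f"{t}-{q}-{i}", "teaching": t, "tertile": q}
    for t in ("major", "minor", "nonteaching")
    for q in (1, 2, 3)
    for i in range(50)
]
picked = stratified_sample(population, per_stratum=10)
print(len(picked))  # 9 strata x 10 hospitals = 90
```

Sampling within strata rather than from the pooled population guarantees representation of small cells (eg, major teaching hospitals with few inpatient days), which simple random sampling would not.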

Figure 1
Hospital responders to in‐hospital resuscitations by institution type and level of participation. Bars represent the percent of hospitals reporting usual resuscitation responders in their hospitals, stratified by the teaching status of the hospital. Each bar is further subdivided by the likelihood of that provider to lead the resuscitation.

Survey

The survey content was developed by the study investigators and iteratively adapted by consensus and beta testing to require approximately 10 minutes to complete. Questions were edited and formatted by the University of Chicago Survey Lab (Chicago, IL) to be more precise and generalizable. Surveys were mailed in May 2011 and resent twice to nonresponders. A $10 incentive was included in the second mailing. When more than 1 response from a hospital was received, the more complete survey was used, or if equally complete, the responses were combined. All printing, mailing, receipt control, and data entry were performed by the University of Chicago Survey Lab, and data entry was double‐keyed to ensure accuracy.

Response rate was calculated using the American Association for Public Opinion Research standard response rate formula.[10] It was assumed that nonresponding cases were ineligible at the same rate as cases for which eligibility was determined. A survey was considered complete if at least 75% of individual questions contained a valid response, partially complete if at least 40% but less than 75% of questions contained a valid response, and a nonresponse if less than 40% was completed. Nonresponses were excluded from the analysis.
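A minimal sketch of this bookkeeping: the completeness thresholds come from the text, and the eligibility adjustment follows the AAPOR convention of assuming unresolved cases are eligible at the rate observed among resolved cases. The eligibility counts below are hypothetical, chosen only to illustrate the arithmetic (the paper does not report its eligibility breakdown).

```python
def classify(fraction_answered):
    """Completeness categories from the text: >=75% complete, 40-75% partial."""
    if fraction_answered >= 0.75:
        return "complete"
    if fraction_answered >= 0.40:
        return "partial"
    return "nonresponse"

def response_rate(completes, partials, known_eligible, known_ineligible, unresolved):
    """AAPOR-style rate: unresolved cases assumed eligible at the observed rate e."""
    e = known_eligible / (known_eligible + known_ineligible)
    return (completes + partials) / (known_eligible + e * unresolved)

print(classify(0.80))  # complete
print(classify(0.50))  # partial
# Hypothetical eligibility counts, for illustration only:
rr = response_rate(completes=425, partials=14,
                   known_eligible=990, known_ineligible=10, unresolved=0)
print(f"{rr:.0%}")  # 44%
```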

Statistical Analysis

Analyses were performed using a statistical software application (Stata version 11.0; StataCorp, College Station, TX). Descriptive statistics were calculated and presented as number (%) or median (interquartile range). A χ² statistic was used to assess bias in response rate. We determined a priori 2 indicators of resource allocation (availability of a CPR committee and dedicated personnel for resuscitation quality improvement) and tested their association with quality improvement initiatives, using logistic regression to adjust for hospital teaching status and number of admissions as potential confounders. All tests of significance used a 2‐sided P<0.05.
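The response‐bias check can be illustrated with a Pearson chi‐square statistic over respondent/nonrespondent counts per stratum. The counts below are the per‐tertile column totals as printed in Table 1; 5.99 is the standard critical value for df=2 at alpha=0.05. This is a sketch of the idea, not a reproduction of the paper's Stata analysis.

```python
def chi_square(observed_rows):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(r) for r in observed_rows]
    col_totals = [sum(c) for c in zip(*observed_rows)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed_rows):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Respondents vs nonrespondents by inpatient-day tertile (Table 1 column totals).
table = [[156, 335 - 156], [143, 335 - 143], [145, 336 - 145]]
stat = chi_square(table)
# df = (3 - 1) * (2 - 1) = 2; the 0.05 critical value is 5.99.
print(stat < 5.99)  # True: no evidence that response rate varies by tertile
```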

RESULTS

Responses were received from 439 hospitals (425 complete and 14 partially complete), yielding a response rate of 44%. One returned survey had its subject ID removed, so the responding hospital could not be identified, and it was excluded from all analyses. Hospital demographics were similar between responders and nonresponders (P=0.50) (Table 1). Respondents included chief medical/quality officers (n=143 [33%]), chairs of CPR committees (n=64 [15%]), members of CPR committees (n=29 [7%]), chiefs of staff (n=33 [8%]), resuscitation officers/nurses (n=27 [6%]), chief nursing officers (n=13 [3%]), and others (n=131 [30%]).

Stratified Response Rates by Hospital Volume and Teaching Status

| Teaching Status | <17,695 Inpatient Days | 17,695-52,500 | >52,500 | Total |
| Major | 1/2 (50) | 1/8 (13) | 40/82 (49) | 42/92 (46) |
| Minor | 13/39 (33) | 40/89 (45) | 62/133 (47) | 115/261 (44) |
| Nonteaching | 141/293 (48) | 100/236 (42) | 40/118 (34) | 281/647 (43) |
| Total | 156/335 (47) | 143/335 (43) | 145/336 (43) | 438/1,000 (44) |

NOTE: Results are shown as number of respondents/total sampled (%). Columns are tertiles of annual inpatient days.

Table 2 summarizes structure, equipment, quality improvement, and pre‐ and postarrest practices across the hospitals. Of note, 77% of hospitals (n=334) reported having a predesignated, dedicated code team, and 66% (n=281) reported standardized defibrillator make and model throughout their hospital. However, less than one‐third of hospitals utilized any CPR assist technology (eg, CPR quality sensor or mechanical CPR device). The majority of hospitals reported having a rapid response team (RRT) (n=391 [91%]). Although a therapeutic hypothermia protocol for postarrest care was in place in over half of hospitals (n=252 [58%]), utilization of hypothermia for patients with return of spontaneous circulation was infrequent.

In‐hospital Resuscitation Structure and Practices

| Measure | Value | 2010 AHA Guidelines |
| Structure |  |  |
| Existing CPR committee | 270 (66) |  |
| CPR chair |  |  |
|   Physician only | 129 (48) |  |
|   Nurse only | 90 (34) |  |
|   Nurse/physician co-chair | 31 (12) |  |
|   Other | 17 (6) |  |
| Clinical specialty of chair^a |  |  |
|   Pulmonary/critical care | 79 (35) |  |
|   Emergency medicine | 71 (31) |  |
|   Anesthesia/critical care | 43 (19) |  |
|   Cardiology | 38 (17) |  |
|   Other | 32 (14) |  |
|   Hospital medicine | 23 (10) |  |
| Predetermined cardiac arrest team structure | 334 (77) |  |
| Notifications of responders^a |  |  |
|   Hospital-wide PA system | 406 (93) |  |
|   Pager/calls to individuals | 230 (53) |  |
|   Local alarm | 49 (11) |  |
| Equipment |  |  |
| AEDs used as primary defibrillator by location |  |  |
|   High-acuity inpatient areas | 69 (16) |  |
|   Low-acuity inpatient areas | 109 (26) |  |
|   Outpatient areas | 206 (51) | Class IIb, LOE C^b |
|   Public areas | 263 (78) | Class IIb, LOE C^b |
| Defibrillator throughout hospital |  |  |
|   Same brand and model | 281 (66) |  |
|   Same brand, different models | 93 (22) |  |
|   Different brands | 54 (13) |  |
| CPR assist technology used^a |  |  |
|   None | 291 (70) |  |
|   Capnography | 106 (25) | Class IIb, LOE C^b |
|   Mechanical CPR | 25 (6) | Class IIb, LOE B/C^b,c |
|   Feedback device | 17 (4) | Class IIa, LOE B |
| Quality improvement |  |  |
| IHCA tracked | 336 (82) | Supported^b,d |
| Data reviewed |  | Supported^b,d |
|   Data not tracked/never reviewed | 85 (20) |  |
|   Intermittently | 53 (12) |  |
|   Routinely | 287 (68) |  |
| Routine cardiac arrest case reviews/debriefing | 149 (34) | Class IIa, LOE C |
| Dedicated staff to resuscitation QI | 196 (49) |  |
| Full-time equivalent staffing, median (IQR) | 0.5 (0.25-1.2) |  |
| Routine simulated resuscitation training | 268 (62) |  |
| Pre- and postarrest measures |  |  |
| Hospitals with RRT | 391 (91) | Class I, LOE C^b |
| Formal RRT-specific training |  |  |
|   Never | 50 (14) |  |
|   Once | 110 (30) |  |
|   Recurrent | 163 (45) |  |
| TH protocol/order set in place | 252 (58) |  |
| Percent of patients with ROSC receiving TH |  | Class IIb, LOE B^b |
|   <5% | 309 (74) |  |
|   5%-25% | 68 (16) |  |
|   26%-50% | 11 (3) |  |
|   51%-75% | 10 (2) |  |
|   >75% | 18 (4) |  |

NOTE: Results are shown as total (%) unless otherwise indicated. Percentages were adjusted by excluding missing responses. Abbreviations: AED, automatic external defibrillator; AHA, American Heart Association; CPR, cardiopulmonary resuscitation; IHCA, in‐hospital cardiac arrest; IQR, interquartile range; LOE, level of evidence; PA, public address; QI, quality improvement; ROSC, return of spontaneous circulation; RRT, rapid response team; TH, therapeutic hypothermia.
^a These categories are not mutually exclusive.
^b Recommended or supported in 2005 guidelines.
^c May be considered for use in specific settings by properly trained personnel.
^d Supported in the guidelines without official class recommendation.

Hospitals reported that routine responders to IHCA events included respiratory therapists (n=414 [95%]), critical care nurses (n=406 [93%]), floor nurses (n=396 [90%]), attending physicians (n=392 [89%]), physician trainees (n=162 [37%]), and pharmacists (n=210 [48%]). Figure 1 shows the distribution of responders and team leaders by hospital type. In nonteaching hospitals, attending physicians were likely to respond in 94% of cases (265/281) and to routinely lead the resuscitation in 84% (236/281), whereas in major teaching hospitals, attending physicians were likely to respond in only 71% (30/42) and to routinely lead in 19% (8/42).

Two‐thirds of the hospitals had a CPR committee (n=270 [66%]), and 196 (49%) had some staff time dedicated to resuscitation quality improvement. Hospitals with a specific committee dedicated to resuscitation and/or dedicated staff for resuscitation quality improvement were more likely to routinely track cardiac arrest data (odds ratio [OR]: 3.64, 95% confidence interval [CI]: 2.05‐6.47 and OR: 2.02, 95% CI: 1.16‐3.54, respectively) and review the data (OR: 2.67, 95% CI: 1.45‐4.92 and OR: 2.18, 95% CI: 1.22‐3.89, respectively), after adjusting for teaching status and hospital size. These hospitals were also more likely to engage in simulation training and debriefing (Table 3).
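The paper's odds ratios come from logistic regression adjusted for teaching status and hospital size; the unadjusted version of the same kind of association can be sketched from a 2x2 table. The cell counts below are hypothetical, for illustration only, and the Wald interval is a standard textbook construction rather than the study's method.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 table:
    a/b = outcome yes/no with the exposure, c/d = outcome yes/no without it."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: routine tracking vs not, among hospitals
# with (a, b) and without (c, d) a CPR committee.
or_, lo, hi = odds_ratio_ci(a=210, b=60, c=77, d=59)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR 2.68 (95% CI 1.72-4.18)
```

A CI that excludes 1.0, as here, corresponds to a statistically significant association at the 0.05 level.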

Correlation Between Resource Availability and Quality Improvement Practices

| Practice | CPR Committee, n=406 | Dedicated QI Staff, n=398 |
| IHCA tracking | 3.64 (2.05-6.47) | 2.02 (1.16-3.54) |
| Routinely review | 2.67 (1.45-4.92) | 2.18 (1.22-3.89) |
| Simulation training | 2.63 (1.66-4.18) | 1.89 (1.24-2.89) |
| Debriefing | 3.19 (1.89-5.36) | 2.14 (1.39-3.32) |

NOTE: Logistic regression adjusting for hospital size and teaching status was performed. All results are shown as odds ratio (95% confidence interval). Abbreviations: CPR, cardiopulmonary resuscitation; IHCA, in‐hospital cardiac arrest; QI, quality improvement.

Ninety percent (n=391) of respondents agreed that "there is room for improvement in resuscitation practice at my hospital," and 70% (n=302) agreed that improved resuscitation would translate into improved patient outcomes. Overall, 78% (n=338) cited at least 1 barrier to improved resuscitation quality, of which lack of adequate training (n=233 [54%]) and lack of an appropriate champion (n=230 [53%]) were the most common. In subgroup analysis, nonteaching hospitals were significantly more likely than their teaching counterparts to report the lack of a champion (P=0.001) (Figure 2). In addition, hospitals that did not report lack of a champion as a barrier showed significantly higher adherence than those that did across all the measures in Table 2 supported by the 2010 guidelines, with the exception of real‐time feedback (data not shown).

Figure 2
Barriers to resuscitation quality improvement by institution type. Bars represent the percent of responders reporting specific perceived barriers to resuscitation quality improvement at their hospital, stratified by the teaching status of the hospital.

DISCUSSION

In this nationally representative sample of hospitals, we found considerable variability in cardiac arrest and resuscitation structures and processes, suggesting potential areas to target for improvement. Some practices, including use of RRTs and defibrillator standardization, were fairly routine, whereas others, such as therapeutic hypothermia and CPR assist technology, were rarely utilized. Quality initiatives, such as data tracking and review, simulation training, and debriefing were variable.

Several factors likely contribute to the variable implementation of evidence‐based practices. Guidelines alone have been shown to have little impact on physician practice in general.[11] This is supported by the lack of correlation we found between the presence, absence, or strength of specific American Heart Association (AHA) emergency cardiovascular care treatment recommendations and the percent of hospitals reporting performing that measure. It is possible that other factors, such as a lack of familiarity or agreement with those guidelines, or the presence of external barriers, may be contributing.[12, 13] Specifically, the importance of a clinical champion was supported by our finding that hospitals reporting lack of a champion as a barrier were less likely to be adherent with guidelines. However, because the study did not directly test the impact of a champion, this finding should be interpreted with caution.

Some of the variability may also be related to the resource intensiveness of the practice. Routine simulation training and debriefing interventions, for example, are time intensive and require trained personnel to institute. That may explain the correlation we noted between these practices and the presence of a CPR committee and dedicated personnel. Dedicated personnel were rare in this study, with less than half of respondents reporting any dedicated staff and a median of 0.5 full‐time equivalents among those who did. This is in stark contrast to the routine use of resuscitation officers (primarily nurses dedicated to overseeing resuscitation practices and education at the hospital) in the United Kingdom.[14] Adoption of such a resuscitation officer model by US hospitals could improve the quality and intensity of resuscitation care.

Particularly surprising was the high rate of respondents (70%) reporting that they do not utilize any CPR assist technology. In the patient who does not have an arterial line, quantitative capnography is the best measure of cardiac output during cardiac arrest, yet only one‐quarter of hospitals reported using it, with no difference by hospital type or size. A recent summit of national resuscitation experts expounded on the AHA guidelines, suggesting that end‐tidal carbon dioxide should be used in all arrests to guide the quality of CPR, with a goal value of >20 mm Hg.[8] Similarly, CPR feedback devices carry an even higher level of evidence recommendation in the 2010 AHA guidelines than capnography, yet only 4% of hospitals reported utilizing them. Although introducing these CPR assist technologies into a hospital would require some effort on the part of hospital leadership, it is important to recognize the potential role such devices might play in the larger context of a resuscitation quality program to optimize clinical outcomes from IHCA.

Several differences were noted between hospitals based on teaching status. Although all hospitals were more likely to rely on physicians to lead resuscitations, nonteaching hospitals were more likely to report routine leadership by nurses and pharmacists. Nonteaching hospitals were also less likely to have a CPR committee, even after adjusting for hospital size. In addition, these hospitals were also more likely to report the lack of a clinical champion as a barrier to quality improvement.

There were several limitations to this study. First, this was a descriptive survey that was not tied to outcomes. As such, we are unable to draw conclusions about which practices correlate with decreased incidence of cardiac arrest and improved survival. Second, this was an optional survey with a somewhat limited response rate. Even though the characteristics of the nonresponding hospitals were similar to the responding hospitals, we cannot rule out the possibility that a selection bias was introduced, which would likely overestimate adherence to the guidelines. Self‐reported responses may have introduced additional errors. Finally, the short interval between the release of the 2010 guidelines and the administration of the first survey may have contributed to the variability in implementation of some practices, but many of the recommendations had been previously included in the 2005 guidelines.

We conclude that there is wide variability between hospitals and within practices for resuscitation care. Future work should seek to understand which practices are associated with improved patient outcomes and how best to implement these practices in a more uniform fashion.

Acknowledgements

The authors thank Nancy Hinckley, who championed the study; David Chearo, Christelle Marpaud, and Martha Van Haitsma of the University of Chicago Survey Lab for their assistance in formulating and distributing the survey; and JoAnne Resnic, Nicole Twu, and Frank Zadravecz for administrative support.

Disclosures: This study was supported by the Society of Hospital Medicine with a grant from Philips Healthcare (Andover, MA). Dr. Edelson is supported by a career development award from the National Heart, Lung, and Blood Institute (K23 HL097157). In addition, she has received research support and honoraria from Philips Healthcare (Andover, MA), research support from the American Heart Association (Dallas, TX) and Laerdal Medical (Stavanger, Norway), and an honorarium from Early Sense (Tel Aviv, Israel). Dr. Hunt has received research support from the Laerdal Foundation for Acute Medicine (Stavanger, Norway), the Hartwell Foundation (Memphis, TN), and the Arthur Vining Davis Foundation (Jacksonville, FL), and honoraria from the Kansas University Endowment (Kansas City, KS), JCCC (Overland Park, KS), and the UVA School of Medicine (Charlottesville, VA) and the European School of Management (Berlin, Germany). Dr. Mancini is supported in part by an Agency for Healthcare Research and Quality grant (R18HS020416). In addition, she has received research support from the American Heart Association (Dallas, TX) and Laerdal Medical (Stavanger, Norway), and honoraria from Sotera Wireless, Inc. (San Diego, CA). Dr. Abella has received research support from the National Institutes of Health (NIH), Medtronic Foundation (Minneapolis, MN), and Philips Healthcare (Andover, MA); has volunteered with the American Heart Association; and received honoraria from Heartsine (Belfast, Ireland), Velomedix (Menlo Park, CA), and Stryker (Kalamazoo, MI). Mr. Miller is employed by the Society of Hospital Medicine.

References
  1. Girotra S, Nallamothu BK, Spertus JA, Li Y, Krumholz HM, Chan PS. Trends in survival after in‐hospital cardiac arrest. N Engl J Med. 2012;367(20):1912-1920.
  2. Merchant RM, Yang L, Becker LB, et al. Incidence of treated cardiac arrest in hospitalized patients in the United States. Crit Care Med. 2011;39(11):2401-2406.
  3. Chan PS, Nichol G, Krumholz HM, et al. Racial differences in survival after in‐hospital cardiac arrest. JAMA. 2009;302(11):1195-1201.
  4. Chan PS, Nichol G, Krumholz HM, Spertus JA, Nallamothu BK. Hospital variation in time to defibrillation after in‐hospital cardiac arrest. Arch Intern Med. 2009;169(14):1265-1273.
  5. Goldberger ZD, Chan PS, Berg RA, et al. Duration of resuscitation efforts and survival after in‐hospital cardiac arrest: an observational study. Lancet. 2012;380(9852):1473-1481.
  6. Chan PS, Krumholz HM, Nichol G, Nallamothu BK. Delayed time to defibrillation after in‐hospital cardiac arrest. N Engl J Med. 2008;358(1):9-17.
  7. 2010 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care science. Circulation. 2010;122(18 suppl 3):S640-S946.
  8. Meaney PA, Bobrow BJ, Mancini ME, et al. Cardiopulmonary resuscitation quality: improving cardiac resuscitation outcomes both inside and outside the hospital: a consensus statement from the American Heart Association. Circulation. 2013;128(4):417-435.
  9. American Hospital Association. 2008 AHA annual survey. AHA data viewer: survey instruments. 2012. Available at: http://www.ahadataviewer.com/about/hospital‐database. Accessed October 11, 2013.
  10. The American Association for Public Opinion Research. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 7th ed. Deerfield, IL: AAPOR; 2011.
  11. Lomas J, Anderson GM, Domnick‐Pierre K, Vayda E, Enkin MW, Hannah WJ. Do practice guidelines guide practice? The effect of a consensus statement on the practice of physicians. N Engl J Med. 1989;321(19):1306-1311.
  12. Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282(15):1458-1465.
  13. Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003;362(9391):1225-1230.
  14. Gabbott D, Smith G, Mitchell S, et al. Cardiopulmonary resuscitation standards for clinical practice and training in the UK. Accid Emerg Nurs. 2005;13(3):171-179.
Journal of Hospital Medicine - 9(6), pages 353-357


Some of the variability may also be related to the resource intensiveness of the practice. Routine simulation training and debriefing interventions, for example, are time intensive and require trained personnel to institute. That may explain the correlation we noted between these practices and the presence of CPR committee and dedicated personnel. The use of dedicated personnel was rare in this study, with less than half of respondents reporting any dedicated staff and a median of 0.5 full‐time equivalents for those reporting positively. This is in stark contrast to the routine use of resuscitation officers (primarily nurses dedicated to overseeing resuscitation practices and education at the hospital) in the United Kingdom.[14] Such a resuscitation officer model adopted by US hospitals could improve the quality and intensity of resuscitation care approaches.

Particularly surprising was the high rate of respondents (70%) reporting that they do not utilize any CPR assist technology. In the patient who does not have an arterial line, use of quantitative capnography is the best measure of cardiac output during cardiac arrest, yet only one‐quarter of hospitals reported using it, with no discrepancy between hospital type or size. A recent summit of national resuscitation experts expounded on the AHA guidelines suggesting that end‐tidal carbon dioxide should be used in all arrests to guide the quality of CPR with a goal value of >20.[8] Similarly, CPR feedback devices have an even higher level of evidence recommendation in the 2010 AHA guidelines than capnography, yet only 4% of hospitals reported utilizing them. Although it is true that introducing these CPR assist technologies into a hospital would require some effort on the part of hospital leadership, it is important to recognize the potential role such devices might play in the larger context of a resuscitation quality program to optimize clinical outcomes from IHCA.

Several differences were noted between hospitals based on teaching status. Although all hospitals were more likely to rely on physicians to lead resuscitations, nonteaching hospitals were more likely to report routine leadership by nurses and pharmacists. Nonteaching hospitals were also less likely to have a CPR committee, even after adjusting for hospital size. In addition, these hospitals were also more likely to report the lack of a clinical champion as a barrier to quality improvement.

There were several limitations to this study. First, this was a descriptive survey that was not tied to outcomes. As such, we are unable to draw conclusions about which practices correlate with decreased incidence of cardiac arrest and improved survival. Second, this was an optional survey with a somewhat limited response rate. Even though the characteristics of the nonresponding hospitals were similar to the responding hospitals, we cannot rule out the possibility that a selection bias was introduced, which would likely overestimate adherence to the guidelines. Self‐reported responses may have introduced additional errors. Finally, the short interval between the release of the 2010 guidelines and the administration of the first survey may have contributed to the variability in implementation of some practices, but many of the recommendations had been previously included in the 2005 guidelines.

We conclude that there is wide variability between hospitals and within practices for resuscitation care. Future work should seek to understand which practices are associated with improved patient outcomes and how best to implement these practices in a more uniform fashion.

Acknowledgements

The authors thank Nancy Hinckley, who championed the study; David Chearo, Christelle Marpaud, and Martha Van Haitsma of the University of Chicago Survey Lab for their assistance in formulating and distributing the survey; and JoAnne Resnic, Nicole Twu, and Frank Zadravecz for administrative support.

Disclosures: This study was supported by the Society of Hospital Medicine with a grant from Philips Healthcare (Andover, MA). Dr. Edelson is supported by a career development award from the National Heart, Lung, and Blood Institute (K23 HL097157). In addition, she has received research support and honoraria from Philips Healthcare (Andover, MA), research support from the American Heart Association (Dallas, TX) and Laerdal Medical (Stavanger, Norway), and an honorarium from Early Sense (Tel Aviv, Israel). Dr. Hunt has received research support from the Laerdal Foundation for Acute Medicine (Stavanger, Norway), the Hartwell Foundation (Memphis, TN), and the Arthur Vining Davis Foundation (Jacksonville, FL), and honoraria from the Kansas University Endowment (Kansas City, KS), JCCC (Overland Park, KS), and the UVA School of Medicine (Charlottesville, VA) and the European School of Management (Berlin, Germany). Dr. Mancini is supported in part by an Agency for Healthcare Research and Quality grant (R18HS020416). In addition, she has received research support from the American Heart Association (Dallas, TX) and Laerdal Medical (Stavanger, Norway), and honoraria from Sotera Wireless, Inc. (San Diego, CA). Dr. Abella has received research support from the National Institutes of Health (NIH), Medtronic Foundation (Minneapolis, MN), and Philips Healthcare (Andover, MA); has volunteered with the American Heart Association; and received honoraria from Heartsine (Belfast, Ireland), Velomedix (Menlo Park, CA), and Stryker (Kalamazoo, MI). Mr. Miller is employed by the Society of Hospital Medicine.

Figure 1
Hospital responders to in‐hospital resuscitations by institution type and level of participation. Bars represent the percent of hospitals reporting usual resuscitation responders in their hospitals, stratified by the teaching status of the hospital. Each bar is further subdivided by the likelihood of that provider to lead the resuscitation.

Survey

The survey content was developed by the study investigators and iteratively adapted by consensus and beta testing to require approximately 10 minutes to complete. Questions were edited and formatted by the University of Chicago Survey Lab (Chicago, IL) to be more precise and generalizable. Surveys were mailed in May 2011 and resent twice to nonresponders. A $10 incentive was included in the second mailing. When more than 1 response from a hospital was received, the more complete survey was used, or if equally complete, the responses were combined. All printing, mailing, receipt control, and data entry were performed by the University of Chicago Survey Lab, and data entry was double‐keyed to ensure accuracy.

Response rate was calculated based on the American Association for Public Opinion Research standard response rate formula.[10] It was assumed that the portion of nonresponding cases were ineligible at the same rate of cases for which eligibility was determined. A survey was considered complete if at least 75% of individual questions contained a valid response, partially complete if at least 40% but less than 75% of questions contained a valid response, and a nonresponse if less than 40% was completed. Nonresponses were excluded from the analysis.
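The completeness thresholds and response-rate arithmetic described above can be sketched as follows (hypothetical helper names, not the authors' code; the full AAPOR formula additionally adjusts the denominator for estimated eligibility, which is simplified away here):

```python
def classify_response(answered: int, total_questions: int) -> str:
    """Classify a returned survey by the fraction of questions with a valid response."""
    frac = answered / total_questions
    if frac >= 0.75:
        return "complete"
    elif frac >= 0.40:
        return "partial"
    return "nonresponse"  # excluded from analysis


def response_rate(n_complete: int, n_partial: int, n_sampled: int) -> float:
    """Simplified response rate: completes plus partials over the sample size."""
    return (n_complete + n_partial) / n_sampled


print(classify_response(80, 100))               # complete
print(round(response_rate(425, 14, 1000), 2))   # 0.44, matching the reported 44%
```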

Statistical Analysis

Analyses were performed using a statistical software application (Stata version 11.0; StataCorp, College Station, TX). Descriptive statistics were calculated and presented as number (%) or median (interquartile range). A chi‐square statistic was used to assess bias in response rate. We determined a priori 2 indicators of resource allocation (availability of a CPR committee and dedicated personnel for resuscitation quality improvement) and tested their association with quality improvement initiatives, using logistic regression to adjust for hospital teaching status and number of admissions as potential confounders. All tests of significance used a 2‐sided P<0.05.
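The response-bias check can be illustrated with a hand-rolled Pearson chi-square over responder/nonresponder counts by teaching status (a Python sketch, not the authors' Stata code; the counts are read off the margins of Table 1):

```python
def chi_square(observed):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    return stat


# Responders vs. nonresponders by teaching status (Table 1 margins:
# responded, did not respond).
counts = [
    [42, 50],    # major teaching: 42 of 92 responded
    [115, 146],  # minor teaching: 115 of 261 responded
    [281, 366],  # nonteaching: 281 of 647 responded
]
print(round(chi_square(counts), 3))
```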

RESULTS

Responses were received from 439 hospitals (425 complete and 14 partially complete), yielding a response rate of 44%. One survey was returned with its subject ID removed, so the responding hospital could not be identified and it was excluded from all analyses. Hospital demographics were similar between responders and nonresponders (P=0.50) (Table 1). Respondents included chief medical/quality officers (n=143 [33%]), chairs of CPR committees (n=64 [15%]), members of CPR committees (n=29 [7%]), chiefs of staff (n=33 [8%]), resuscitation officers/nurses (n=27 [6%]), chief nursing officers (n=13 [3%]), and others (n=131 [30%]).

Stratified Response Rates by Hospital Volume and Teaching Status

Teaching Status | <17,695 | 17,695–52,500 | >52,500 | Total
Major | 1/2 (50) | 1/8 (13) | 40/82 (49) | 42/92 (46)
Minor | 13/39 (33) | 40/89 (45) | 62/133 (47) | 115/261 (44)
Nonteaching | 141/293 (48) | 100/236 (42) | 40/118 (34) | 281/647 (43)
Total | 156/335 (47) | 143/335 (43) | 145/336 (43) | 438/1,000 (44)

NOTE: Columns are tertiles of annual inpatient days. Results are shown as number of respondents/total sampled (%).

Table 2 summarizes structure, equipment, quality improvement, and pre‐ and postarrest practices across the hospitals. Of note, 77% of hospitals (n=334) reported having a predesignated, dedicated code team, and 66% (n=281) reported standardized defibrillator make and model throughout their hospital. However, less than one‐third of hospitals utilized any CPR assist technology (eg, CPR quality sensor or mechanical CPR device). The majority of hospitals reported having a rapid response team (RRT) (n=391 [91%]). Although a therapeutic hypothermia protocol for postarrest care was in place in over half of hospitals (n=252 [58%]), utilization of hypothermia for patients with return of spontaneous circulation was infrequent.

In‐hospital Resuscitation Structure and Practices

Measure | Value | 2010 AHA Guidelines
Structure | |
Existing CPR committee | 270 (66) |
CPR chair | |
  Physician only | 129 (48) |
  Nurse only | 90 (34) |
  Nurse/physician co‐chair | 31 (12) |
  Other | 17 (6) |
Clinical specialty of chair[a] | |
  Pulmonary/critical care | 79 (35) |
  Emergency medicine | 71 (31) |
  Anesthesia/critical care | 43 (19) |
  Cardiology | 38 (17) |
  Other | 32 (14) |
  Hospital medicine | 23 (10) |
Predetermined cardiac arrest team structure | 334 (77) |
Notifications of responders[a] | |
  Hospital‐wide PA system | 406 (93) |
  Pager/calls to individuals | 230 (53) |
  Local alarm | 49 (11) |
Equipment | |
AEDs used as primary defibrillator by location | |
  High‐acuity inpatient areas | 69 (16) |
  Low‐acuity inpatient areas | 109 (26) |
  Outpatient areas | 206 (51) | Class IIb, LOE C[b]
  Public areas | 263 (78) | Class IIb, LOE C[b]
Defibrillators throughout hospital | |
  Same brand and model | 281 (66) |
  Same brand, different models | 93 (22) |
  Different brands | 54 (13) |
CPR assist technology used[a] | |
  None | 291 (70) |
  Capnography | 106 (25) | Class IIb, LOE C[b]
  Mechanical CPR | 25 (6) | Class IIb, LOE B/C[b,c]
  Feedback device | 17 (4) | Class IIa, LOE B
Quality improvement | |
IHCA tracked | 336 (82) | Supported[b,d]
Data reviewed | | Supported[b,d]
  Data not tracked/never reviewed | 85 (20) |
  Intermittently | 53 (12) |
  Routinely | 287 (68) |
Routine cardiac arrest case reviews/debriefing | 149 (34) | Class IIa, LOE C
Dedicated staff for resuscitation QI | 196 (49) |
Full‐time equivalent staffing, median (IQR) | 0.5 (0.25–1.2) |
Routine simulated resuscitation training | 268 (62) |
Pre‐ and postarrest measures | |
Hospitals with RRT | 391 (91) | Class I, LOE C[b]
Formal RRT‐specific training | |
  Never | 50 (14) |
  Once | 110 (30) |
  Recurrent | 163 (45) |
TH protocol/order set in place | 252 (58) |
Percent of patients with ROSC receiving TH | | Class IIb, LOE B[b]
  <5% | 309 (74) |
  5%–25% | 68 (16) |
  26%–50% | 11 (3) |
  51%–75% | 10 (2) |
  >75% | 18 (4) |

NOTE: Results are shown as total (%) unless otherwise indicated. Percentages were adjusted by excluding missing responses. Abbreviations: AED, automatic external defibrillator; AHA, American Heart Association; CPR, cardiopulmonary resuscitation; IHCA, in‐hospital cardiac arrest; IQR, interquartile range; LOE, level of evidence; PA, public address; QI, quality improvement; ROSC, return of spontaneous circulation; RRT, rapid response team; TH, therapeutic hypothermia.
[a] These categories are not mutually exclusive.
[b] Recommended or supported in 2005 guidelines.
[c] May be considered for use in specific settings by properly trained personnel.
[d] Supported in the guidelines without official class recommendation.

Hospitals reported that routine responders to IHCA events included respiratory therapists (n=414 [95%]), critical care nurses (n=406 [93%]), floor nurses (n=396 [90%]), attending physicians (n=392 [89%]), physician trainees (n=162 [37%]), and pharmacists (n=210 [48%]). Figure 1 shows the distribution of responders and team leaders by hospital type. In nonteaching hospitals, attending physicians were routine responders in 94% (265/281) and routinely led resuscitations in 84% (236/281), whereas in major teaching hospitals they were routine responders in only 71% (30/42) and routinely led in only 19% (8/42).

Two‐thirds of the hospitals had a CPR committee (n=270 [66%]), and 196 (49%) had some staff time dedicated to resuscitation quality improvement. Hospitals with a specific committee dedicated to resuscitation and/or dedicated staff for resuscitation quality improvement were more likely to routinely track cardiac arrest data (odds ratio [OR]: 3.64, 95% confidence interval [CI]: 2.05–6.47 and OR: 2.02, 95% CI: 1.16–3.54, respectively) and review the data (OR: 2.67, 95% CI: 1.45–4.92 and OR: 2.18, 95% CI: 1.22–3.89, respectively), after adjusting for teaching status and hospital size. These hospitals were also more likely to engage in simulation training and debriefing (Table 3).

Correlation Between Resource Availability and Quality Improvement Practices

Measure | CPR Committee, n=406 | Dedicated QI Staff, n=398
IHCA tracking | 3.64 (2.05–6.47) | 2.02 (1.16–3.54)
Routinely review | 2.67 (1.45–4.92) | 2.18 (1.22–3.89)
Simulation training | 2.63 (1.66–4.18) | 1.89 (1.24–2.89)
Debriefing | 3.19 (1.89–5.36) | 2.14 (1.39–3.32)

NOTE: Logistic regression adjusting for hospital size and teaching status was performed. All results are shown as odds ratio (95% confidence interval). Abbreviations: CPR, cardiopulmonary resuscitation; IHCA, in‐hospital cardiac arrest; QI, quality improvement.
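Odds ratios and 95% CIs like those above are obtained by exponentiating a fitted logistic-regression coefficient and its Wald confidence limits. A minimal sketch of that conversion follows; the coefficient and standard error are illustrative values back-calculated to reproduce the IHCA-tracking/CPR-committee cell, not numbers reported by the authors:

```python
import math


def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Exponentiate a log-odds coefficient and its Wald 95% CI bounds."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)


# Illustrative (hypothetical) log-odds coefficient and standard error.
or_est, lo, hi = odds_ratio_ci(beta=1.2925, se=0.2932)
print(f"OR {or_est:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR 3.64 (95% CI 2.05-6.47)
```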

Ninety percent (n=391) of respondents agreed that "there is room for improvement in resuscitation practice at my hospital," and 70% (n=302) agreed that improved resuscitation would translate into improved patient outcomes. Overall, 78% (n=338) cited at least 1 barrier to improved resuscitation quality, of which lack of adequate training (n=233 [54%]) and lack of an appropriate champion (n=230 [53%]) were the most common. In subgroup analysis, nonteaching hospitals were significantly more likely than their teaching counterparts to report lack of a champion (P=0.001) (Figure 2). In addition, we compared hospitals that did not report lack of a champion as a barrier with those that did, and found significantly higher adherence among the former across all measures in Table 2 supported by the 2010 guidelines, with the exception of real‐time feedback (data not shown).

Figure 2
Barriers to resuscitation quality improvement by institution type. Bars represent the percent of responders reporting specific perceived barriers to resuscitation quality improvement at their hospital, stratified by the teaching status of the hospital.

DISCUSSION

In this nationally representative sample of hospitals, we found considerable variability in cardiac arrest and resuscitation structures and processes, suggesting potential areas to target for improvement. Some practices, including use of RRTs and defibrillator standardization, were fairly routine, whereas others, such as therapeutic hypothermia and CPR assist technology, were rarely utilized. Quality initiatives, such as data tracking and review, simulation training, and debriefing were variable.

Several factors likely contribute to the variable implementation of evidence‐based practices. Guidelines alone have been shown to have little impact on physician practice in general.[11] This is supported by the lack of correlation we found between the presence, absence, or strength of specific American Heart Association (AHA) emergency cardiovascular care treatment recommendations and the percent of hospitals reporting performing that measure. It is possible that other factors, such as a lack of familiarity or agreement with those guidelines, or the presence of external barriers, contribute.[12, 13] Specifically, the importance of a clinical champion was supported by our finding that hospitals reporting lack of a champion as a barrier were less likely to adhere to guidelines. However, because the study did not directly test the impact of a champion, this finding should be interpreted with caution.

Some of the variability may also be related to the resource intensiveness of the practice. Routine simulation training and debriefing interventions, for example, are time intensive and require trained personnel to institute. That may explain the correlation we noted between these practices and the presence of a CPR committee and dedicated personnel. Dedicated personnel were rare in this study, with less than half of respondents reporting any dedicated staff, and a median of 0.5 full‐time equivalents among those reporting positively. This is in stark contrast to the routine use of resuscitation officers (primarily nurses dedicated to overseeing resuscitation practices and education at the hospital) in the United Kingdom.[14] If adopted by US hospitals, such a resuscitation officer model could improve the quality and intensity of resuscitation care.

Particularly surprising was the high rate of respondents (70%) reporting that they do not utilize any CPR assist technology. In the patient without an arterial line, quantitative capnography is the best available measure of cardiac output during cardiac arrest, yet only one‐quarter of hospitals reported using it, with no difference by hospital type or size. A recent summit of national resuscitation experts expounded on the AHA guidelines, suggesting that end‐tidal carbon dioxide should be monitored in all arrests to guide the quality of CPR, with a goal value of >20 mm Hg.[8] Similarly, CPR feedback devices carry an even higher level of evidence recommendation in the 2010 AHA guidelines than capnography, yet only 4% of hospitals reported utilizing them. Although introducing these CPR assist technologies into a hospital would require some effort on the part of hospital leadership, it is important to recognize the potential role such devices might play in the larger context of a resuscitation quality program to optimize clinical outcomes from IHCA.

Several differences were noted between hospitals based on teaching status. Although hospitals of all types relied primarily on physicians to lead resuscitations, nonteaching hospitals were more likely to report routine leadership by nurses and pharmacists. Nonteaching hospitals were also less likely to have a CPR committee, even after adjusting for hospital size, and were more likely to report the lack of a clinical champion as a barrier to quality improvement.

There were several limitations to this study. First, this was a descriptive survey that was not tied to outcomes. As such, we are unable to draw conclusions about which practices correlate with decreased incidence of cardiac arrest and improved survival. Second, this was an optional survey with a somewhat limited response rate. Even though the characteristics of the nonresponding hospitals were similar to the responding hospitals, we cannot rule out the possibility that a selection bias was introduced, which would likely overestimate adherence to the guidelines. Self‐reported responses may have introduced additional errors. Finally, the short interval between the release of the 2010 guidelines and the administration of the first survey may have contributed to the variability in implementation of some practices, but many of the recommendations had been previously included in the 2005 guidelines.

We conclude that there is wide variability in resuscitation care, both between hospitals and across practices. Future work should seek to understand which practices are associated with improved patient outcomes and how best to implement those practices more uniformly.

Acknowledgements

The authors thank Nancy Hinckley, who championed the study; David Chearo, Christelle Marpaud, and Martha Van Haitsma of the University of Chicago Survey Lab for their assistance in formulating and distributing the survey; and JoAnne Resnic, Nicole Twu, and Frank Zadravecz for administrative support.

Disclosures: This study was supported by the Society of Hospital Medicine with a grant from Philips Healthcare (Andover, MA). Dr. Edelson is supported by a career development award from the National Heart, Lung, and Blood Institute (K23 HL097157). In addition, she has received research support and honoraria from Philips Healthcare (Andover, MA), research support from the American Heart Association (Dallas, TX) and Laerdal Medical (Stavanger, Norway), and an honorarium from Early Sense (Tel Aviv, Israel). Dr. Hunt has received research support from the Laerdal Foundation for Acute Medicine (Stavanger, Norway), the Hartwell Foundation (Memphis, TN), and the Arthur Vining Davis Foundation (Jacksonville, FL), and honoraria from the Kansas University Endowment (Kansas City, KS), JCCC (Overland Park, KS), and the UVA School of Medicine (Charlottesville, VA) and the European School of Management (Berlin, Germany). Dr. Mancini is supported in part by an Agency for Healthcare Research and Quality grant (R18HS020416). In addition, she has received research support from the American Heart Association (Dallas, TX) and Laerdal Medical (Stavanger, Norway), and honoraria from Sotera Wireless, Inc. (San Diego, CA). Dr. Abella has received research support from the National Institutes of Health (NIH), Medtronic Foundation (Minneapolis, MN), and Philips Healthcare (Andover, MA); has volunteered with the American Heart Association; and received honoraria from Heartsine (Belfast, Ireland), Velomedix (Menlo Park, CA), and Stryker (Kalamazoo, MI). Mr. Miller is employed by the Society of Hospital Medicine.

References
  1. Girotra S, Nallamothu BK, Spertus JA, Li Y, Krumholz HM, Chan PS. Trends in survival after in‐hospital cardiac arrest. N Engl J Med. 2012;367(20):1912–1920.
  2. Merchant RM, Yang L, Becker LB, et al. Incidence of treated cardiac arrest in hospitalized patients in the United States. Crit Care Med. 2011;39(11):2401–2406.
  3. Chan PS, Nichol G, Krumholz HM, et al. Racial differences in survival after in‐hospital cardiac arrest. JAMA. 2009;302(11):1195–1201.
  4. Chan PS, Nichol G, Krumholz HM, Spertus JA, Nallamothu BK. Hospital variation in time to defibrillation after in‐hospital cardiac arrest. Arch Intern Med. 2009;169(14):1265–1273.
  5. Goldberger ZD, Chan PS, Berg RA, et al. Duration of resuscitation efforts and survival after in‐hospital cardiac arrest: an observational study. Lancet. 2012;380(9852):1473–1481.
  6. Chan PS, Krumholz HM, Nichol G, Nallamothu BK. Delayed time to defibrillation after in‐hospital cardiac arrest. N Engl J Med. 2008;358(1):9–17.
  7. 2010 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care science. Circulation. 2010;122(18 suppl 3):S640–S946.
  8. Meaney PA, Bobrow BJ, Mancini ME, et al. Cardiopulmonary resuscitation quality: improving cardiac resuscitation outcomes both inside and outside the hospital: a consensus statement from the American Heart Association. Circulation. 2013;128(4):417–435.
  9. American Hospital Association. 2008 AHA annual survey. AHA data viewer: survey instruments. 2012. Available at: http://www.ahadataviewer.com/about/hospital‐database. Accessed October 11, 2013.
  10. The American Association for Public Opinion Research. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 7th ed. Deerfield, IL: AAPOR; 2011.
  11. Lomas J, Anderson GM, Domnick‐Pierre K, Vayda E, Enkin MW, Hannah WJ. Do practice guidelines guide practice? The effect of a consensus statement on the practice of physicians. N Engl J Med. 1989;321(19):1306–1311.
  12. Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282(15):1458–1465.
  13. Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003;362(9391):1225–1230.
  14. Gabbott D, Smith G, Mitchell S, et al. Cardiopulmonary resuscitation standards for clinical practice and training in the UK. Accid Emerg Nurs. 2005;13(3):171–179.
Issue
Journal of Hospital Medicine - 9(6)
Page Number
353-357
Article Source
© 2014 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Dana P. Edelson, MD, Section of Hospital Medicine, University of Chicago, 5841 S. Maryland Avenue, MC 5000, Chicago, IL 60637; Telephone: 773‐834‐2191; Fax: 773‐795‐7398; E‐mail: dperes@uchicago.edu