Hospital performance trends on national quality measures and the association with Joint Commission accreditation

The Joint Commission (TJC) currently accredits approximately 4546 acute care, critical access, and specialty hospitals,1 accounting for approximately 82% of U.S. hospitals (representing 92% of hospital beds). Hospitals seeking to earn and maintain accreditation undergo unannounced on‐site visits by a team of Joint Commission surveyors at least once every 3 years. These surveys address a variety of domains, including the environment of care, infection prevention and control, information management, adherence to a series of national patient safety goals, and leadership.1

The survey process has changed markedly in recent years. Since 2002, accredited hospitals have been required to continuously collect selected performance measure data and submit them to The Joint Commission throughout the three‐year accreditation cycle. The tracer methodology, an evaluation method in which surveyors select a patient and follow that patient's course through the organization to assess compliance with selected standards, was instituted in 2004. Soon thereafter, in 2006, on‐site surveys went from announced to unannounced.

Despite the 50+ year history of hospital accreditation in the United States, there has been surprisingly little research on the link between accreditation status and measures of hospital quality (both processes and outcomes). It is only recently that a growing number of studies have attempted to examine this relationship. Empirical support for the relationship between accreditation and other quality measures is emerging. Accredited hospitals have been shown to provide better emergency response planning2 and training3 compared to non‐accredited hospitals. Accreditation has been observed to be a key predictor of patient safety system implementation4 and the primary driver of hospitals' patient‐safety initiatives.5 Accredited trauma centers have been associated with significant reductions in patient mortality,6 and accreditation has been linked to better compliance with evidence‐based methadone and substance abuse treatment.7, 8 Accredited hospitals have been shown to perform better on measures of hospital quality in acute myocardial infarction (AMI), heart failure, and pneumonia care.9, 10 Similarly, accreditation has been associated with lower risk‐adjusted in‐hospital mortality rates for congestive heart failure (CHF), stroke, and pneumonia.11, 12 The results of such research, however, have not always been consistent. Several studies have been unable to demonstrate a relationship between accreditation and quality measures. A study of financial and cost‐related outcome measures found no relationship to accreditation,13 and a study comparing medication error rates across different types of organizations found no relationship to accreditation status.14 Similarly, a comparison of accredited versus non‐accredited ambulatory surgical organizations found that patients were less likely to be hospitalized when treated at an accredited facility for colonoscopy procedures, but no such relationship was observed for the other 4 procedures studied.15

While the research to date has been generally supportive of the link between accreditation and other measures of health care quality, the studies were typically limited to only a few measures and/or involved relatively small samples of accredited and non‐accredited organizations. Over the last decade, however, changes in the performance measurement landscape have created previously unavailable opportunities to more robustly examine the relationship between accreditation and other indicators of hospital quality.

At about the same time that The Joint Commission's accreditation process was becoming more rigorous, the Centers for Medicare and Medicaid Services (CMS) began a program of publicly reporting quality data (http://www.hospitalcompare.hhs.gov). The alignment of Joint Commission and CMS quality measures establishes a mechanism through which accredited and non‐accredited hospitals can be compared using the same nationally standardized quality measures. We therefore took advantage of this unique circumstance (a new and more robust TJC accreditation program and the launch of public quality reporting) to examine the relationship between Joint Commission accreditation status and publicly reported hospital quality measures. Moreover, by examining trends in these publicly reported measures over five years and incorporating performance data not found in the Hospital Compare database, we assessed whether accreditation status was also linked to the pace of performance improvement over time.

By using a population of hospitals and a range of standardized quality measures greater than those used in previous studies, we seek to address the following questions: Is Joint Commission accreditation status truly associated with higher quality care? And does accreditation status help identify hospitals that are more likely to improve their quality and safety over time?

METHODS

Performance Measures

Since July 2002, U.S. hospitals have been collecting data on standardized measures of quality developed by The Joint Commission and CMS. These measures have been endorsed by the National Quality Forum16 and adopted by the Hospital Quality Alliance.17 The first peer‐reviewed reports using The Joint Commission/CMS measure data confirmed that the measures could successfully monitor and track hospital improvement and identify disparities in performance,18, 19 as called for by the Institute of Medicine's (IOM) landmark 2001 report, Crossing the Quality Chasm.20

In order to promote transparency in health care, both CMS (through the efforts of the Hospital Quality Alliance) and The Joint Commission began publicly reporting measure rates in 2004, using identical measure and data element specifications. It is important to note that, during the five‐year span covered by this study, both The Joint Commission and CMS emphasized the reporting of performance measure data. While performance improvement has been the clear objective of these efforts, neither organization established targets for measure rates or set benchmarks for performance improvement. Similarly, while Joint Commission‐accredited hospitals were required to submit performance measure data as a condition of accreditation, their actual performance on the measures did not factor into the accreditation decision. In the absence of such direct leverage, it is notable that several studies have demonstrated the positive impact of public reporting on hospital performance,21 and on providing useful information to the general public and health care professionals regarding hospital quality.22

The 16 measures used in this study address hospital compliance with evidence‐based processes of care recommended by the clinical treatment guidelines of respected professional societies.23 Process of care measures are particularly well suited for quality improvement purposes: they identify deficiencies that hospitals can immediately address and do not require risk adjustment, in contrast to outcome measures, which do not necessarily identify obvious performance improvement opportunities.24-26 The measures were also implemented in sets in order to provide hospitals with a more complete portrayal of quality than might be provided by unrelated individual measures. Research has demonstrated that greater collective performance on these process measures is associated with improved one‐year survival after heart failure hospitalization27 and with lower inpatient mortality among Medicare patients discharged with acute myocardial infarction, heart failure, and pneumonia,28 while other research has shown little association with short‐term outcomes.29

Using the Specifications Manual for National Hospital Inpatient Quality Measures,16 hospitals identify the initial measure populations through International Classification of Diseases (ICD‐9‐CM) codes and patient age obtained from administrative data. Trained abstractors then collect the measure‐specific data elements through medical record review of the identified measure population or a sample of this population. Measure algorithms then identify patients in the numerator and denominator of each measure.

Process measure rates reflect the number of times a hospital treated a patient in a manner consistent with specific evidence‐based clinical practice guidelines (numerator cases), divided by the number of patients who were eligible to receive such care (denominator cases). Because precise measure specifications permit the exclusion of patients for whom the specific process of care is contraindicated, ideal performance should be characterized by measure rates that approach 100% (although rare or unpredictable situations, and the reality that no measure is perfect in its design, make consistent performance at 100% improbable). The accuracy of the measure data, as gauged by data element agreement rates on reabstraction, has been reported to exceed 90%.30
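The numerator/denominator logic above can be sketched in a few lines of code. This is a minimal illustration, not the actual specification logic; the field names (`eligible`, `contraindicated`, `received_care`) are invented for the example.

```python
# Illustrative sketch of a process measure rate: eligible patients minus
# documented exclusions form the denominator; those who received the
# indicated care form the numerator. Field names are hypothetical.

def measure_rate(patients):
    """Return the measure rate in percent, or None if there are no denominator cases."""
    denominator = [p for p in patients
                   if p["eligible"] and not p["contraindicated"]]
    numerator = [p for p in denominator if p["received_care"]]
    if not denominator:
        return None
    return 100.0 * len(numerator) / len(denominator)

patients = [
    {"eligible": True,  "contraindicated": False, "received_care": True},
    {"eligible": True,  "contraindicated": True,  "received_care": False},  # excluded
    {"eligible": True,  "contraindicated": False, "received_care": False},
    {"eligible": False, "contraindicated": False, "received_care": False},  # not in population
]
print(measure_rate(patients))  # 1 of 2 denominator cases -> 50.0
```

Because contraindicated patients are removed from the denominator entirely, a fully guideline-compliant hospital would score 100% on this calculation.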

In addition to the individual performance measures, hospital performance was assessed using 3 condition‐specific summary scores, one for each of the 3 clinical areas: acute myocardial infarction, heart failure, and pneumonia. The summary scores are a weighted average of the individual measure rates in the clinical area, where the weights are the sample sizes for each of the measures.31 A summary score was also calculated based on all 16 measures as a summary measure of overall compliance with recommended care.
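The sample-size-weighted summary score described above amounts to pooling numerators and denominators across measures within a clinical area. A small sketch with made-up rates and sample sizes:

```python
# Condition-specific summary score: a weighted average of the individual
# measure rates, weighted by each measure's sample size (denominator cases).
# The rates and sample sizes below are invented for illustration.

def summary_score(rates, sample_sizes):
    total_n = sum(sample_sizes)
    return sum(r * n for r, n in zip(rates, sample_sizes)) / total_n

# e.g., three hypothetical heart failure measures at one hospital
rates = [90.0, 80.0, 70.0]       # percent compliance on each measure
ns = [100, 50, 50]               # denominator cases for each measure
print(summary_score(rates, ns))  # (90*100 + 80*50 + 70*50) / 200 = 82.5
```

Weighting by sample size means a measure that applies to many patients moves the summary score more than one that applies to few, which matches the pooled-patient interpretation of the composite.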

One way to relate performance measurement to standards is to evaluate whether a hospital achieves a high rate of performance, where "high" is defined as a performance rate of 90% or more. In this context, measures were created from each of the 2004 and 2008 hospital performance rates by dichotomizing them as either less than 90%, or greater than or equal to 90%.32

Data Sources

The data for the measures included in the study are available in the CMS Hospital Compare public databases or from The Joint Commission for discharges in 2004 and 2008.33 These 16 measures, active for all 5 years of the study period, include: 7 measures related to acute myocardial infarction care; 4 measures related to heart failure care; and 5 measures related to pneumonia care. The majority of the performance data for the study were obtained from the yearly CMS Hospital Compare public download databases (http://www.medicare.gov/Download/DownloadDB.asp). When hospitals reported only to The Joint Commission (154 hospitals, of which 118 are Veterans Administration and 30 are Department of Defense hospitals), data were obtained from The Joint Commission's ORYX database, which is available for public download on The Joint Commission's Quality Check web site.23 Most accredited hospitals participated in Hospital Compare (95.5% of accredited hospitals in 2004 and 93.3% in 2008).

Hospital Characteristics

We then linked the CMS performance data, augmented by The Joint Commission performance data when necessary, to hospital characteristics data in the American Hospital Association (AHA) Annual Survey with respect to profit status, number of beds (<100 beds, 100-299 beds, 300+ beds), rural status, geographic region, and whether or not the hospital was a critical access hospital. (Teaching status, although available in the AHA database, was not used in the analysis, as almost all teaching hospitals are Joint Commission accredited.) These characteristics were chosen because previous research has identified them as being associated with hospital quality.9, 19, 34-37 Data on accreditation status were obtained from The Joint Commission's hospital accreditation database. Hospitals were grouped into 3 accreditation strata based on longitudinal accreditation status between 2004 and 2008: 1) hospitals not accredited during the study period; 2) hospitals accredited for one to four years; and 3) hospitals accredited for the entire study period. Analyses of the middle group (hospitals accredited for part of the study period; n = 212, 5.4% of the whole sample) led to no significant change in our findings (their performance tended to fall midway between that of always-accredited and never-accredited hospitals) and are thus omitted from our results. Instead, we present only hospitals that were never accredited (n = 762) and those that were accredited through the entire study period (n = 2917).

Statistical Analysis

We compared hospital characteristics and 2004 performance between Joint Commission‐accredited and non‐accredited hospitals using χ² tests for categorical variables and t tests for continuous variables. Linear regression was used to estimate the five‐year change in performance at each hospital as a function of accreditation group, controlling for hospital characteristics. Baseline hospital performance was also included in the regression models to control for ceiling effects among hospitals with high baseline performance. To summarize the results, we used the regression models to calculate the adjusted change in performance for each accreditation group, and calculated a 95% confidence interval and P value for the difference between the adjusted change scores using bootstrap methods.38
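The bootstrap step can be sketched as follows. This is a deliberately simplified illustration using simulated change scores and a simple difference in group means; the actual analysis computed the difference from covariate-adjusted regression estimates, which this sketch omits.

```python
# Simplified bootstrap sketch: resample hospitals with replacement within
# each accreditation group, recompute the accredited-minus-never difference
# in mean change scores, and take the 2.5th/97.5th percentiles as a 95% CI.
# All data here are simulated; the real analysis adjusted for covariates.

import random
random.seed(0)

accredited = [random.gauss(16.0, 9.0) for _ in range(300)]   # change scores
never = [random.gauss(12.0, 11.0) for _ in range(80)]

def mean(xs):
    return sum(xs) / len(xs)

diffs = []
for _ in range(2000):
    a = [random.choice(accredited) for _ in accredited]
    n = [random.choice(never) for _ in never]
    diffs.append(mean(a) - mean(n))
diffs.sort()
lo, hi = diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs))]
print(f"difference {mean(accredited) - mean(never):.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```

A two-sided P value can then be approximated from the same bootstrap distribution, e.g., by the fraction of resampled differences falling on the far side of zero.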

Next we analyzed the association between accreditation and the likelihood of high 2008 hospital performance by dichotomizing the hospital rates, using a 90% cut point, and using logistic regression to estimate the probability of high performance as a function of accreditation group, controlling for hospital characteristics and baseline hospital performance. The logistic models were then used to calculate adjusted rates of high performance for each accreditation group in presenting the results.
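One common way to turn a fitted logistic model into adjusted rates like those in Table 4 is marginal standardization: fix every hospital's accreditation indicator at one value, average the predicted probabilities, then repeat for the other value. The sketch below assumes that approach; the coefficients and baseline values are invented for illustration.

```python
# Hypothetical sketch of adjusted high-performance rates from a logistic
# model (marginal standardization). Coefficients and covariates are made up;
# the real model also included profit status, bed size, region, etc.

import math

def predict(intercept, b_accred, b_baseline, accredited, baseline):
    """Predicted probability of performance >= 90% for one hospital."""
    logit = intercept + b_accred * accredited + b_baseline * baseline
    return 1.0 / (1.0 + math.exp(-logit))

baselines = [0.60, 0.75, 0.85, 0.90]  # baseline composite scores (fractions)
coef = {"intercept": -4.0, "accred": 0.8, "baseline": 5.0}

def adjusted_rate(as_accredited):
    """Average predicted probability with every hospital's group indicator
    set to as_accredited, expressed in percent."""
    preds = [predict(coef["intercept"], coef["accred"], coef["baseline"],
                     as_accredited, b) for b in baselines]
    return 100.0 * sum(preds) / len(preds)

print(f"adjusted rate: accredited {adjusted_rate(1):.0f}%, never {adjusted_rate(0):.0f}%")
```

Because the same covariate distribution is used for both predictions, the gap between the two adjusted rates reflects the accreditation coefficient rather than differences in hospital mix.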

We used two‐sided tests for significance; P < 0.05 was considered statistically significant. This study had no external funding source.

RESULTS

For the 16 individual measures used in this study, a total of 4798 hospitals participated in Hospital Compare or reported data to The Joint Commission in 2004 or 2008. Of these, 907 were excluded because performance data were not available for either 2004 (576 hospitals) or 2008 (331 hospitals), resulting in a missing value for the change in performance score. Therefore, 3891 hospitals (81%) were included in the final analyses. The 907 excluded hospitals were more likely to be rural (50.8% vs 17.5%), be critical access hospitals (53.9% vs 13.9%), have fewer than 100 beds (77.4% vs 37.6%), be government owned (34.6% vs 22.1%), be for profit (61.4% vs 49.5%), or be unaccredited (79.8% vs 45.8% in 2004; 75.6% vs 12.8% in 2008), compared with the included hospitals (P < 0.001 for all comparisons).

Hospital Performance at Baseline

Joint Commission‐accredited hospitals were more likely to be large, for profit, or urban, and less likely to be government owned, from the Midwest, or critical access (Table 1). Non‐accredited hospitals performed more poorly than accredited hospitals on most of the publicly reported measures in 2004; the only exception was the timing of initial antibiotic therapy measure for pneumonia (Table 2).

Hospital Characteristics in 2004 Stratified by Joint Commission Accreditation Status
| Characteristic | Non-Accredited (n = 786) | Accredited (n = 3105) | P Value* |
| --- | --- | --- | --- |
| Profit status, No. (%) |  |  | <0.001 |
| For profit | 60 (7.6) | 586 (18.9) |  |
| Government | 289 (36.8) | 569 (18.3) |  |
| Not for profit | 437 (55.6) | 1,950 (62.8) |  |
| Census region, No. (%) |  |  | <0.001 |
| Northeast | 72 (9.2) | 497 (16.0) |  |
| Midwest | 345 (43.9) | 716 (23.1) |  |
| South | 248 (31.6) | 1,291 (41.6) |  |
| West | 121 (15.4) | 601 (19.4) |  |
| Rural setting, No. (%) |  |  | <0.001 |
| Rural | 495 (63.0) | 833 (26.8) |  |
| Urban | 291 (37.0) | 2,272 (73.2) |  |
| Bed size |  |  | <0.001 |
| <100 beds | 603 (76.7) | 861 (27.7) |  |
| 100-299 beds | 158 (20.1) | 1,444 (46.5) |  |
| 300+ beds | 25 (3.2) | 800 (25.8) |  |
| Critical access hospital status, No. (%) |  |  | <0.001 |
| Critical access hospital | 376 (47.8) | 164 (5.3) |  |
| Acute care hospital | 410 (52.2) | 2,941 (94.7) |  |

  • * P values based on χ² tests for categorical variables.
Hospital Raw Performance in 2004 and 2008, Stratified by Joint Commission Accreditation Status
| Quality Measure, Mean (SD)* | 2004: Non-Accredited (n = 786) | 2004: Accredited (n = 3105) | P Value | 2008: Non-Accredited (n = 950) | 2008: Accredited (n = 2,941) | P Value |
| --- | --- | --- | --- | --- | --- | --- |
| AMI |  |  |  |  |  |  |
| Aspirin at admission | 87.1 (20.0) | 92.6 (9.4) | <0.001 | 88.6 (22.1) | 96.0 (8.6) | <0.001 |
| Aspirin at discharge | 81.2 (26.1) | 88.5 (14.9) | <0.001 | 87.8 (22.7) | 94.8 (10.1) | <0.001 |
| ACE inhibitor for LV dysfunction | 72.1 (33.4) | 76.7 (22.9) | 0.010 | 83.2 (30.5) | 92.1 (14.8) | <0.001 |
| Beta blocker at discharge | 78.2 (27.9) | 87.0 (16.2) | <0.001 | 87.4 (23.4) | 95.5 (9.9) | <0.001 |
| Smoking cessation advice | 59.6 (40.8) | 74.5 (29.9) | <0.001 | 87.2 (29.5) | 97.2 (11.3) | <0.001 |
| PCI received within 90 min | 60.3 (26.2) | 60.6 (23.8) | 0.946 | 70.1 (24.8) | 77.7 (19.2) | 0.006 |
| Thrombolytic agent within 30 min | 27.9 (35.5) | 32.1 (32.8) | 0.152 | 31.4 (40.7) | 43.7 (40.2) | 0.008 |
| Composite AMI score | 80.6 (20.3) | 87.7 (10.4) | <0.001 | 85.8 (20.0) | 94.6 (8.1) | <0.001 |
| Heart failure |  |  |  |  |  |  |
| Discharge instructions | 36.8 (32.3) | 49.7 (28.2) | <0.001 | 67.4 (29.6) | 82.3 (16.4) | <0.001 |
| Assessment of LV function | 63.3 (27.6) | 83.6 (14.9) | <0.001 | 79.6 (24.4) | 95.6 (8.1) | <0.001 |
| ACE inhibitor for LV dysfunction | 70.8 (27.6) | 75.7 (16.3) | <0.001 | 82.5 (22.7) | 91.5 (9.7) | <0.001 |
| Smoking cessation advice | 57.1 (36.4) | 68.6 (26.2) | <0.001 | 81.5 (29.9) | 96.1 (10.7) | <0.001 |
| Composite heart failure score | 56.3 (24.1) | 71.2 (15.6) | <0.001 | 75.4 (22.3) | 90.4 (9.4) | <0.001 |
| Pneumonia |  |  |  |  |  |  |
| Oxygenation assessment | 97.4 (7.3) | 98.4 (4.0) | <0.001 | 99.0 (3.2) | 99.7 (1.2) | <0.001 |
| Pneumococcal vaccination | 45.5 (29.0) | 48.7 (26.2) | 0.007 | 79.9 (21.3) | 87.9 (12.9) | <0.001 |
| Timing of initial antibiotic therapy | 80.6 (13.1) | 70.9 (14.0) | <0.001 | 93.4 (9.2) | 93.6 (6.1) | 0.525 |
| Smoking cessation advice | 56.6 (33.1) | 65.7 (24.8) | <0.001 | 81.6 (25.1) | 94.4 (11.4) | <0.001 |
| Initial antibiotic selection | 73.6 (19.6) | 74.1 (13.4) | 0.508 | 86.1 (13.8) | 88.6 (8.7) | <0.001 |
| Composite pneumonia score | 77.2 (10.2) | 76.6 (8.2) | 0.119 | 90.0 (9.6) | 93.6 (4.9) | <0.001 |
| Overall composite | 73.7 (10.6) | 78.0 (8.7) | <0.001 | 86.8 (11.1) | 93.3 (5.0) | <0.001 |

  • Abbreviations: ACE, angiotensin-converting enzyme; AMI, acute myocardial infarction; LV, left ventricular; PCI, percutaneous coronary intervention.

  • * Calculated as the proportion of all eligible patients who received the indicated care.

  • P values based on t tests.

Five‐Year Changes in Hospital Performance

Between 2004 and 2008, Joint Commission‐accredited hospitals improved their performance more than did non‐accredited hospitals (Table 3). After adjustment for baseline characteristics previously shown to be associated with performance, the overall relative (absolute) difference in improvement was 26% (4.2%) (AMI score difference 67% [3.9%], CHF 48% [10.1%], and pneumonia 21% [3.7%]). Accredited hospitals improved significantly more than non‐accredited hospitals on 13 of the 16 individual performance measures.
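The paired relative (absolute) figures quoted above are consistent with a simple calculation: the relative difference is the absolute difference in adjusted change divided by the always-accredited group's adjusted change. Using the overall composite values reported in Table 3:

```python
# Arithmetic behind the reported relative difference for the overall
# composite (values taken from Table 3): relative difference = absolute
# difference / adjusted change in the always-accredited group.

absolute = 4.2        # adjusted absolute difference, always vs never (points)
always_change = 16.1  # adjusted change, always-accredited group (points)
relative = 100.0 * absolute / always_change
print(f"relative difference: {relative:.0f}%")  # 26%
```

The same ratio reproduces the other rows of Table 3 to within rounding (e.g., AMI composite: 3.9 / 5.8 ≈ 67%).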

Performance Change and Difference in Performance Change From 2004 to 2008 by Joint Commission Accreditation Status
| Characteristic | Change in Performance*: Never Accredited (n = 762) | Change in Performance*: Always Accredited (n = 2,917) | Absolute Difference, Always vs Never (95% CI) | Relative Difference, Always vs Never, % | P Value |
| --- | --- | --- | --- | --- | --- |
| AMI |  |  |  |  |  |
| Aspirin at admission | -1.1 | 2.0 | 3.2 (1.2 to 5.2) | 160 | 0.001 |
| Aspirin at discharge | 4.7 | 8.0 | 3.2 (1.4 to 5.1) | 40 | 0.008 |
| ACE inhibitor for LV dysfunction | 8.5 | 15.9 | 7.4 (3.7 to 11.5) | 47 | <0.001 |
| Beta blocker at discharge | 4.4 | 8.4 | 4.0 (2.0 to 6.0) | 48 | <0.001 |
| Smoking cessation advice | 18.6 | 22.4 | 3.7 (1.1 to 6.9) | 17 | 0.012 |
| PCI received within 90 min | 6.3 | 13.0 | 6.7 (-0.3 to 14.2) | 52 | 0.070 |
| Thrombolytic agent within 30 min | -0.6 | 5.4 | 6.1 (-9.5 to 20.4) | 113 | 0.421 |
| Composite AMI score | 2.0 | 5.8 | 3.9 (2.2 to 5.5) | 67 | <0.001 |
| Heart failure |  |  |  |  |  |
| Discharge instructions | 24.2 | 35.6 | 11.4 (8.7 to 14.0) | 32 | <0.001 |
| Assessment of LV function | 4.6 | 12.8 | 8.3 (6.6 to 10.0) | 65 | <0.001 |
| ACE inhibitor for LV dysfunction | 10.1 | 15.2 | 5.1 (3.5 to 6.8) | 34 | <0.001 |
| Smoking cessation advice | 20.5 | 26.4 | 6.0 (3.3 to 8.7) | 23 | <0.001 |
| Composite heart failure score | 10.8 | 20.9 | 10.1 (8.3 to 12.0) | 48 | <0.001 |
| Pneumonia |  |  |  |  |  |
| Oxygenation assessment | 0.9 | 1.4 | 0.6 (0.3 to 0.9) | 43 | <0.001 |
| Pneumococcal vaccination | 33.4 | 40.9 | 7.5 (5.6 to 9.4) | 18 | <0.001 |
| Timing of initial antibiotic therapy | 19.2 | 21.1 | 1.9 (1.1 to 2.7) | 9 | <0.001 |
| Smoking cessation advice | 21.8 | 27.9 | 6.0 (3.8 to 8.3) | 22 | <0.001 |
| Initial antibiotic selection | 13.6 | 14.3 | 0.7 (-0.5 to 1.9) | 5 | 0.293 |
| Composite pneumonia score | 13.7 | 17.5 | 3.7 (2.8 to 4.6) | 21 | <0.001 |
| Overall composite | 12.0 | 16.1 | 4.2 (3.2 to 5.1) | 26 | <0.001 |

  • Abbreviations: ACE, angiotensin-converting enzyme; AMI, acute myocardial infarction; CI, confidence interval; LV, left ventricular; PCI, percutaneous coronary intervention.

  • * Performance calculated as the proportion of all eligible patients who received the indicated care. Change in performance estimated based on multivariate regression adjusting for baseline performance, profit status, bed size, rural setting, critical access hospital status, and region, except for PCI received within 90 minutes and thrombolytic agent within 30 minutes, which did not adjust for critical access hospital status.

  • P values and CIs calculated based on bootstrapped standard errors.

High Performing Hospitals in 2008

The likelihood that a hospital was a high performer in 2008 was significantly associated with Joint Commission accreditation status, with a higher proportion of accredited hospitals reaching the 90% threshold compared to never‐accredited hospitals (Table 4). Accredited hospitals attained the 90% threshold significantly more often than non‐accredited hospitals for 13 of the 16 performance measures and all four summary scores. In 2008, 82% of Joint Commission‐accredited hospitals exceeded 90% on the overall summary score, compared to 48% of never‐accredited hospitals. Even after adjusting for differences among hospitals, including performance at baseline, Joint Commission‐accredited hospitals were more likely than never‐accredited hospitals to exceed 90% performance in 2008 (84% vs 69%).

Percent of Hospitals With High Performance* in 2008 by Joint Commission Accreditation Status
| Characteristic | Percent of Hospitals With Performance Over 90%, Adjusted (Actual): Never Accredited (n = 762) | Adjusted (Actual): Always Accredited (n = 2,917) | Odds Ratio, Always vs Never (95% CI) | P Value |
| --- | --- | --- | --- | --- |
| AMI |  |  |  |  |
| Aspirin at admission | 91.8 (71.8) | 93.9 (90.7) | 1.38 (1.00-1.89) | 0.049 |
| Aspirin at discharge | 83.7 (69.2) | 88.2 (85.1) | 1.45 (1.08-1.94) | 0.013 |
| ACE inhibitor for LV dysfunction | 65.1 (65.8) | 77.2 (76.5) | 1.81 (1.32-2.50) | <0.001 |
| Beta blocker at discharge | 84.7 (69.4) | 90.9 (88.4) | 1.80 (1.33-2.44) | <0.001 |
| Smoking cessation advice | 91.1 (81.3) | 95.9 (94.1) | 2.29 (1.31-4.01) | 0.004 |
| PCI received within 90 min | 21.5 (16.2) | 29.9 (29.8) | 1.56 (0.71-3.40) | 0.265 |
| Thrombolytic agent within 30 min | 21.4 (21.3) | 22.7 (23.6) | 1.08 (0.42-2.74) | 0.879 |
| Composite AMI score | 80.5 (56.6) | 88.2 (85.9) | 1.82 (1.37-2.41) | <0.001 |
| Heart failure |  |  |  |  |
| Discharge instructions | 27.0 (26.3) | 38.9 (39.3) | 1.72 (1.30-2.27) | <0.001 |
| Assessment of LV function | 76.2 (45.0) | 89.1 (88.8) | 2.54 (1.95-3.31) | <0.001 |
| ACE inhibitor for LV dysfunction | 58.0 (51.4) | 67.8 (68.5) | 1.52 (1.21-1.92) | <0.001 |
| Smoking cessation advice | 84.2 (62.3) | 90.3 (89.2) | 1.76 (1.28-2.43) | <0.001 |
| Composite heart failure score | 38.2 (27.6) | 61.5 (64.6) | 2.57 (2.03-3.26) | <0.001 |
| Pneumonia |  |  |  |  |
| Oxygenation assessment | 100 (98.2) | 100 (99.8) | 4.38 (1.20-1.32) | 0.025 |
| Pneumococcal vaccination | 44.1 (40.3) | 57.3 (58.2) | 1.70 (1.36-2.12) | <0.001 |
| Timing of initial antibiotic therapy | 74.3 (79.1) | 84.2 (82.7) | 1.85 (1.40-2.46) | <0.001 |
| Smoking cessation advice | 76.2 (54.6) | 85.8 (84.2) | 1.89 (1.42-2.51) | <0.001 |
| Initial antibiotic selection | 51.8 (47.4) | 51.0 (51.8) | 0.97 (0.76-1.25) | 0.826 |
| Composite pneumonia score | 69.3 (59.4) | 85.3 (83.9) | 2.58 (2.01-3.31) | <0.001 |
| Overall composite | 69.0 (47.5) | 83.8 (82.0) | 2.32 (1.76-3.06) | <0.001 |

  • Abbreviations: ACE, angiotensin-converting enzyme; AMI, acute myocardial infarction; CI, confidence interval; LV, left ventricular; PCI, percutaneous coronary intervention.

  • * High performance defined as performance rates of 90% or more.

  • Performance calculated as the proportion of all eligible patients who received the indicated care. Percent of hospitals with performance over 90% estimated based on multivariate logistic regression adjusting for baseline performance, profit status, bed size, rural setting, critical access hospital status, and region, except for PCI received within 90 minutes and thrombolytic agent within 30 minutes, which did not adjust for critical access hospital status. Odds ratios, CIs, and P values based on the logistic regression analysis.

DISCUSSION

While accreditation has face validity and is desired by key stakeholders, it is expensive and time consuming. Stakeholders thus are justified in seeking evidence that accreditation is associated with better quality and safety. Ideally, not only would it be associated with better performance at a single point in time, it would also be associated with the pace of improvement over time.

Our study is the first, to our knowledge, to show the association of accreditation status with improvement in the trajectory of performance over a five‐year period. Taking advantage of the fact that the accreditation process changed substantially at about the same time that TJC and CMS began requiring public reporting of evidence‐based quality measures, we found that hospitals accredited by The Joint Commission had larger improvements in hospital performance from 2004 to 2008 than non‐accredited hospitals, even though the former started with higher baseline performance levels. This accelerated improvement was broad‐based: Accredited hospitals were more likely than non‐accredited hospitals to achieve superior performance (greater than 90% adherence to quality measures) in 2008 on 13 of 16 nationally standardized quality‐of‐care measures, three clinical area summary scores, and an overall score. These results are consistent with other studies that have examined accreditation alongside both process and outcome measures.9-12

It is important to note that the observed accreditation effect reflects a difference between hospitals that have elected to pursue one particular self‐regulatory alternative to more restrictive and extensive public regulatory or licensure requirements and those that have not.39 The non‐accredited hospitals included in this study are not considered to be sub‐standard hospitals. In fact, hospitals not accredited by The Joint Commission have also met the standards set by Medicare in the Conditions of Participation, and our study demonstrates that these hospitals achieved reasonably strong performance on publicly reported quality measures (86.8% adherence on the composite measure in 2008) and considerable improvement over the 5 years of public reporting (average improvement on the composite measure from 2004 to 2008 of 11.8%). Moreover, there are many paths to improvement, and some non‐accredited hospitals achieve stellar performance on quality measures, perhaps by embracing other methods to catalyze improvement.

That said, our data demonstrate that, on average, accredited hospitals achieve superior performance on these evidence‐based quality measures, and their performance improved more strikingly over time. In interpreting these results, it is important to recognize that, while Joint Commission‐accredited hospitals must report quality data, performance on these measures is not directly factored into the accreditation decision; if this were not so, one could argue that this association is a statistical tautology. As it is, we believe that the 2 measures (accreditation and publicly reported quality measures) are two independent assessments of the quality of an organization, and, while the performance measures may not be a gold standard, a measure of their association does provide useful information about the degree to which accreditation is linked to organizational quality.

There are several potential limitations of the current study. First, while we adjusted for most of the known hospital demographic and organizational factors associated with performance, there may be unidentified factors that are associated with both accreditation and performance. This may not be relevant to a patient or payer choosing a hospital based on accreditation status (who may not care whether accreditation is simply associated with higher quality or actually helps produce such quality), but it is relevant to policy‐makers, who may weigh the value of embracing accreditation versus other maneuvers (such as pay for performance or new educational requirements) as a vehicle to promote high‐quality care.

A second limitation is that the specification of the measures can change over time with the acquisition of new clinical knowledge, which makes longitudinal comparison and tracking of results difficult. Two measures had definitional changes with a noticeable impact on longitudinal trends: the AMI measure Primary Percutaneous Coronary Intervention (PCI) Received within 90 Minutes of Hospital Arrival (which in 2004 and 2005 used 120 minutes as the threshold), and the pneumonia measure Antibiotic Within 4 Hours of Arrival (whose threshold changed to six hours in 2007). Other changes included adding angiotensin‐receptor blocker (ARB) therapy in 2005 as an alternative to angiotensin‐converting enzyme inhibitor (ACEI) therapy in the AMI and heart failure measures ACEI or ARB for left ventricular dysfunction. Less significant changes have also been made to the data collection methods for other measures, which could affect the interpretation of changes in performance over time. That said, these changes influenced accredited and non‐accredited hospitals equally, and we cannot think of reasons that they would have created differential impacts.

Another limitation is that the 16 process measures provide a limited picture of hospital performance. Although the three conditions in the study account for over 15% of Medicare admissions,19 it is possible that non‐accredited hospitals performed as well as accredited hospitals on other measures of quality that were not captured by the 16 measures. As more standardized measures are added to The Joint Commission and CMS databases, it will be possible to use the same study methodology to incorporate these additional domains.

From the original cohort of 4798 hospitals reporting in 2004 or 2008, 19% were not included in the study due to missing data in either 2004 or 2008. Almost two‐thirds of the hospitals excluded from the study were missing 2004 data and, of these, 77% were critical access hospitals. The majority of these critical access hospitals (97%) were non‐accredited. This is in contrast to the hospitals missing 2008 data, of which only 13% were critical access. Since reporting of data to Hospital Compare was voluntary in 2004, it appears that critical access hospitals delayed reporting data to Hospital Compare relative to acute care hospitals. Since critical access hospitals tended to have lower rates and smaller sample sizes, and to be non‐accredited, the results of the study would be expected to slightly underestimate the difference between accredited and non‐accredited hospitals.

Finally, while we have argued that the publicly reported quality measures and TJC accreditation decisions provide different lenses into the quality of a given hospital, we cannot entirely exclude the possibility that there are subtle relationships between these two methods that might be partly responsible for our findings. For example, while performance measure rates do not factor directly into the accreditation decision, it is possible that Joint Commission surveyors may be influenced by their knowledge of these rates and biased in their scoring of unrelated standards during the survey process. While we cannot rule out such biases, we are aware of no research on the subject, and have no reason to believe that such biases may have confounded the analysis.

In summary, we found that Joint Commission‐accredited hospitals outperformed non‐accredited hospitals on nationally standardized quality measures of AMI, heart failure, and pneumonia. The performance gap between Joint Commission‐accredited and non‐accredited hospitals increased over the five years of the study. Future studies should incorporate more robust and varied measures of quality as outcomes, and seek to examine the nature of the observed relationship (ie, whether accreditation is simply a marker of higher quality and more rapid improvement, or the accreditation process actually helps create these salutary outcomes).

Acknowledgements

The authors thank Barbara Braun, PhD and Nicole Wineman, MPH, MBA for their literature review on the impact of accreditation, and Barbara Braun, PhD for her thoughtful review of the manuscript.

References
  1. The Joint Commission. Facts About Hospital Accreditation. Available at: http://www.jointcommission.org/assets/1/18/Hospital_Accreditation_1_31_11.pdf. Accessed February 16, 2011.
  2. Niska RW, Burt CW. Emergency Response Planning in Hospitals, United States: 2003–2004. Advance Data from Vital and Health Statistics; No. 391. Hyattsville, MD: National Center for Health Statistics; 2007.
  3. Niska RW, Burt CW. Training for Terrorism‐Related Conditions in Hospitals, United States: 2003–2004. Advance Data from Vital and Health Statistics; No. 380. Hyattsville, MD: National Center for Health Statistics; 2006.
  4. Longo DR, Hewett JE, Ge B, Shubert S. Hospital patient safety: characteristics of best‐performing hospitals. J Healthcare Manag. 2007;52(3):188–205.
  5. Devers KJ, Pham HH, Liu G. What is driving hospitals' patient‐safety efforts? Health Aff. 2004;23(2):103–115.
  6. DeBritz JN, Pollak AN. The impact of trauma centre accreditation on patient outcome. Injury. 2006;37(12):1166–1171.
  7. Lemak CH, Alexander JA. Factors that influence staffing of outpatient substance abuse treatment programs. Psychiatr Serv. 2005;56(8):934–939.
  8. D'Aunno T, Pollack HA. Changes in methadone treatment practices: results from a national panel study, 1988–2000. JAMA. 2002;288:850–856.
  9. Landon BE, Normand ST, Lesser A, et al. Quality of care for the treatment of acute medical conditions in US hospitals. Arch Intern Med. 2006;166:2511–2517.
  10. Chen J, Rathore S, Radford M, Krumholz H. JCAHO accreditation and quality of care for acute myocardial infarction. Health Aff. 2003;22(2):243–254.
  11. Morlock L, Pronovost P, Engineer L, et al. Is JCAHO Accreditation Associated with Better Patient Outcomes in Rural Hospitals? Academy Health Annual Meeting; Boston, MA; June 2005.
  12. Joshi MS. Hospital quality of care: the link between accreditation and mortality. J Clin Outcomes Manag. 2003;10(9):473–480.
  13. Griffith JR, Knutzen SR, Alexander JA. Structural versus outcome measures in hospitals: a comparison of Joint Commission and Medicare outcome scores in hospitals. Qual Manage Health Care. 2002;10(2):29–38.
  14. Barker KN, Flynn EA, Pepper GA, Bates D, Mikeal RL. Medication errors observed in 36 health care facilities. Arch Intern Med. 2002;162:1897–1903.
  15. Menachemi N, Chukmaitov A, Brown LS, Saunders C, Brooks RG. Quality of care in accredited and non‐accredited ambulatory surgical centers. Jt Comm J Qual Patient Saf. 2008;34(9):546–551.
  16. Joint Commission on Accreditation of Healthcare Organizations. Specifications Manual for National Hospital Quality Measures 2009. Available at: http://www.jointcommission.org/PerformanceMeasurement/PerformanceMeasurement/Current+NHQM+Manual.htm. Accessed May 21, 2009.
  17. Hospital Quality Alliance homepage. Available at: http://www.hospitalqualityalliance.org/hospitalqualityalliance/index.html. Accessed May 6, 2010.
  18. Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002–2004. N Engl J Med. 2005;353(3):255–264.
  19. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals—the Hospital Quality Alliance Program. N Engl J Med. 2005;353:265–274.
  20. Institute of Medicine, Committee on Quality Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: The National Academy Press; 2001.
  21. Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff. 2003;22(2):84–94.
  22. Williams SC, Morton DJ, Koss RG, Loeb JM. Performance of top ranked heart care hospitals on evidence‐based process measures. Circulation. 2006;114:558–564.
  23. The Joint Commission Performance Measure Initiatives homepage. Available at: http://www.jointcommission.org/PerformanceMeasurement/PerformanceMeasurement/default.htm. Accessed July 27, 2010.
  24. Palmer RH. Using health outcomes data to compare plans, networks and providers. Int J Qual Health Care. 1998;10(6):477–483.
  25. Mant J. Process versus outcome indicators in the assessment of quality of health care. Int J Qual Health Care. 2001;13:475–480.
  26. Chassin MR. Does paying for performance improve the quality of health care? Med Care Res Rev. 2006;63(1):122S–125S.
  27. Kfoury AG, French TK, Horne BD, et al. Incremental survival benefit with adherence to standardized heart failure core measures: a performance evaluation study of 2958 patients. J Card Fail. 2008;14(2):95–102.
  28. Jha AK, Orav EJ, Li Z, Epstein AM. The inverse relationship between mortality rates and performance in the Hospital Quality Alliance measures. Health Aff. 2007;26(4):1104–1110.
  29. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short‐term mortality. JAMA. 2006;296(1):72–78.
  30. Williams SC, Watt A, Schmaltz SP, Koss RG, Loeb JM. Assessing the reliability of standardized performance measures. Int J Qual Health Care. 2006;18:246–255.
  31. Centers for Medicare and Medicaid Services (CMS). CMS HQI Demonstration Project: Composite Quality Score Methodology Overview. Available at: http://www.cms.hhs.gov/HospitalQualityInits/downloads/HospitalCompositeQualityScoreMethodologyOverview.pdf. Accessed March 8, 2010.
  32. Normand SLT, Wolf RE, Ayanian JZ, McNeil BJ. Assessing the accuracy of hospital performance measures. Med Decis Making. 2007;27:9–20.
  33. Quality Check data download website. Available at: http://www.healthcarequalitydata.org. Accessed May 21, 2009.
  34. Hartz AJ, Krakauer H, Kuhn EM. Hospital characteristics and mortality rates. N Engl J Med. 1989;321(25):1720–1725.
  35. Goldman LE, Dudley RA. United States rural hospital quality in the Hospital Compare database—accounting for hospital characteristics. Health Policy. 2008;87:112–127.
  36. Lehrman WG, Elliott MN, Goldstein E, Beckett MK, Klein DJ, Giordano LA. Characteristics of hospitals demonstrating superior performance in patient experience and clinical process measures of care. Med Care Res Rev. 2010;67(1):38–55.
  37. Werner RM, Goldman LE, Dudley RA. Comparison of change in quality of care between safety‐net and non‐safety‐net hospitals. JAMA. 2008;299(18):2180–2187.
  38. Davison AC, Hinkley DV. Bootstrap Methods and Their Application. New York: Cambridge University Press; 1997: chap 6.
  39. Pawlson LF, Torda P, Roski J, O'Kane ME. The role of accreditation in an era of market‐driven accountability. Am J Manag Care. 2005;11(5):290–293.
Journal of Hospital Medicine. 6(8):454-461.

The Joint Commission (TJC) currently accredits approximately 4546 acute care, critical access, and specialty hospitals,1 accounting for approximately 82% of U.S. hospitals (representing 92% of hospital beds). Hospitals seeking to earn and maintain accreditation undergo unannounced on‐site visits by a team of Joint Commission surveyors at least once every 3 years. These surveys address a variety of domains, including the environment of care, infection prevention and control, information management, adherence to a series of national patient safety goals, and leadership.1

The survey process has changed markedly in recent years. Since 2002, accredited hospitals have been required to continuously collect and submit selected performance measure data to The Joint Commission throughout the three‐year accreditation cycle. The tracer methodology, an evaluation method in which surveyors select a patient to follow through the organization in order to assess compliance with selected standards, was instituted in 2004. Soon thereafter, on‐site surveys went from announced to unannounced in 2006.

Despite the 50+ year history of hospital accreditation in the United States, there has been surprisingly little research on the link between accreditation status and measures of hospital quality (both processes and outcomes). It is only recently that a growing number of studies have attempted to examine this relationship. Empirical support for the relationship between accreditation and other quality measures is emerging. Accredited hospitals have been shown to provide better emergency response planning2 and training3 compared to non‐accredited hospitals. Accreditation has been observed to be a key predictor of patient safety system implementation4 and the primary driver of hospitals' patient‐safety initiatives.5 Accredited trauma centers have been associated with significant reductions in patient mortality,6 and accreditation has been linked to better compliance with evidence‐based methadone and substance abuse treatment.7, 8 Accredited hospitals have been shown to perform better on measures of hospital quality in acute myocardial infarction (AMI), heart failure, and pneumonia care.9, 10 Similarly, accreditation has been associated with lower risk‐adjusted in‐hospital mortality rates for congestive heart failure (CHF), stroke, and pneumonia.11, 12 The results of such research, however, have not always been consistent. Several studies have been unable to demonstrate a relationship between accreditation and quality measures. A study of financial and cost‐related outcome measures found no relationship to accreditation,13 and a study comparing medication error rates across different types of organizations found no relationship to accreditation status.14 Similarly, a comparison of accredited versus non‐accredited ambulatory surgical organizations found that patients were less likely to be hospitalized when treated at an accredited facility for colonoscopy procedures, but no such relationship was observed for the other 4 procedures studied.15

While the research to date has been generally supportive of the link between accreditation and other measures of health care quality, the studies were typically limited to only a few measures and/or involved relatively small samples of accredited and non‐accredited organizations. Over the last decade, however, changes in the performance measurement landscape have created previously unavailable opportunities to more robustly examine the relationship between accreditation and other indicators of hospital quality.

At about the same time that The Joint Commission's accreditation process was becoming more vigorous, the Centers for Medicare and Medicaid Services (CMS) began a program of publicly reporting quality data (http://www.hospitalcompare.hhs.gov). The alignment of Joint Commission and CMS quality measures establishes a mechanism through which accredited and non‐accredited hospitals can be compared using the same nationally standardized quality measures. Therefore, we took advantage of this unique circumstance (a new and more robust TJC accreditation program and the launch of public quality reporting) to examine the relationship between Joint Commission accreditation status and publicly reported hospital quality measures. Moreover, by examining trends in these publicly reported measures over five years and incorporating performance data not found in the Hospital Compare database, we assessed whether accreditation status was also linked to the pace of performance improvement over time.

By using a larger population of hospitals and a broader range of standardized quality measures than previous studies, we seek to address the following questions: Is Joint Commission accreditation status truly associated with higher quality care? And does accreditation status help identify hospitals that are more likely to improve their quality and safety over time?

METHODS

Performance Measures

Since July 2002, U.S. hospitals have been collecting data on standardized measures of quality developed by The Joint Commission and CMS. These measures have been endorsed by the National Quality Forum16 and adopted by the Hospital Quality Alliance.17 The first peer‐reviewed reports using The Joint Commission/CMS measure data confirmed that the measures could successfully monitor and track hospital improvement and identify disparities in performance,18, 19 as called for by the Institute of Medicine's (IOM) landmark 2001 report, Crossing the Quality Chasm.20

In order to promote transparency in health care, both CMS (through the efforts of the Hospital Quality Alliance) and The Joint Commission began publicly reporting measure rates in 2004 using identical measure and data element specifications. It is important to note that during the five‐year span covered by this study, both The Joint Commission and CMS emphasized the reporting of performance measure data. While performance improvement has been the clear objective of these efforts, neither organization established targets for measure rates or set benchmarks for performance improvement. Similarly, while Joint Commission‐accredited hospitals were required to submit performance measure data as a condition of accreditation, their actual performance on the measure rates did not factor into the accreditation decision. In the absence of such direct leverage, it is interesting to note that several studies have demonstrated the positive impact of public reporting on hospital performance,21 and on providing useful information to the general public and health care professionals regarding hospital quality.22

The 16 measures used in this study address hospital compliance with evidence‐based processes of care recommended by the clinical treatment guidelines of respected professional societies.23 Process of care measures are particularly well suited for quality improvement purposes: they identify deficiencies that hospitals can immediately address, and they do not require risk adjustment, whereas outcome measures do not necessarily point directly to obvious performance improvement opportunities.24–26 The measures were also implemented in sets in order to provide hospitals with a more complete portrayal of quality than unrelated individual measures might provide. Research has demonstrated that greater collective performance on these process measures is associated with improved one‐year survival after heart failure hospitalization27 and with lower inpatient mortality among Medicare patients discharged with acute myocardial infarction, heart failure, and pneumonia,28 while other research has shown little association with short‐term outcomes.29

Using the Specifications Manual for National Hospital Inpatient Quality Measures,16 hospitals identify the initial measure populations through International Classification of Diseases, Ninth Revision, Clinical Modification (ICD‐9‐CM) codes and patient age obtained through administrative data. Trained abstractors then collect the measure‐specific data elements through medical record review of the identified measure population or a sample of this population. Measure algorithms then identify patients in the numerator and denominator of each measure.

Process measure rates reflect the number of times a hospital treated a patient in a manner consistent with specific evidence‐based clinical practice guidelines (numerator cases), divided by the number of patients who were eligible to receive such care (denominator cases). Because precise measure specifications permit the exclusion of patients for whom the specific process of care is contraindicated, ideal performance should be characterized by measure rates that approach 100% (although rare or unpredictable situations, and the reality that no measure is perfect in its design, make consistent performance at 100% improbable). Accuracy of the measure data, as measured by data element agreement rates on reabstraction, has been reported to exceed 90%.30

In addition to the individual performance measures, hospital performance was assessed using 3 condition‐specific summary scores, one for each of the 3 clinical areas: acute myocardial infarction, heart failure, and pneumonia. The summary scores are a weighted average of the individual measure rates in the clinical area, where the weights are the sample sizes for each of the measures.31 A summary score was also calculated based on all 16 measures as a summary measure of overall compliance with recommended care.
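The rate and summary-score arithmetic described above can be sketched as follows. This is an illustrative sketch with hypothetical counts, not the study's actual code; because each measure rate is weighted by its own denominator, the summary score reduces to total numerator cases divided by total denominator cases.

```python
# Illustrative sketch (hypothetical counts, not study data): a process measure
# rate is numerator cases / denominator cases, and a condition-specific summary
# score is the sample-size-weighted average of the individual measure rates.

def measure_rate(numerator: int, denominator: int) -> float:
    """Percent of eligible patients who received the recommended care."""
    if denominator == 0:
        raise ValueError("measure has no eligible patients")
    return 100.0 * numerator / denominator

def summary_score(measures: list[tuple[int, int]]) -> float:
    """Weighted average of measure rates, weighted by each measure's
    denominator; algebraically, sum(numerators) / sum(denominators)."""
    total_num = sum(n for n, _ in measures)
    total_denom = sum(d for _, d in measures)
    return measure_rate(total_num, total_denom)

# Hypothetical (numerator, denominator) pairs for four heart failure measures:
hf_measures = [(45, 60), (88, 95), (70, 80), (30, 45)]
score = summary_score(hf_measures)
```

An overall composite (such as the 16-measure score used here) would simply pool the numerators and denominators across all measures in the same way.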

One way of relating performance measurement to standards is to evaluate whether a hospital achieves a high rate of performance, where high is defined as a performance rate of 90% or more. In this context, measures were created from each of the 2004 and 2008 hospital performance rates by dichotomizing them as either less than 90%, or greater than or equal to 90%.32

Data Sources

The data for the measures included in the study are available from the CMS Hospital Compare public databases or from The Joint Commission for discharges in 2004 and 2008.33 These 16 measures, active for all 5 years of the study period, include 7 measures related to acute myocardial infarction care, 4 related to heart failure care, and 5 related to pneumonia care. The majority of the performance data for the study were obtained from the yearly CMS Hospital Compare public download databases (http://www.medicare.gov/Download/DownloadDB.asp). When hospitals reported only to The Joint Commission (154 hospitals, of which 118 were Veterans Administration and 30 were Department of Defense hospitals), data were obtained from The Joint Commission's ORYX database, which is available for public download on The Joint Commission's Quality Check web site.23 Most accredited hospitals participated in Hospital Compare (95.5% of accredited hospitals in 2004 and 93.3% in 2008).

Hospital Characteristics

We then linked the CMS performance data, augmented by The Joint Commission performance data when necessary, to hospital characteristics data in the American Hospital Association (AHA) Annual Survey with respect to profit status, number of beds (<100 beds, 100–299 beds, 300+ beds), rural status, geographic region, and whether or not the hospital was a critical access hospital. (Teaching status, although available in the AHA database, was not used in the analysis, as almost all teaching hospitals are Joint Commission accredited.) These characteristics were chosen since previous research has identified them as being associated with hospital quality.9, 19, 34–37 Data on accreditation status were obtained from The Joint Commission's hospital accreditation database. Hospitals were grouped into 3 accreditation strata based on longitudinal accreditation status between 2004 and 2008: 1) hospitals not accredited during the study period; 2) hospitals accredited for one to four years; and 3) hospitals accredited for the entire study period. Analyses of the middle group (those hospitals accredited for part of the study period; n = 212, 5.4% of the whole sample) led to no significant change in our findings (their performance tended to be midway between always-accredited and never-accredited hospitals) and are thus omitted from our results. Instead, we present only hospitals that were never accredited (n = 762) and those that were accredited through the entire study period (n = 2917).

Statistical Analysis

We compared the hospital characteristics and 2004 performance of Joint Commission‐accredited hospitals with those of hospitals that were not Joint Commission accredited, using χ2 tests for categorical variables and t tests for continuous variables. Linear regression was used to estimate the five‐year change in performance at each hospital as a function of accreditation group, controlling for hospital characteristics. Baseline hospital performance was also included in the regression models to control for ceiling effects for those hospitals with high baseline performance. To summarize the results, we used the regression models to calculate the adjusted change in performance for each accreditation group, and calculated a 95% confidence interval and P value for the difference between the adjusted change scores, using bootstrap methods.38
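The regression-plus-bootstrap step described above can be sketched roughly as follows. This is an assumed implementation on simulated hospital-level data, not the authors' code; the covariate set is reduced to baseline performance alone, and all variable names and effect sizes are illustrative.

```python
# Hypothetical sketch: regress five-year change in a quality measure on
# accreditation status and baseline performance, then bootstrap hospitals
# (resampling with replacement) to get a percentile CI for the adjusted
# difference in change between accreditation groups.
import numpy as np

rng = np.random.default_rng(0)

# Simulated hospital-level data (illustrative only).
n = 500
accredited = rng.integers(0, 2, n)            # 1 = accredited all five years
baseline = rng.normal(75, 10, n)              # 2004 performance rate
change = 10 + 4 * accredited - 0.2 * (baseline - 75) + rng.normal(0, 5, n)

def adjusted_difference(acc, base, chg):
    # OLS of change on [intercept, accreditation, baseline]; the coefficient
    # on accreditation is the adjusted difference in change.
    X = np.column_stack([np.ones_like(base), acc, base])
    beta, *_ = np.linalg.lstsq(X, chg, rcond=None)
    return beta[1]

point = adjusted_difference(accredited, baseline, change)

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)               # resample hospitals
    boot.append(adjusted_difference(accredited[idx], baseline[idx], change[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"adjusted difference {point:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```

The full models would additionally adjust for profit status, bed size, rural setting, critical access status, and region, but the bootstrap logic is unchanged.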

Next we analyzed the association between accreditation and the likelihood of high 2008 hospital performance by dichotomizing the hospital rates at a 90% cut point and using logistic regression to estimate the probability of high performance as a function of accreditation group, controlling for hospital characteristics and baseline hospital performance. The logistic models were then used to calculate adjusted rates of high performance for each accreditation group when presenting the results.
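A rough sketch of this step follows, assuming a standard maximum-likelihood logistic fit (here via Newton-Raphson/IRLS) and marginal standardization to produce adjusted rates; the data are simulated and the code is illustrative, not the authors' implementation.

```python
# Hypothetical sketch: fit P(high performer in 2008) on accreditation and
# baseline performance, then compute adjusted rates by setting every
# hospital's accreditation to 0 or 1 and averaging the predicted
# probabilities ("marginal standardization").
import numpy as np

rng = np.random.default_rng(1)
n = 2000
accredited = rng.integers(0, 2, n)
baseline = rng.normal(75, 10, n)
logit = -1.0 + 0.8 * accredited + 0.05 * (baseline - 75)
high_2008 = rng.random(n) < 1 / (1 + np.exp(-logit))   # dichotomized outcome

X = np.column_stack([np.ones(n), accredited, baseline])
y = high_2008.astype(float)

beta = np.zeros(3)
for _ in range(25):                      # Newton-Raphson / IRLS iterations
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    grad = X.T @ (y - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

def adjusted_rate(acc_value):
    Xc = X.copy()
    Xc[:, 1] = acc_value                 # counterfactually set accreditation
    return (1 / (1 + np.exp(-Xc @ beta))).mean()

print(f"adjusted high-performance rate: never {adjusted_rate(0):.1%}, "
      f"always {adjusted_rate(1):.1%}")
```

Averaging predictions over the observed covariate distribution is what lets the adjusted rates be compared on the probability scale rather than as odds ratios alone.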

We used two‐sided tests for significance; P < 0.05 was considered statistically significant. This study had no external funding source.

RESULTS

For the 16 individual measures used in this study, a total of 4798 hospitals participated in Hospital Compare or reported data to The Joint Commission in 2004 or 2008. Of these, 907 were excluded because the performance data were not available for either 2004 (576 hospitals) or 2008 (331 hospitals), resulting in a missing value for the change in performance score. Therefore, 3891 hospitals (81%) were included in the final analyses. The 907 excluded hospitals were more likely to be rural (50.8% vs 17.5%), be critical access hospitals (53.9% vs 13.9%), have less than 100 beds (77.4% vs 37.6%), be government owned (34.6% vs 22.1%), be for profit (61.4% vs 49.5%), or be unaccredited (79.8% vs 45.8% in 2004; 75.6% vs 12.8% in 2008), compared with the included hospitals (P < 0.001 for all comparisons).

Hospital Performance at Baseline

Joint Commission‐accredited hospitals were more likely to be large, for profit, or urban, and less likely to be government owned, located in the Midwest, or critical access (Table 1). Non‐accredited hospitals performed more poorly than accredited hospitals on most of the publicly reported measures in 2004; the only exception was the timing of initial antibiotic therapy measure for pneumonia (Table 2).

Table 1. Hospital Characteristics in 2004 Stratified by Joint Commission Accreditation Status

Characteristic                        | Non-Accredited (n = 786) | Accredited (n = 3105) | P Value*
Profit status, No. (%)                |              |              | <0.001
  For profit                          | 60 (7.6)     | 586 (18.9)   |
  Government                          | 289 (36.8)   | 569 (18.3)   |
  Not for profit                      | 437 (55.6)   | 1,950 (62.8) |
Census region, No. (%)                |              |              | <0.001
  Northeast                           | 72 (9.2)     | 497 (16.0)   |
  Midwest                             | 345 (43.9)   | 716 (23.1)   |
  South                               | 248 (31.6)   | 1,291 (41.6) |
  West                                | 121 (15.4)   | 601 (19.4)   |
Rural setting, No. (%)                |              |              | <0.001
  Rural                               | 495 (63.0)   | 833 (26.8)   |
  Urban                               | 291 (37.0)   | 2,272 (73.2) |
Bed size                              |              |              | <0.001
  <100 beds                           | 603 (76.7)   | 861 (27.7)   |
  100–299 beds                        | 158 (20.1)   | 1,444 (46.5) |
  300+ beds                           | 25 (3.2)     | 800 (25.8)   |
Critical access hospital status, No. (%) |           |              | <0.001
  Critical access hospital            | 376 (47.8)   | 164 (5.3)    |
  Acute care hospital                 | 410 (52.2)   | 2,941 (94.7) |

* P values based on χ2 tests for categorical variables.
Table 2. Hospital Raw Performance in 2004 and 2008, Stratified by Joint Commission Accreditation Status

Quality Measure, Mean (SD)*           | 2004 Non-Accredited (n = 786) | 2004 Accredited (n = 3105) | P Value† | 2008 Non-Accredited (n = 950) | 2008 Accredited (n = 2,941) | P Value†
AMI                                   |             |             |        |             |             |
  Aspirin at admission                | 87.1 (20.0) | 92.6 (9.4)  | <0.001 | 88.6 (22.1) | 96.0 (8.6)  | <0.001
  Aspirin at discharge                | 81.2 (26.1) | 88.5 (14.9) | <0.001 | 87.8 (22.7) | 94.8 (10.1) | <0.001
  ACE inhibitor for LV dysfunction    | 72.1 (33.4) | 76.7 (22.9) | 0.010  | 83.2 (30.5) | 92.1 (14.8) | <0.001
  Beta blocker at discharge           | 78.2 (27.9) | 87.0 (16.2) | <0.001 | 87.4 (23.4) | 95.5 (9.9)  | <0.001
  Smoking cessation advice            | 59.6 (40.8) | 74.5 (29.9) | <0.001 | 87.2 (29.5) | 97.2 (11.3) | <0.001
  PCI received within 90 min          | 60.3 (26.2) | 60.6 (23.8) | 0.946  | 70.1 (24.8) | 77.7 (19.2) | 0.006
  Thrombolytic agent within 30 min    | 27.9 (35.5) | 32.1 (32.8) | 0.152  | 31.4 (40.7) | 43.7 (40.2) | 0.008
  Composite AMI score                 | 80.6 (20.3) | 87.7 (10.4) | <0.001 | 85.8 (20.0) | 94.6 (8.1)  | <0.001
Heart failure                         |             |             |        |             |             |
  Discharge instructions              | 36.8 (32.3) | 49.7 (28.2) | <0.001 | 67.4 (29.6) | 82.3 (16.4) | <0.001
  Assessment of LV function           | 63.3 (27.6) | 83.6 (14.9) | <0.001 | 79.6 (24.4) | 95.6 (8.1)  | <0.001
  ACE inhibitor for LV dysfunction    | 70.8 (27.6) | 75.7 (16.3) | <0.001 | 82.5 (22.7) | 91.5 (9.7)  | <0.001
  Smoking cessation advice            | 57.1 (36.4) | 68.6 (26.2) | <0.001 | 81.5 (29.9) | 96.1 (10.7) | <0.001
  Composite heart failure score       | 56.3 (24.1) | 71.2 (15.6) | <0.001 | 75.4 (22.3) | 90.4 (9.4)  | <0.001
Pneumonia                             |             |             |        |             |             |
  Oxygenation assessment              | 97.4 (7.3)  | 98.4 (4.0)  | <0.001 | 99.0 (3.2)  | 99.7 (1.2)  | <0.001
  Pneumococcal vaccination            | 45.5 (29.0) | 48.7 (26.2) | 0.007  | 79.9 (21.3) | 87.9 (12.9) | <0.001
  Timing of initial antibiotic therapy| 80.6 (13.1) | 70.9 (14.0) | <0.001 | 93.4 (9.2)  | 93.6 (6.1)  | 0.525
  Smoking cessation advice            | 56.6 (33.1) | 65.7 (24.8) | <0.001 | 81.6 (25.1) | 94.4 (11.4) | <0.001
  Initial antibiotic selection        | 73.6 (19.6) | 74.1 (13.4) | 0.508  | 86.1 (13.8) | 88.6 (8.7)  | <0.001
  Composite pneumonia score           | 77.2 (10.2) | 76.6 (8.2)  | 0.119  | 90.0 (9.6)  | 93.6 (4.9)  | <0.001
Overall composite                     | 73.7 (10.6) | 78.0 (8.7)  | <0.001 | 86.8 (11.1) | 93.3 (5.0)  | <0.001

Abbreviations: ACE, angiotensin-converting enzyme; AMI, acute myocardial infarction; LV, left ventricular; PCI, percutaneous coronary intervention.
* Calculated as the proportion of all eligible patients who received the indicated care.
† P values based on t tests.

Five‐Year Changes in Hospital Performance

Between 2004 and 2008, Joint Commission‐accredited hospitals improved their performance more than did non‐accredited hospitals (Table 3). After adjustment for baseline characteristics previously shown to be associated with performance, the overall relative (absolute) difference in improvement was 26% (4.2%) (AMI score difference 67% [3.9%], CHF 48% [10.1%], and pneumonia 21% [3.7%]). Accredited hospitals improved their performance significantly more than non‐accredited hospitals for 13 of the 16 individual performance measures.

Table 3. Performance Change and Difference in Performance Change From 2004 to 2008 by Joint Commission Accreditation Status

Characteristic                        | Change in Performance,* Never Accredited (n = 762) | Change in Performance,* Always Accredited (n = 2,917) | Absolute Difference, Always vs Never (95% CI)† | Relative Difference, % Always vs Never | P Value†
AMI                                   |      |      |                   |     |
  Aspirin at admission                | −1.1 | 2.0  | 3.2 (1.2 to 5.2)  | 160 | 0.001
  Aspirin at discharge                | 4.7  | 8.0  | 3.2 (1.4 to 5.1)  | 40  | 0.008
  ACE inhibitor for LV dysfunction    | 8.5  | 15.9 | 7.4 (3.7 to 11.5) | 47  | <0.001
  Beta blocker at discharge           | 4.4  | 8.4  | 4.0 (2.0 to 6.0)  | 48  | <0.001
  Smoking cessation advice            | 18.6 | 22.4 | 3.7 (1.1 to 6.9)  | 17  | 0.012
  PCI received within 90 min          | 6.3  | 13.0 | 6.7 (−0.3 to 14.2)| 52  | 0.070
  Thrombolytic agent within 30 min    | −0.6 | 5.4  | 6.1 (−9.5 to 20.4)| 113 | 0.421
  Composite AMI score                 | 2.0  | 5.8  | 3.9 (2.2 to 5.5)  | 67  | <0.001
Heart failure                         |      |      |                   |     |
  Discharge instructions              | 24.2 | 35.6 | 11.4 (8.7 to 14.0)| 32  | <0.001
  Assessment of LV function           | 4.6  | 12.8 | 8.3 (6.6 to 10.0) | 65  | <0.001
  ACE inhibitor for LV dysfunction    | 10.1 | 15.2 | 5.1 (3.5 to 6.8)  | 34  | <0.001
  Smoking cessation advice            | 20.5 | 26.4 | 6.0 (3.3 to 8.7)  | 23  | <0.001
  Composite heart failure score       | 10.8 | 20.9 | 10.1 (8.3 to 12.0)| 48  | <0.001
Pneumonia                             |      |      |                   |     |
  Oxygenation assessment              | 0.9  | 1.4  | 0.6 (0.3 to 0.9)  | 43  | <0.001
  Pneumococcal vaccination            | 33.4 | 40.9 | 7.5 (5.6 to 9.4)  | 18  | <0.001
  Timing of initial antibiotic therapy| 19.2 | 21.1 | 1.9 (1.1 to 2.7)  | 9   | <0.001
  Smoking cessation advice            | 21.8 | 27.9 | 6.0 (3.8 to 8.3)  | 22  | <0.001
  Initial antibiotic selection        | 13.6 | 14.3 | 0.7 (−0.5 to 1.9) | 5   | 0.293
  Composite pneumonia score           | 13.7 | 17.5 | 3.7 (2.8 to 4.6)  | 21  | <0.001
Overall composite                     | 12.0 | 16.1 | 4.2 (3.2 to 5.1)  | 26  | <0.001

Abbreviations: ACE, angiotensin-converting enzyme; AMI, acute myocardial infarction; CI, confidence interval; LV, left ventricular; PCI, percutaneous coronary intervention.
* Performance calculated as the proportion of all eligible patients who received the indicated care. Change in performance estimated based on multivariate regression adjusting for baseline performance, profit status, bed size, rural setting, critical access hospital status, and region, except for PCI received within 90 minutes and thrombolytic agent within 30 minutes, which did not adjust for critical access hospital status.
† P values and CIs calculated based on bootstrapped standard errors.

High Performing Hospitals in 2008

The likelihood that a hospital was a high performer in 2008 was significantly associated with Joint Commission accreditation status, with a higher proportion of accredited hospitals reaching the 90% threshold compared to never‐accredited hospitals (Table 4). Accredited hospitals attained the 90% threshold significantly more often than non‐accredited hospitals for 13 of the 16 performance measures and all four summary scores. In 2008, 82% of Joint Commission‐accredited hospitals scored above 90% on the overall summary score, compared to 48% of never‐accredited hospitals. Even after adjusting for differences among hospitals, including performance at baseline, Joint Commission‐accredited hospitals were more likely than never‐accredited hospitals to exceed 90% performance in 2008 (84% vs 69%).

Table 4. Percent of Hospitals With High Performance* in 2008 by Joint Commission Accreditation Status

Characteristic                        | Never Accredited (n = 762), Adjusted (Actual) % | Always Accredited (n = 2,917), Adjusted (Actual) % | Odds Ratio, Always vs Never (95% CI)† | P Value†
AMI                                   |             |             |                    |
  Aspirin at admission                | 91.8 (71.8) | 93.9 (90.7) | 1.38 (1.00 to 1.89)| 0.049
  Aspirin at discharge                | 83.7 (69.2) | 88.2 (85.1) | 1.45 (1.08 to 1.94)| 0.013
  ACE inhibitor for LV dysfunction    | 65.1 (65.8) | 77.2 (76.5) | 1.81 (1.32 to 2.50)| <0.001
  Beta blocker at discharge           | 84.7 (69.4) | 90.9 (88.4) | 1.80 (1.33 to 2.44)| <0.001
  Smoking cessation advice            | 91.1 (81.3) | 95.9 (94.1) | 2.29 (1.31 to 4.01)| 0.004
  PCI received within 90 min          | 21.5 (16.2) | 29.9 (29.8) | 1.56 (0.71 to 3.40)| 0.265
  Thrombolytic agent within 30 min    | 21.4 (21.3) | 22.7 (23.6) | 1.08 (0.42 to 2.74)| 0.879
  Composite AMI score                 | 80.5 (56.6) | 88.2 (85.9) | 1.82 (1.37 to 2.41)| <0.001
Heart failure                         |             |             |                    |
  Discharge instructions              | 27.0 (26.3) | 38.9 (39.3) | 1.72 (1.30 to 2.27)| <0.001
  Assessment of LV function           | 76.2 (45.0) | 89.1 (88.8) | 2.54 (1.95 to 3.31)| <0.001
  ACE inhibitor for LV dysfunction    | 58.0 (51.4) | 67.8 (68.5) | 1.52 (1.21 to 1.92)| <0.001
  Smoking cessation advice            | 84.2 (62.3) | 90.3 (89.2) | 1.76 (1.28 to 2.43)| <0.001
  Composite heart failure score       | 38.2 (27.6) | 61.5 (64.6) | 2.57 (2.03 to 3.26)| <0.001
Pneumonia                             |             |             |                    |
  Oxygenation assessment              | 100 (98.2)  | 100 (99.8)  | 4.38 (1.20 to 1.32)| 0.025
  Pneumococcal vaccination            | 44.1 (40.3) | 57.3 (58.2) | 1.70 (1.36 to 2.12)| <0.001
  Timing of initial antibiotic therapy| 74.3 (79.1) | 84.2 (82.7) | 1.85 (1.40 to 2.46)| <0.001
  Smoking cessation advice            | 76.2 (54.6) | 85.8 (84.2) | 1.89 (1.42 to 2.51)| <0.001
  Initial antibiotic selection        | 51.8 (47.4) | 51.0 (51.8) | 0.97 (0.76 to 1.25)| 0.826
  Composite pneumonia score           | 69.3 (59.4) | 85.3 (83.9) | 2.58 (2.01 to 3.31)| <0.001
Overall composite                     | 69.0 (47.5) | 83.8 (82.0) | 2.32 (1.76 to 3.06)| <0.001

Abbreviations: ACE, angiotensin-converting enzyme; AMI, acute myocardial infarction; CI, confidence interval; LV, left ventricular; PCI, percutaneous coronary intervention.
* High performance defined as performance rates of 90% or more.
† Performance calculated as the proportion of all eligible patients who received the indicated care. Percent of hospitals with performance over 90% estimated based on multivariate logistic regression adjusting for baseline performance, profit status, bed size, rural setting, critical access hospital status, and region, except for PCI received within 90 minutes and thrombolytic agent within 30 minutes, which did not adjust for critical access hospital status. Odds ratios, CIs, and P values based on the logistic regression analysis.

DISCUSSION

While accreditation has face validity and is desired by key stakeholders, it is expensive and time consuming. Stakeholders thus are justified in seeking evidence that accreditation is associated with better quality and safety. Ideally, not only would it be associated with better performance at a single point in time, it would also be associated with the pace of improvement over time.

Our study is the first, to our knowledge, to show the association of accreditation status with improvement in the trajectory of performance over a five‐year period. Taking advantage of the fact that the accreditation process changed substantially at about the same time that TJC and CMS began requiring public reporting of evidence‐based quality measures, we found that hospitals accredited by The Joint Commission had larger improvements in hospital performance from 2004 to 2008 than non‐accredited hospitals, even though the former started with higher baseline performance levels. This accelerated improvement was broad‐based: Accredited hospitals were more likely to achieve superior performance (greater than 90% adherence to quality measures) in 2008 on 13 of 16 nationally standardized quality‐of‐care measures, three clinical area summary scores, and an overall score compared to hospitals that were not accredited. These results are consistent with other studies that have looked at both process and outcome measures and accreditation.9–12

It is important to note that the observed accreditation effect reflects a difference between hospitals that have elected to pursue one particular self-regulatory alternative to more restrictive and extensive public regulatory or licensure requirements and those that have not.39 The non-accredited hospitals included in this study are not considered sub-standard hospitals. In fact, hospitals not accredited by The Joint Commission have also met the standards set by Medicare in the Conditions of Participation, and our study demonstrates that these hospitals achieved reasonably strong performance on publicly reported quality measures (86.8% adherence on the composite measure in 2008) and considerable improvement over the five years of public reporting (an average improvement on the composite measure of 11.8% from 2004 to 2008). Moreover, there are many paths to improvement, and some non-accredited hospitals achieve stellar performance on quality measures, perhaps by embracing other methods to catalyze improvement.

That said, our data demonstrate that, on average, accredited hospitals achieved superior performance on these evidence-based quality measures, and their performance improved more strikingly over time. In interpreting these results, it is important to recognize that, while Joint Commission-accredited hospitals must report quality data, performance on these measures is not directly factored into the accreditation decision; if it were, one could argue that the association is a statistical tautology. As it is, we believe that accreditation and the publicly reported quality measures are two independent assessments of the quality of an organization, and, while the performance measures may not be a gold standard, their association does provide useful information about the degree to which accreditation is linked to organizational quality.

There are several potential limitations of the current study. First, while we adjusted for most of the known hospital demographic and organizational factors associated with performance, there may be unidentified factors that are associated with both accreditation and performance. This may not be relevant to a patient or payer choosing a hospital based on accreditation status (who may not care whether accreditation is simply associated with higher quality or actually helps produce such quality), but it is relevant to policy‐makers, who may weigh the value of embracing accreditation versus other maneuvers (such as pay for performance or new educational requirements) as a vehicle to promote high‐quality care.

A second limitation is that the specifications of the measures can change over time as new clinical knowledge accumulates, which complicates longitudinal comparison and tracking of results. Two measures had definitional changes with a noticeable impact on longitudinal trends: the AMI measure Primary Percutaneous Coronary Intervention (PCI) Received Within 90 Minutes of Hospital Arrival (which in 2004 and 2005 used 120 minutes as the threshold), and the pneumonia measure Antibiotic Within 4 Hours of Arrival (whose threshold changed to six hours in 2007). Another change, in 2005, added angiotensin-receptor blocker (ARB) therapy as an alternative to angiotensin-converting enzyme inhibitor (ACEI) therapy in the AMI and heart failure measures ACEI or ARB for Left Ventricular Dysfunction. Less significant changes have been made to the data collection methods for other measures, which could affect the interpretation of changes in performance over time. That said, these changes influenced accredited and non-accredited hospitals equally, and we cannot think of reasons that they would have created differential impacts.

Another limitation is that the 16 process measures provide a limited picture of hospital performance. Although the three conditions in the study account for over 15% of Medicare admissions,19 it is possible that non‐accredited hospitals performed as well as accredited hospitals on other measures of quality that were not captured by the 16 measures. As more standardized measures are added to The Joint Commission and CMS databases, it will be possible to use the same study methodology to incorporate these additional domains.

From the original cohort of 4798 hospitals reporting in 2004 or 2008, 19% were excluded from the study due to missing data in either 2004 or 2008. Almost two-thirds of the excluded hospitals were missing 2004 data and, of these, 77% were critical access hospitals, the majority of which (97%) were non-accredited. By contrast, only 13% of the hospitals missing 2008 data were critical access. Since reporting of data to Hospital Compare was voluntary in 2004, it appears that critical access hospitals began reporting to Hospital Compare later than acute care hospitals did. Because critical access hospitals tended to have lower performance rates and smaller sample sizes, and tended to be non-accredited, the results of the study would be expected to slightly underestimate the difference between accredited and non-accredited hospitals.

Finally, while we have argued that the publicly reported quality measures and TJC accreditation decisions provide different lenses into the quality of a given hospital, we cannot entirely exclude the possibility that there are subtle relationships between these two methods that might be partly responsible for our findings. For example, while performance measure rates do not factor directly into the accreditation decision, it is possible that Joint Commission surveyors may be influenced by their knowledge of these rates and biased in their scoring of unrelated standards during the survey process. While we cannot rule out such biases, we are aware of no research on the subject, and have no reason to believe that such biases may have confounded the analysis.

In summary, we found that Joint Commission‐accredited hospitals outperformed non‐accredited hospitals on nationally standardized quality measures of AMI, heart failure, and pneumonia. The performance gap between Joint Commission‐accredited and non‐accredited hospitals increased over the five years of the study. Future studies should incorporate more robust and varied measures of quality as outcomes, and seek to examine the nature of the observed relationship (ie, whether accreditation is simply a marker of higher quality and more rapid improvement, or the accreditation process actually helps create these salutary outcomes).

Acknowledgements

The authors thank Barbara Braun, PhD and Nicole Wineman, MPH, MBA for their literature review on the impact of accreditation, and Barbara Braun, PhD for her thoughtful review of the manuscript.

The Joint Commission (TJC) currently accredits approximately 4546 acute care, critical access, and specialty hospitals,1 accounting for approximately 82% of U.S. hospitals (representing 92% of hospital beds). Hospitals seeking to earn and maintain accreditation undergo unannounced on‐site visits by a team of Joint Commission surveyors at least once every 3 years. These surveys address a variety of domains, including the environment of care, infection prevention and control, information management, adherence to a series of national patient safety goals, and leadership.1

The survey process has changed markedly in recent years. Since 2002, accredited hospitals have been required to continuously collect and submit selected performance measure data to The Joint Commission throughout the three‐year accreditation cycle. The tracer methodology, an evaluation method in which surveyors select a patient to follow through the organization in order to assess compliance with selected standards, was instituted in 2004. Soon thereafter, on‐site surveys went from announced to unannounced in 2006.

Despite the 50+ year history of hospital accreditation in the United States, there has been surprisingly little research on the link between accreditation status and measures of hospital quality (both processes and outcomes). It is only recently that a growing number of studies have attempted to examine this relationship. Empirical support for the relationship between accreditation and other quality measures is emerging. Accredited hospitals have been shown to provide better emergency response planning2 and training3 compared to non‐accredited hospitals. Accreditation has been observed to be a key predictor of patient safety system implementation4 and the primary driver of hospitals' patient‐safety initiatives.5 Accredited trauma centers have been associated with significant reductions in patient mortality,6 and accreditation has been linked to better compliance with evidence‐based methadone and substance abuse treatment.7, 8 Accredited hospitals have been shown to perform better on measures of hospital quality in acute myocardial infarction (AMI), heart failure, and pneumonia care.9, 10 Similarly, accreditation has been associated with lower risk‐adjusted in‐hospital mortality rates for congestive heart failure (CHF), stroke, and pneumonia.11, 12 The results of such research, however, have not always been consistent. Several studies have been unable to demonstrate a relationship between accreditation and quality measures. A study of financial and cost‐related outcome measures found no relationship to accreditation,13 and a study comparing medication error rates across different types of organizations found no relationship to accreditation status.14 Similarly, a comparison of accredited versus non‐accredited ambulatory surgical organizations found that patients were less likely to be hospitalized when treated at an accredited facility for colonoscopy procedures, but no such relationship was observed for the other 4 procedures studied.15

While the research to date has been generally supportive of the link between accreditation and other measures of health care quality, the studies were typically limited to only a few measures and/or involved relatively small samples of accredited and non‐accredited organizations. Over the last decade, however, changes in the performance measurement landscape have created previously unavailable opportunities to more robustly examine the relationship between accreditation and other indicators of hospital quality.

At about the same time that The Joint Commission's accreditation process was becoming more vigorous, the Centers for Medicare and Medicaid Services (CMS) began a program of publicly reporting quality data (http://www.hospitalcompare.hhs.gov). The alignment of Joint Commission and CMS quality measures establishes a mechanism through which accredited and non‐accredited hospitals can be compared using the same nationally standardized quality measures. Therefore, we took advantage of this unique circumstancea new and more robust TJC accreditation program and the launching of public quality reportingto examine the relationship between Joint Commission accreditation status and publicly reported hospital quality measures. Moreover, by examining trends in these publicly reported measures over five years and incorporating performance data not found in the Hospital Compare Database, we assessed whether accreditation status was also linked to the pace of performance improvement over time.

By using a population of hospitals and a range of standardized quality measures greater than those used in previous studies, we seek to address the following questions: Is Joint Commission accreditation status truly associated with higher quality care? And does accreditation status help identify hospitals that are more likely to improve their quality and safety over time?

METHODS

Performance Measures

Since July 2002, U.S. hospitals have been collecting data on standardized measures of quality developed by The Joint Commission and CMS. These measures have been endorsed by the National Quality Forum16 and adopted by the Hospital Quality Alliance.17 The first peer‐reviewed reports using The Joint Commission/CMS measure data confirmed that the measures could successfully monitor and track hospital improvement and identify disparities in performance,18, 19 as called for by the Institute of Medicine's (IOM) landmark 2001 report, Crossing the Quality Chasm.20

In order to promote transparency in health care, both CMSthrough the efforts of the Hospital Quality Allianceand The Joint Commission began publicly reporting measure rates in 2004 using identical measure and data element specifications. It is important to note that during the five‐year span covered by this study, both The Joint Commission and CMS emphasized the reporting of performance measure data. While performance improvement has been the clear objective of these efforts, neither organization established targets for measure rates or set benchmarks for performance improvement. Similarly, while Joint Commission‐accredited hospitals were required to submit performance measure data as a condition of accreditation, their actual performance on the measure rates did not factor into the accreditation decision. In the absence of such direct leverage, it is interesting to note that several studies have demonstrated the positive impact of public reporting on hospital performance,21 and on providing useful information to the general public and health care professionals regarding hospital quality.22

The 16 measures used in this study address hospital compliance with evidence‐based processes of care recommended by the clinical treatment guidelines of respected professional societies.23 Process of care measures are particularly well suited for quality improvement purposes, as they can identify deficiencies which can be immediately addressed by hospitals and do not require risk‐adjustment, as opposed to outcome measures, which do not necessarily directly identify obvious performance improvement opportunities.2426 The measures were also implemented in sets in order to provide hospitals with a more complete portrayal of quality than might be provided using unrelated individual measures. Research has demonstrated that greater collective performance on these process measures is associated with improved one‐year survival after heart failure hospitalization27 and inpatient mortality for those Medicare patients discharged with acute myocardial infarction, heart failure, and pneumonia,28 while other research has shown little association with short‐term outcomes.29

Using the Specifications Manual for National Hospital Inpatient Quality Measures,16 hospitals identify the initial measure populations through International Classification of Diseases (ICD‐CM‐9) codes and patient age obtained through administrative data. Trained abstractors then collect the data for measure‐specific data elements through medical record review on the identified measure population or a sample of this population. Measure algorithms then identify patients in the numerator and denominator of each measure.

Process measure rates reflect the number of times a hospital treated a patient in a manner consistent with specific evidence‐based clinical practice guidelines (numerator cases), divided by the number of patients who were eligible to receive such care (denominator cases). Because precise measure specifications permit the exclusion of patients contraindicated to receive the specific process of care for the measure, ideal performance should be characterized by measure rates that approach 100% (although rare or unpredictable situations, and the reality that no measure is perfect in its design, make consistent performance at 100% improbable). Accuracy of the measure data, as measured by data element agreement rates on reabstraction, has been reported to exceed 90%.30

In addition to the individual performance measures, hospital performance was assessed using 3 condition‐specific summary scores, one for each of the 3 clinical areas: acute myocardial infarction, heart failure, and pneumonia. The summary scores are a weighted average of the individual measure rates in the clinical area, where the weights are the sample sizes for each of the measures.31 A summary score was also calculated based on all 16 measures as a summary measure of overall compliance with recommended care.

One way of studying performance measurement in a way that relates to standards is to evaluate whether a hospital achieves a high rate of performance, where high is defined as a performance rate of 90% or more. In this context, measures were created from each of the 2004 and 2008 hospital performance rates by dichotomizing them as being either less than 90%, or greater than or equal to 90%.32

Data Sources

The data for the measures included in the study are available on the CMS Hospital Compare public databases or The Joint Commission for discharges in 2004 and 2008.33 These 16 measures, active for all 5 years of the study period, include: 7 measures related to acute myocardial infarction care; 4 measures related to heart failure care; and 5 measures related to pneumonia care. The majority of the performance data for the study were obtained from the yearly CMS Hospital Compare public download databases (http://www.medicare.gov/Download/DownloadDB.asp). When hospitals only reported to The Joint Commission (154 hospitals; of which 118 are Veterans Administration and 30 are Department of Defense hospitals), data were obtained from The Joint Commission's ORYX database, which is available for public download on The Joint Commission's Quality Check web site.23 Most accredited hospitals participated in Hospital Compare (95.5% of accredited hospitals in 2004 and 93.3% in 2008).

Hospital Characteristics

We then linked the CMS performance data, augmented by The Joint Commission performance data when necessary, to hospital characteristics data in the American Hospital Association (AHA) Annual Survey with respect to profit status, number of beds (<100 beds, 100299 beds, 300+ beds), rural status, geographic region, and whether or not the hospital was a critical access hospital. (Teaching status, although available in the AHA database, was not used in the analysis, as almost all teaching hospitals are Joint Commission accredited.) These characteristics were chosen since previous research has identified them as being associated with hospital quality.9, 19, 3437 Data on accreditation status were obtained from The Joint Commission's hospital accreditation database. Hospitals were grouped into 3 hospital accreditation strata based on longitudinal hospital accreditation status between 2004 and 2008: 1) hospitals not accredited in the study period; 2) hospitals accredited between one to four years; and 3) hospitals accredited for the entire study period. Analyses of this middle group (those hospitals accredited for part of the study period; n = 212, 5.4% of the whole sample) led to no significant change in our findings (their performance tended to be midway between always accredited and never‐accredited hospitals) and are thus omitted from our results. Instead, we present only hospitals who were never accredited (n = 762) and those who were accredited through the entire study period (n = 2917).

Statistical Analysis

We assessed the relationship between hospital characteristics and 2004 performance of Joint Commission‐accredited hospitals with hospitals that were not Joint Commission accredited using 2 tests for categorical variables and t tests for continuous variables. Linear regression was used to estimate the five‐year change in performance at each hospital as a function of accreditation group, controlling for hospital characteristics. Baseline hospital performance was also included in the regression models to control for ceiling effects for those hospitals with high baseline performance. To summarize the results, we used the regression models to calculate adjusted change in performance for each accreditation group, and calculated a 95% confidence interval and P value for the difference between the adjusted change scores, using bootstrap methods.38

Next we analyzed the association between accreditation and the likelihood of high 2008 hospital performance by dichotomizing the hospital rates, using a 90% cut point, and using logistic regression to estimate the probability of high performance as a function of accreditation group, controlling for hospital characteristics and baseline hospital performance. The logistic models were then used to calculate adjusted rates of high performance for each accreditation group in presenting the results.

We used two‐sided tests for significance; P < 0.05 was considered statistically significant. This study had no external funding source.

RESULTS

For the 16 individual measures used in this study, a total of 4798 hospitals participated in Hospital Compare or reported data to The Joint Commission in 2004 or 2008. Of these, 907 were excluded because the performance data were not available for either 2004 (576 hospitals) or 2008 (331 hospitals) resulting in a missing value for the change in performance score. Therefore, 3891 hospitals (81%) were included in the final analyses. The 907 excluded hospitals were more likely to be rural (50.8% vs 17.5%), be critical access hospitals (53.9% vs 13.9%), have less than 100 beds (77.4% vs 37.6%), be government owned (34.6% vs 22.1%), be for profit (61.4% vs 49.5%), or be unaccredited (79.8% vs 45.8% in 2004; 75.6% vs 12.8% in 2008), compared with the included hospitals (P < 0.001 for all comparisons).

Hospital Performance at Baseline

Joint Commission‐accredited hospitals were more likely to be large, for profit, or urban, and less likely to be government owned, from the Midwest, or critical access (Table 1). Non‐accredited hospitals performed more poorly than accredited hospitals on most of the publicly reported measures in 2004; the only exception is the timing of initial antibiotic therapy measure for pneumonia (Table 2).

Hospital Characteristics in 2004 Stratified by Joint Commission Accreditation Status
CharacteristicNon‐Accredited (n = 786)Accredited (n = 3105)P Value*
  • P values based on 2 for categorical variables.

Profit status, No. (%)  <0.001
For profit60 (7.6)586 (18.9) 
Government289 (36.8)569 (18.3) 
Not for profit437 (55.6)1,950 (62.8) 
Census region, No. (%)  <0.001
Northeast72 (9.2)497 (16.0) 
Midwest345 (43.9)716 (23.1) 
South248 (31.6)1,291 (41.6) 
West121 (15.4)601 (19.4) 
Rural setting, No. (%)  <0.001
Rural495 (63.0)833 (26.8) 
Urban291 (37.0)2,272 (73.2) 
Bed size  <0.001
<100 beds603 (76.7)861 (27.7) 
100299 beds158 (20.1)1,444 (46.5) 
300+ beds25 (3.2)800 (25.8) 
Critical access hospital status, No. (%)  <0.001
Critical access hospital376 (47.8)164 (5.3) 
Acute care hospital410 (52.2)2,941 (94.7) 
Hospital Raw Performance in 2004 and 2008, Stratified by Joint Commission Accreditation Status
Quality Measure, Mean (SD)*20042008
Non‐AccreditedAccreditedP ValueNon‐AccreditedAccreditedP Value
(n = 786)(n = 3105)(n = 950)(n = 2,941)
  • Abbreviations: ACE, angiotensin‐converting enzyme; AMI, acute myocardial infarction; LV, left ventricular; PCI, percutaneous coronary intervention.

  • Calculated as the proportion of all eligible patients who received the indicated care.

  • P values based on t tests.

AMI      
Aspirin at admission87.1 (20.0)92.6 (9.4)<0.00188.6 (22.1)96.0 (8.6)<0.001
Aspirin at discharge81.2 (26.1)88.5 (14.9)<0.00187.8 (22.7)94.8 (10.1)<0.001
ACE inhibitor for LV dysfunction72.1 (33.4)76.7 (22.9)0.01083.2 (30.5)92.1 (14.8)<0.001
Beta blocker at discharge78.2 (27.9)87.0 (16.2)<0.00187.4 (23.4)95.5 (9.9)<0.001
Smoking cessation advice59.6 (40.8)74.5 (29.9)<0.00187.2 (29.5)97.2 (11.3)<0.001
PCI received within 90 min60.3 (26.2)60.6 (23.8)0.94670.1 (24.8)77.7 (19.2)0.006
Thrombolytic agent within 30 min27.9 (35.5)32.1 (32.8)0.15231.4 (40.7)43.7 (40.2)0.008
Composite AMI score80.6 (20.3)87.7 (10.4)<0.00185.8 (20.0)94.6 (8.1)<0.001
Heart failure      
Discharge instructions36.8 (32.3)49.7 (28.2)<0.00167.4 (29.6)82.3 (16.4)<0.001
Assessment of LV function63.3 (27.6)83.6 (14.9)<0.00179.6 (24.4)95.6 (8.1)<0.001
ACE inhibitor for LV dysfunction70.8 (27.6)75.7 (16.3)<0.00182.5 (22.7)91.5 (9.7)<0.001
Smoking cessation advice57.1 (36.4)68.6 (26.2)<0.00181.5 (29.9)96.1 (10.7)<0.001
Composite heart failure score56.3 (24.1)71.2 (15.6)<0.00175.4 (22.3)90.4 (9.4)<0.001
Pneumonia      
Oxygenation assessment97.4 (7.3)98.4 (4.0)<0.00199.0 (3.2)99.7 (1.2)<0.001
Pneumococcal vaccination45.5 (29.0)48.7 (26.2)0.00779.9 (21.3)87.9 (12.9)<0.001
Timing of initial antibiotic therapy80.6 (13.1)70.9 (14.0)<0.00193.4 (9.2)93.6 (6.1)0.525
Smoking cessation advice56.6 (33.1)65.7 (24.8)<0.00181.6 (25.1)94.4 (11.4)<0.001
Initial antibiotic selection73.6 (19.6)74.1 (13.4)0.50886.1 (13.8)88.6 (8.7)<0.001
Composite pneumonia score77.2 (10.2)76.6 (8.2)0.11990.0 (9.6)93.6 (4.9)<0.001
Overall composite73.7 (10.6)78.0 (8.7)<0.00186.8 (11.1)93.3 (5.0)<0.001

Five‐Year Changes in Hospital Performance

Between 2004 and 2008, Joint Commission‐accredited hospitals improved their performance more than did non‐accredited hospitals (Table 3). After adjustment for baseline characteristics previously shown to be associated with performance, the overall relative (absolute) difference in improvement was 26% (4.2%) (AMI score difference 67% [3.9%], CHF 48% [10.1%], and pneumonia 21% [3.7%]). Accredited hospitals improved their performance significantly more than non‐accredited for 13 of the 16 individual performance measures.

Performance Change and Difference in Performance Change From 2004 to 2008 by Joint Commission Accreditation Status
CharacteristicChange in Performance*Absolute Difference, Always vs Never (95% CI)Relative Difference, % Always vs NeverP Value
Never Accredited (n = 762)Always Accredited (n = 2,917)
  • Abbreviations: ACE angiotensin‐converting enzyme; AMI, acute myocardial infarction; CI, confidence interval; LV, left ventricular; PCI, percutaneous coronary intervention.

  • Performance calculated as the proportion of all eligible patients who received the indicated care. Change in performance estimated based on multivariate regression adjusting for baseline performance, profit status, bed size, rural setting, critical access hospital status, and region except for PCI received within 90 minutes and thrombolytic agent within 30 minutes which did not adjust for critical access hospital status.

  • P values and CIs calculated based on bootstrapped standard errors.

AMI     
Aspirin at admission1.12.03.2 (1.25.2)1600.001
Aspirin at discharge4.78.03.2 (1.45.1)400.008
ACE inhibitor for LV dysfunction8.515.97.4 (3.711.5)47<0.001
Beta blocker at discharge4.48.44.0 (2.06.0)48<0.001
Smoking cessation advice18.622.43.7 (1.16.9)170.012
PCI received within 90 min6.313.06.7 (0.314.2)520.070
Thrombolytic agent within 30 min0.65.46.1 (9.520.4)1130.421
Composite AMI score2.05.83.9 (2.25.5)67<0.001
Heart failure     
Discharge instructions24.235.611.4 (8.714.0)32<0.001
Assessment of LV function4.612.88.3 (6.610.0)65<0.001
ACE inhibitor for LV dysfunction10.115.25.1 (3.56.8)34<0.001
Smoking cessation advice20.526.46.0 (3.38.7)23<0.001
Composite heart failure score10.820.910.1 (8.312.0)48<0.001
Pneumonia     
Oxygenation assessment0.91.40.6 (0.30.9)43<0.001
Pneumococcal vaccination33.440.97.5 (5.69.4)18<0.001
Timing of initial antibiotic therapy19.221.11.9 (1.12.7)9<0.001
Smoking cessation advice21.827.96.0 (3.88.3)22<0.001
Initial antibiotic selection13.614.30.7 (0.51.9)50.293
Composite pneumonia score13.717.53.7 (2.84.6)21<0.001
Overall composite12.016.14.2 (3.25.1)26<0.001

High Performing Hospitals in 2008

The likelihood that a hospital was a high performer in 2008 was significantly associated with Joint Commission accreditation status, with a higher proportion of accredited hospitals reaching the 90% threshold compared to never‐accredited hospitals (Table 4). Accredited hospitals attained the 90% threshold significantly more often for 13 of the 16 performance measures and all four summary scores, compared to non‐accredited hospitals. In 2008, 82% of Joint Commission‐accredited hospitals demonstrated greater than 90% on the overall summary score, compared to 48% of never‐accredited hospitals. Even after adjusting for differences among hospitals, including performance at baseline, Joint Commission‐accredited hospitals were more likely than never‐accredited hospitals to exceed 90% performance in 2008 (84% vs 69%).

Percent of Hospitals With High Performance* in 2008 by Joint Commission Accreditation Status
CharacteristicPercent of Hospitals with Performance Over 90% Adjusted (Actual)Odds Ratio, Always vs Never (95% CI)P Value
Never Accredited (n = 762)Always Accredited (n = 2,917)
  • Abbreviations: ACE angiotensin‐converting enzyme; AMI, acute myocardial infarction; CI, confidence interval; LV, left ventricular; PCI, percutaneous coronary intervention.

  • High performance defined as performance rates of 90% or more.

  • Performance calculated as the proportion of all eligible patients who received the indicated care. Percent of hospitals with performance over 90% estimated based on multivariate logistic regression adjusting for baseline performance, profit status, bed size, rural setting, critical access hospital status, and region except for PCI received within 90 minutes and thrombolytic agent within 30 minutes which did not adjust for critical access hospital status. Odds ratios, CIs, and P values based on the logistic regression analysis.

AMI    
Aspirin at admission91.8 (71.8)93.9 (90.7)1.38 (1.001.89)0.049
Aspirin at discharge83.7 (69.2)88.2 (85.1)1.45 (1.081.94)0.013
ACE inhibitor for LV dysfunction65.1 (65.8)77.2 (76.5)1.81 (1.322.50)<0.001
Beta blocker at discharge84.7 (69.4)90.9 (88.4)1.80 (1.332.44)<0.001
Smoking cessation advice91.1 (81.3)95.9 (94.1)2.29 (1.314.01)0.004
PCI received within 90 min21.5 (16.2)29.9 (29.8)1.56 (0.713.40)0.265
Thrombolytic agent within 30 min21.4 (21.3)22.7 (23.6)1.08 (0.422.74)0.879
Composite AMI score80.5 (56.6)88.2 (85.9)1.82 (1.372.41)<0.001
Heart failure    
Discharge instructions27.0 (26.3)38.9 (39.3)1.72 (1.302.27)<0.001
Assessment of LV function76.2 (45.0)89.1 (88.8)2.54 (1.953.31)<0.001
ACE inhibitor for LV dysfunction58.0 (51.4)67.8 (68.5)1.52 (1.211.92)<0.001
Smoking cessation advice84.2 (62.3)90.3 (89.2)1.76 (1.282.43)<0.001
Composite heart failure score38.2 (27.6)61.5 (64.6)2.57 (2.033.26)<0.001
Pneumonia    
Oxygenation assessment100 (98.2)100 (99.8)4.38 (1.201.32)0.025
Pneumococcal vaccination44.1 (40.3)57.3 (58.2)1.70 (1.362.12)<0.001
Timing of initial antibiotic therapy74.3 (79.1)84.2 (82.7)1.85 (1.402.46)<0.001
Smoking cessation advice76.2 (54.6)85.8 (84.2)1.89 (1.422.51)<0.001
Initial antibiotic selection51.8 (47.4)51.0 (51.8)0.97 (0.761.25)0.826
Composite pneumonia score69.3 (59.4)85.3 (83.9)2.58 (2.013.31)<0.001
Overall composite69.0 (47.5)83.8 (82.0)2.32 (1.763.06)<0.001

DISCUSSION

While accreditation has face validity and is desired by key stakeholders, it is expensive and time consuming. Stakeholders thus are justified in seeking evidence that accreditation is associated with better quality and safety. Ideally, not only would it be associated with better performance at a single point in time, it would also be associated with the pace of improvement over time.

Our study is the first, to our knowledge, to show the association of accreditation status with improvement in the trajectory of performance over a five‐year period. Taking advantage of the fact that the accreditation process changed substantially at about the same time that TJC and CMS began requiring public reporting of evidence‐based quality measures, we found that hospitals accredited by The Joint Commission had had larger improvements in hospital performance from 2004 to 2008 than non‐accredited hospitals, even though the former started with higher baseline performance levels. This accelerated improvement was broad‐based: Accredited hospitals were more likely to achieve superior performance (greater than 90% adherence to quality measures) in 2008 on 13 of 16 nationally standardized quality‐of‐care measures, three clinical area summary scores, and an overall score compared to hospitals that were not accredited. These results are consistent with other studies that have looked at both process and outcome measures and accreditation.912

It is important to note that the observed accreditation effect reflects a difference between hospitals that have elected to seek one particular self-regulatory alternative to the more restrictive and extensive public regulatory or licensure requirements and those that have not.39 The non-accredited hospitals included in this study are not sub-standard hospitals. In fact, hospitals not accredited by The Joint Commission have also met the standards set by Medicare in the Conditions of Participation, and our study demonstrates that these hospitals achieved reasonably strong performance on publicly reported quality measures (86.8% adherence on the composite measure in 2008) and considerable improvement over the 5 years of public reporting (an average improvement on the composite measure of 11.8% from 2004 to 2008). Moreover, there are many paths to improvement, and some non-accredited hospitals achieve stellar performance on quality measures, perhaps by embracing other methods to catalyze improvement.

That said, our data demonstrate that, on average, accredited hospitals achieve superior performance on these evidence-based quality measures, and their performance improved more strikingly over time. In interpreting these results, it is important to recognize that, while Joint Commission-accredited hospitals must report quality data, performance on these measures is not directly factored into the accreditation decision; were it otherwise, one could argue that the association is a statistical tautology. As it is, we believe that accreditation and the publicly reported quality measures are two independent assessments of the quality of an organization, and, while the performance measures may not be a gold standard, a measure of their association does provide useful information about the degree to which accreditation is linked to organizational quality.

There are several potential limitations of the current study. First, while we adjusted for most of the known hospital demographic and organizational factors associated with performance, there may be unidentified factors that are associated with both accreditation and performance. This may not be relevant to a patient or payer choosing a hospital based on accreditation status (who may not care whether accreditation is simply associated with higher quality or actually helps produce such quality), but it is relevant to policy‐makers, who may weigh the value of embracing accreditation versus other maneuvers (such as pay for performance or new educational requirements) as a vehicle to promote high‐quality care.

A second limitation is that the specification of the measures can change over time as new clinical knowledge is acquired, which makes longitudinal comparison and tracking of results difficult. Two measures underwent definitional changes that had a noticeable impact on longitudinal trends: the AMI measure Primary Percutaneous Coronary Intervention (PCI) Received Within 90 Minutes of Hospital Arrival (which in 2004 and 2005 used 120 minutes as the threshold), and the pneumonia measure Antibiotic Within 4 Hours of Arrival (for which the threshold was changed to 6 hours in 2007). Other changes included adding angiotensin-receptor blocker (ARB) therapy in 2005 as an alternative to angiotensin-converting enzyme inhibitor (ACEI) therapy in the AMI and heart failure measures ACEI or ARB for Left Ventricular Dysfunction. Less significant changes have also been made to the data collection methods for other measures, which could affect the interpretation of changes in performance over time. That said, these changes applied equally to accredited and non-accredited hospitals, and we cannot think of reasons that they would have created differential impacts.

Another limitation is that the 16 process measures provide a limited picture of hospital performance. Although the three conditions in the study account for over 15% of Medicare admissions,19 it is possible that non‐accredited hospitals performed as well as accredited hospitals on other measures of quality that were not captured by the 16 measures. As more standardized measures are added to The Joint Commission and CMS databases, it will be possible to use the same study methodology to incorporate these additional domains.

From the original cohort of 4798 hospitals reporting in 2004 or 2008, 19% were not included in the study due to missing data in either year. Almost two-thirds of the excluded hospitals were missing 2004 data and, of these, 77% were critical access hospitals. The majority of these critical access hospitals (97%) were non-accredited. By contrast, only 13% of the hospitals missing 2008 data were critical access. Since reporting of data to Hospital Compare was voluntary in 2004, it appears that critical access hospitals began reporting data to Hospital Compare later than acute care hospitals did. Since critical access hospitals tended to have lower rates and smaller sample sizes and were predominantly non-accredited, the results of the study would be expected to slightly underestimate the difference between accredited and non-accredited hospitals.
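The attrition percentages above can be turned into approximate counts; the figures below are a back-of-the-envelope reconstruction, since the text reports only proportions, not raw counts:

```python
# Approximate cohort attrition implied by the reported percentages
# (illustrative reconstruction; exact counts are not given in the text)
total = 4798
excluded = round(0.19 * total)          # ~19% not included
missing_2004 = round(excluded * 2 / 3)  # almost two-thirds lacked 2004 data
cah_2004 = round(missing_2004 * 0.77)   # 77% of those were critical access
nonacc_cah = round(cah_2004 * 0.97)     # 97% of the critical access were non-accredited
print(excluded, missing_2004, cah_2004, nonacc_cah)
```

So roughly 450 of the excluded hospitals were non-accredited critical access hospitals missing 2004 data, which is the source of the underestimation bias described above.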

Finally, while we have argued that the publicly reported quality measures and TJC accreditation decisions provide different lenses into the quality of a given hospital, we cannot entirely exclude the possibility that there are subtle relationships between these two methods that might be partly responsible for our findings. For example, while performance measure rates do not factor directly into the accreditation decision, it is possible that Joint Commission surveyors may be influenced by their knowledge of these rates and biased in their scoring of unrelated standards during the survey process. While we cannot rule out such biases, we are aware of no research on the subject and have no reason to believe that they confounded the analysis.

In summary, we found that Joint Commission-accredited hospitals outperformed non-accredited hospitals on nationally standardized quality measures of AMI, heart failure, and pneumonia care, and that the performance gap between accredited and non-accredited hospitals widened over the five years of the study. Future studies should incorporate more robust and varied measures of quality as outcomes and seek to examine the nature of the observed relationship (ie, whether accreditation is simply a marker of higher quality and more rapid improvement, or whether the accreditation process actually helps create these salutary outcomes).

Acknowledgements

The authors thank Barbara Braun, PhD and Nicole Wineman, MPH, MBA for their literature review on the impact of accreditation, and Barbara Braun, PhD for her thoughtful review of the manuscript.

References
  1. The Joint Commission. Facts About Hospital Accreditation. Available at: http://www.jointcommission.org/assets/1/18/Hospital_Accreditation_1_31_11.pdf. Accessed February 16, 2011.
  2. Niska RW, Burt CW. Emergency Response Planning in Hospitals, United States: 2003-2004. Advance Data from Vital and Health Statistics; No. 391. Hyattsville, MD: National Center for Health Statistics; 2007.
  3. Niska RW, Burt CW. Training for Terrorism-Related Conditions in Hospitals, United States: 2003-2004. Advance Data from Vital and Health Statistics; No. 380. Hyattsville, MD: National Center for Health Statistics; 2006.
  4. Longo DR, Hewett JE, Ge B, Shubert S. Hospital patient safety: characteristics of best-performing hospitals. J Healthcare Manag. 2007;52(3):188-205.
  5. Devers KJ, Pham HH, Liu G. What is driving hospitals' patient-safety efforts? Health Aff. 2004;23(2):103-115.
  6. DeBritz JN, Pollak AN. The impact of trauma centre accreditation on patient outcome. Injury. 2006;37(12):1166-1171.
  7. Lemak CH, Alexander JA. Factors that influence staffing of outpatient substance abuse treatment programs. Psychiatr Serv. 2005;56(8):934-939.
  8. D'Aunno T, Pollack HA. Changes in methadone treatment practices. Results from a national panel study, 1988-2000. JAMA. 2002;288:850-856.
  9. Landon BE, Normand ST, Lesser A, et al. Quality of care for the treatment of acute medical conditions in US hospitals. Arch Intern Med. 2006;166:2511-2517.
  10. Chen J, Rathore S, Radford M, Krumholz H. JCAHO accreditation and quality of care for acute myocardial infarction. Health Aff. 2003;22(2):243-254.
  11. Morlock L, Pronovost P, Engineer L, et al. Is JCAHO Accreditation Associated with Better Patient Outcomes in Rural Hospitals? Academy Health Annual Meeting; Boston, MA; June 2005.
  12. Joshi MS. Hospital quality of care: the link between accreditation and mortality. J Clin Outcomes Manag. 2003;10(9):473-480.
  13. Griffith JR, Knutzen SR, Alexander JA. Structural versus outcome measures in hospitals: a comparison of Joint Commission and Medicare outcome scores in hospitals. Qual Manag Health Care. 2002;10(2):29-38.
  14. Barker KN, Flynn EA, Pepper GA, Bates D, Mikeal RL. Medication errors observed in 36 health care facilities. Arch Intern Med. 2002;162:1897-1903.
  15. Menachemi N, Chukmaitov A, Brown LS, Saunders C, Brooks RG. Quality of care in accredited and non-accredited ambulatory surgical centers. Jt Comm J Qual Patient Saf. 2008;34(9):546-551.
  16. Joint Commission on Accreditation of Healthcare Organizations. Specification Manual for National Hospital Quality Measures 2009. Available at: http://www.jointcommission.org/PerformanceMeasurement/PerformanceMeasurement/Current+NHQM+Manual.htm. Accessed May 21, 2009.
  17. Hospital Quality Alliance Homepage. Available at: http://www.hospitalqualityalliance.org/hospitalqualityalliance/index.html. Accessed May 6, 2010.
  18. Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med. 2005;353(3):255-264.
  19. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals—the Hospital Quality Alliance Program. N Engl J Med. 2005;353:265-274.
  20. Institute of Medicine, Committee on Quality Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: The National Academy Press; 2001.
  21. Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff. 2003;22(2):84-94.
  22. Williams SC, Morton DJ, Koss RG, Loeb JM. Performance of top ranked heart care hospitals on evidence-based process measures. Circulation. 2006;114:558-564.
  23. The Joint Commission Performance Measure Initiatives Homepage. Available at: http://www.jointcommission.org/PerformanceMeasurement/PerformanceMeasurement/default.htm. Accessed July 27, 2010.
  24. Palmer RH. Using health outcomes data to compare plans, networks and providers. Int J Qual Health Care. 1998;10(6):477-483.
  25. Mant J. Process versus outcome indicators in the assessment of quality of health care. Int J Qual Health Care. 2001;13:475-480.
  26. Chassin MR. Does paying for performance improve the quality of health care? Med Care Res Rev. 2006;63(1):122S-125S.
  27. Kfoury AG, French TK, Horne BD, et al. Incremental survival benefit with adherence to standardized heart failure core measures: a performance evaluation study of 2958 patients. J Card Fail. 2008;14(2):95-102.
  28. Jha AK, Orav EJ, Li Z, Epstein AM. The inverse relationship between mortality rates and performance in the Hospital Quality Alliance measures. Health Aff. 2007;26(4):1104-1110.
  29. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296(1):72-78.
  30. Williams SC, Watt A, Schmaltz SP, Koss RG, Loeb JM. Assessing the reliability of standardized performance measures. Int J Qual Health Care. 2006;18:246-255.
  31. Centers for Medicare and Medicaid Services (CMS). CMS HQI Demonstration Project-Composite Quality Score Methodology Overview. Available at: http://www.cms.hhs.gov/HospitalQualityInits/downloads/HospitalCompositeQualityScoreMethodologyOverview.pdf. Accessed March 8, 2010.
  32. Normand SLT, Wolf RE, Ayanian JZ, McNeil BJ. Assessing the accuracy of hospital performance measures. Med Decis Making. 2007;27:9-20.
  33. Quality Check Data Download Website. Available at: http://www.healthcarequalitydata.org. Accessed May 21, 2009.
  34. Hartz AJ, Krakauer H, Kuhn EM. Hospital characteristics and mortality rates. N Engl J Med. 1989;321(25):1720-1725.
  35. Goldman LE, Dudley RA. United States rural hospital quality in the Hospital Compare Database—accounting for hospital characteristics. Health Policy. 2008;87:112-127.
  36. Lehrman WG, Elliott MN, Goldstein E, Beckett MK, Klein DJ, Giordano LA. Characteristics of hospitals demonstrating superior performance in patient experience and clinical process measures of care. Med Care Res Rev. 2010;67(1):38-55.
  37. Werner RM, Goldman LE, Dudley RA. Comparison of change in quality of care between safety-net and non-safety-net hospitals. JAMA. 2008;299(18):2180-2187.
  38. Davison AC, Hinkley DV. Bootstrap Methods and Their Application. New York: Cambridge University Press; 1997: chap 6.
  39. Pawlson LF, Torda P, Roski J, O'Kane ME. The role of accreditation in an era of market-driven accountability. Am J Manag Care. 2005;11(5):290-293.
Issue
Journal of Hospital Medicine - 6(8)
Page Number
454-461
Copyright © 2011 Society of Hospital Medicine
Correspondence Location
The Joint Commission, One Renaissance Blvd., Oakbrook Terrace, IL 60181