Resource Utilization and Satisfaction
The patient experience has become increasingly important to healthcare in the United States. It is now a metric used commonly to determine physician compensation and accounts for nearly 30% of the Centers for Medicare and Medicaid Services' (CMS) Value‐Based Purchasing (VBP) reimbursement for fiscal years 2015 and 2016.[1, 2]
In April 2015, CMS added a 5‐star patient experience score to its Hospital Compare website in an attempt to address the Affordable Care Act's call for transparent and easily understandable public reporting.[3] A hospital's principal score is the Summary Star Rating, which is based on responses to the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. The formulas used to calculate Summary Star Ratings have been reported by CMS.[4]
Studies published over the past decade suggest that gender, age, education level, length of hospital stay, travel distance, and other factors may influence patient satisfaction.[5, 6, 7, 8] One study utilizing a national dataset suggested that higher patient satisfaction was associated with greater inpatient healthcare utilization and higher healthcare expenditures.[9] It is therefore possible that emphasizing patient experience scores could adversely impact healthcare resource utilization. However, positive patient experience may also be an important independent dimension of quality for patients and correlate with improved clinical outcomes.[10]
We know of no literature describing patient factors associated with the Summary Star Rating. Given that this rating is now used as a standard metric by which patient experience can be compared across more than 3,500 hospitals,[11] data describing the association between patient‐level factors and the Summary Star Rating may provide hospitals with an opportunity to target improvement efforts. We aimed to determine the degree to which resource utilization is associated with a satisfaction score based on the Summary Star Rating methodology.
METHODS
The study was conducted at the University of Rochester Medical Center (URMC), an 830‐bed tertiary care center in upstate New York. This was a retrospective review of all HCAHPS surveys returned to URMC over a 27‐month period from January 1, 2012 to April 1, 2014. URMC follows the standard CMS process for determining which patients receive surveys as follows. During the study timeframe, HCAHPS surveys were mailed to patients 18 years of age and older who had an inpatient stay spanning at least 1 midnight. Surveys were mailed within 5 days of discharge, and were generally returned within 6 weeks. URMC did not utilize telephone or email surveys during the study period. Surveys were not sent to patients who (1) were transferred to another facility, (2) were discharged to hospice, (3) died during the hospitalization, (4) received psychiatric or rehabilitative services during the hospitalization, (5) had an international address, and/or (6) were prisoners.
The survey vendor (Press Ganey, South Bend, IN) for URMC provided raw data for returned surveys with patient answers to questions. Administrative and billing databases were used to add demographic and clinical data for the corresponding hospitalization to the dataset. These data included age, gender, payer status (public, private, self, charity), length of stay, number of attendings who saw the patient (based on encounters documented in the electronic medical record (EMR)), all discharge International Classification of Diseases, 9th Revision (ICD‐9) diagnoses for the hospitalization, total charges for the hospitalization, and intensive care unit (ICU) utilization as evidenced by a documented encounter with a member of the Division of Critical Care/Pulmonary Medicine.
CMS analyzes surveys within 1 of 3 clinical service categories (medical, surgical, or obstetrics/gynecology) based on the discharging service. To parallel this approach, each returned survey was placed into 1 of these categories based on the clinical service of the discharging physician. Patients in the obstetrics/gynecology category (n = 1,317, 13%) were excluded and will be analyzed separately, given inherent differences in patient characteristics that require evaluation of other variables.
Approximations of CMS Summary Star Rating
The HCAHPS survey is a multiple‐choice questionnaire that includes several domains of patient satisfaction. Respondents are asked to rate areas of satisfaction with their hospital experience on a Likert scale. CMS uses a weighted average of Likert responses to a subset of HCAHPS questions to calculate a hospital's raw score in 11 domains, as well as an overall raw summary score. CMS then adjusts each raw score for differences between hospitals (eg, clustering, improvement over time, method of survey) to determine a hospital's star rating in each domain and an overall Summary Star Rating (the Summary Star Rating is the primary factor by which consumers can compare hospitals).[4] Because our data were from a single hospital system, the between‐hospital scoring adjustments utilized by CMS were not applicable. Instead, we calculated the raw scores exactly as CMS does prior to the adjustments. Thus, our scores reflect the scores that CMS would have given URMC during the study period prior to standardized adjustments; we refer to this as the raw satisfaction rating (RSR). We calculated an RSR for every eligible survey. The RSR was calculated as a continuous variable from 0 (lowest) to 1 (highest). Detailed explanation of our RSR calculation is available in the Supporting Information in the online version of this article.
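As a simplified sketch of this kind of calculation (the exact CMS-based formula used in the study is in the Supporting Information), a raw score can be computed by rescaling each Likert response to [0, 1] and taking a weighted average across domains. The domain names and weights below are hypothetical illustrations, not the CMS values:

```python
# Simplified sketch of a raw satisfaction rating (RSR): rescale each
# Likert response onto [0, 1], then take a weighted average across domains.
# Domain names and weights are hypothetical, not the CMS values.

def rescale(response: int, n_levels: int = 4) -> float:
    """Map a 1..n_levels Likert response onto the [0, 1] interval."""
    return (response - 1) / (n_levels - 1)

def raw_satisfaction_rating(responses: dict, weights: dict) -> float:
    """Weighted average of rescaled responses; 0 = lowest, 1 = highest."""
    total_weight = sum(weights[d] for d in responses)
    return sum(weights[d] * rescale(r) for d, r in responses.items()) / total_weight

weights = {"nurse_comm": 1.0, "doctor_comm": 1.0, "cleanliness": 0.5}
responses = {"nurse_comm": 4, "doctor_comm": 3, "cleanliness": 4}
rsr = raw_satisfaction_rating(responses, weights)  # a value between 0 and 1
```

The continuous 0-to-1 scale matches how the RSR is treated in the analyses that follow.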
Statistical Analysis
All analyses were performed in aggregate and by service (medical vs surgical). Categorical variables were summarized using frequencies with percentages. Comparisons across levels of categorical variables were performed with the χ² test. We report bivariate associations between the independent variables and RSRs in the top decile using unadjusted odds ratios (ORs) with 95% confidence intervals (CIs). Similarly, multivariable logistic regression was used for adjusted analyses. For the variables of severity of illness and resource intensity, the groups with the lowest illness severity and lowest resource use served as the reference groups. We modeled patients without an ICU encounter and with an ICU encounter separately.
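The unadjusted ORs and CIs can be reproduced directly from the 2 × 2 counts. A minimal sketch using the Wald interval, applied to the ICU-encounter counts reported in Table 1:

```python
import math

def unadjusted_or(a, b, c, d):
    """Odds ratio with a Wald 95% CI from a 2x2 table of counts:
    a, b = top-decile / below-90th counts in the index group,
    c, d = top-decile / below-90th counts in the reference group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Counts from Table 1, ICU encounter: "No" group 681 top-decile / 6,441 below;
# "Yes" (reference) group 219 top-decile / 1,348 below.
or_, lo, hi = unadjusted_or(681, 6441, 219, 1348)
# round(or_, 2), round(lo, 2), round(hi, 2) -> 0.65, 0.55, 0.77 (matches Table 2)
```

The adjusted analyses fit the same outcome with multivariable logistic regression rather than one variable at a time.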
Charges, number of unique attendings encountered, and length of stay were highly correlated and likely represent measures of the same underlying construct of resource intensity; they therefore could not be entered into our models simultaneously. We combined these into a resource intensity score using factor analysis with a varimax rotation, and extracted factor scores for a single factor (supported by a scree plot). We then placed patients into 4 groups based on the distribution of the factor scores: low (<25th percentile), moderate (25th–50th percentile), major (50th–75th percentile), and extreme (>75th percentile).
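The grouping step can be sketched as follows. Here the first principal component of the standardized variables stands in for the single extracted factor (with only one factor retained, a varimax rotation leaves the solution essentially unchanged, but this is a simplification of the paper's method), and the input values are illustrative:

```python
import numpy as np

def resource_intensity_groups(charges, n_attendings, los):
    """Collapse three correlated utilization measures into one score and
    bin patients into quartile groups. The first principal component of
    the standardized variables stands in for the extracted factor."""
    X = np.column_stack([charges, n_attendings, los]).astype(float)
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize columns
    _, _, vt = np.linalg.svd(Xz, full_matrices=False)   # PCA via SVD
    scores = Xz @ vt[0]                                 # factor-like score
    q1, q2, q3 = np.quantile(scores, [0.25, 0.5, 0.75])
    labels = np.array(["low", "moderate", "major", "extreme"])
    return labels[np.digitize(scores, [q1, q2, q3])]

groups = resource_intensity_groups(
    charges=[1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000],
    n_attendings=[1, 1, 2, 2, 3, 3, 4, 4],
    los=[1, 2, 2, 3, 3, 4, 4, 5],
)  # eight illustrative patients split into four quartile groups of two
```

Because the three inputs are positively correlated, a single component captures most of their shared variance, which is what the scree plot supported in the actual analysis.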
We used the Charlson‐Deyo comorbidity score as our disease severity index.[12] The index assigns points to each ICD‐9 diagnosis according to its impact on morbidity and sums the points to an overall score. This provides a measure of disease severity for a patient based on the number of diagnoses and the relative mortality of the individual diagnoses. Scores were categorized as 0 (representing no major illness burden), 1 to 3, 4 to 6, and >6.
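The scoring and binning logic can be sketched as below. The weight table shown is a small hypothetical subset keyed by ICD‐9 code prefix; the full Deyo mapping covers 17 comorbidity categories with weights from 1 to 6:

```python
# Hypothetical subset of Charlson-Deyo weights keyed by ICD-9 prefix.
# The full mapping (Deyo adaptation, 1992) covers 17 comorbidity categories.
CHARLSON_WEIGHTS = {
    "410": 1,   # myocardial infarction
    "428": 1,   # congestive heart failure
    "571": 3,   # moderate/severe liver disease (illustrative)
    "197": 6,   # metastatic solid tumor
}

def charlson_category(icd9_codes):
    """Sum weights over matched diagnoses, then bin as in the study."""
    score = sum(w for code in icd9_codes
                for prefix, w in CHARLSON_WEIGHTS.items()
                if code.startswith(prefix))
    if score == 0:
        return "0"
    if score <= 3:
        return "1-3"
    if score <= 6:
        return "4-6"
    return ">6"
```

Each returned survey's discharge diagnoses would be run through a mapping like this to place the patient in one of the four severity categories used in the models.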
All statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC), and P values <0.05 were considered statistically significant. This study was approved by the institutional review board at the University of Rochester Medical Center.
RESULTS
Our initial search identified 10,007 returned surveys (29% of eligible patients returned surveys during the study period). Of these, 5,059 (51%) were categorized as medical, 3,630 (36%) as surgical, and 1,317 (13%) as obstetrics/gynecology. One survey did not have the service of the discharging physician recorded and was excluded. Cohort demographics and their relationship to RSRs in the top decile for the 8,689 medical and surgical patients can be found in Table 1. The most common discharge diagnosis‐related groups (DRGs) for medical patients were 247, percutaneous cardiovascular procedure with drug‐eluting stent without major complications or comorbidities (MCC) (3.8%); 871, septicemia or severe sepsis without mechanical ventilation >96 hours with MCC (2.7%); and 392, esophagitis, gastroenteritis, and miscellaneous digestive disorders with MCC (2.3%). The most common DRGs for surgical patients were 460, spinal fusion except cervical without MCC (3.5%); 328, stomach, esophageal, and duodenal procedure without complication or comorbidities or MCC (3.3%); and 491, back and neck procedure excluding spinal fusion without complication or comorbidities or MCC (3.1%).
Table 1. Cohort demographics and relationship to top-decile RSRs.

| | Overall: Total | Overall: <90th | Overall: Top Decile | Overall: P | Medical: Total | Medical: <90th | Medical: Top Decile | Medical: P | Surgical: Total | Surgical: <90th | Surgical: Top Decile | Surgical: P |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Overall | 8,689 | 7,789 (90) | 900 (10) | | 5,059 | 4,646 (92) | 413 (8) | | 3,630 | 3,143 (87) | 487 (13) | |
| Age, y | | | | | | | | | | | | |
| <30 | 419 (5) | 371 (89) | 48 (12) | <0.001 | 218 (4) | 208 (95) | 10 (5) | <0.001 | 201 (6) | 163 (81) | 38 (19) | <0.001 |
| 30–49 | 1,029 (12) | 902 (88) | 127 (12) | | 533 (11) | 482 (90) | 51 (10) | | 496 (14) | 420 (85) | 76 (15) | |
| 50–69 | 3,911 (45) | 3,450 (88) | 461 (12) | | 2,136 (42) | 1,930 (90) | 206 (10) | | 1,775 (49) | 1,520 (86) | 255 (14) | |
| >69 | 3,330 (38) | 3,066 (92) | 264 (8) | | 2,172 (43) | 2,026 (93) | 146 (7) | | 1,158 (32) | 1,040 (90) | 118 (10) | |
| Gender | | | | | | | | | | | | |
| Male | 4,640 (53) | 4,142 (89) | 498 (11) | 0.220 | 2,596 (51) | 2,379 (92) | 217 (8) | 0.602 | 2,044 (56) | 1,763 (86) | 281 (14) | 0.506 |
| Female | 4,049 (47) | 3,647 (90) | 402 (10) | | 2,463 (49) | 2,267 (92) | 196 (8) | | 1,586 (44) | 1,380 (87) | 206 (13) | |
| ICU encounter | | | | | | | | | | | | |
| No | 7,122 (82) | 6,441 (90) | 681 (10) | <0.001 | 4,547 (90) | 4,193 (92) | 354 (8) | <0.001 | 2,575 (71) | 2,248 (87) | 327 (13) | 0.048 |
| Yes | 1,567 (18) | 1,348 (86) | 219 (14) | | 512 (10) | 453 (89) | 59 (12) | | 1,055 (29) | 895 (85) | 160 (15) | |
| Payer | | | | | | | | | | | | |
| Public | 5,564 (64) | 5,036 (91) | 528 (10) | <0.001 | 3,424 (68) | 3,161 (92) | 263 (8) | 0.163 | 2,140 (59) | 1,875 (88) | 265 (12) | 0.148 |
| Private | 3,064 (35) | 2,702 (88) | 362 (12) | | 1,603 (32) | 1,458 (91) | 145 (9) | | 1,461 (40) | 1,244 (85) | 217 (15) | |
| Charity | 45 (1) | 37 (82) | 8 (18) | | 25 (1) | 21 (84) | 4 (16) | | 20 (1) | 16 (80) | 4 (20) | |
| Self | 16 (0) | 14 (88) | 2 (13) | | 7 (0) | 6 (86) | 1 (14) | | 9 (0) | 8 (89) | 1 (11) | |
| Length of stay, d | | | | | | | | | | | | |
| <3 | 3,156 (36) | 2,930 (93) | 226 (7) | <0.001 | 1,961 (39) | 1,865 (95) | 96 (5) | <0.001 | 1,195 (33) | 1,065 (89) | 130 (11) | <0.001 |
| 3–6 | 3,330 (38) | 2,959 (89) | 371 (11) | | 1,867 (37) | 1,702 (91) | 165 (9) | | 1,463 (40) | 1,257 (86) | 206 (14) | |
| >6 | 2,203 (25) | 1,900 (86) | 303 (14) | | 1,231 (24) | 1,079 (88) | 152 (12) | | 972 (27) | 821 (85) | 151 (16) | |
| No. of attendings | | | | | | | | | | | | |
| <4 | 3,959 (46) | 3,615 (91) | 344 (9) | <0.001 | 2,307 (46) | 2,160 (94) | 147 (6) | <0.001 | 1,652 (46) | 1,455 (88) | 197 (12) | 0.052 |
| 4–6 | 3,067 (35) | 2,711 (88) | 356 (12) | | 1,836 (36) | 1,663 (91) | 173 (9) | | 1,231 (34) | 1,048 (85) | 183 (15) | |
| >6 | 1,663 (19) | 1,463 (88) | 200 (12) | | 916 (18) | 823 (90) | 93 (10) | | 747 (21) | 640 (86) | 107 (14) | |
| Severity index* | | | | | | | | | | | | |
| 0 (lowest) | 2,812 (32) | 2,505 (89) | 307 (11) | 0.272 | 1,273 (25) | 1,185 (93) | 88 (7) | 0.045 | 1,539 (42) | 1,320 (86) | 219 (14) | 0.261 |
| 1–3 | 4,253 (49) | 3,827 (90) | 426 (10) | | 2,604 (52) | 2,395 (92) | 209 (8) | | 1,649 (45) | 1,432 (87) | 217 (13) | |
| 4–6 | 1,163 (13) | 1,052 (91) | 111 (10) | | 849 (17) | 770 (91) | 79 (9) | | 314 (9) | 282 (90) | 32 (10) | |
| >6 (highest) | 461 (5) | 405 (88) | 56 (12) | | 333 (7) | 296 (89) | 37 (11) | | 128 (4) | 109 (85) | 19 (15) | |
| Charges | | | | | | | | | | | | |
| Low | 1,820 (21) | 1,707 (94) | 113 (6) | <0.001 | 1,426 (28) | 1,357 (95) | 69 (5) | <0.001 | 394 (11) | 350 (89) | 44 (11) | 0.007 |
| Medium | 5,094 (59) | 4,581 (90) | 513 (10) | | 2,807 (56) | 2,582 (92) | 225 (8) | | 2,287 (63) | 1,999 (87) | 288 (13) | |
| High | 1,775 (20) | 1,501 (85) | 274 (15) | | 826 (16) | 707 (86) | 119 (14) | | 949 (26) | 794 (84) | 155 (16) | |

*Charlson‐Deyo comorbidity index.
Unadjusted analysis of medical and surgical patients identified significant associations of several variables with a top‐decile RSR (Table 2). Patients with longer lengths of stay (OR: 2.07, 95% CI: 1.72–2.48), more attendings (OR: 1.44, 95% CI: 1.19–1.73), and higher hospital charges (OR: 2.76, 95% CI: 2.19–3.47) were more likely to report an RSR in the top decile. Patients without an ICU encounter (OR: 0.65, 95% CI: 0.55–0.77) and on a medical service (OR: 0.57, 95% CI: 0.50–0.66) were less likely to report an RSR in the top decile. Several associations were identified in only the medical or surgical cohorts. In the medical cohort, patients with the highest illness severity index (OR: 1.68, 95% CI: 1.12–2.52) and with more than 6 attending physicians (OR: 1.66, 95% CI: 1.27–2.18) were more likely to report RSRs in the top decile. In the surgical cohort, patients <30 years of age (OR: 2.05, 95% CI: 1.38–3.07) were more likely to report an RSR in the top decile than patients >69 years of age. Insurance payer category and gender were not significantly associated with top‐decile RSRs.
Table 2. Unadjusted associations with top-decile RSRs.

| | Overall: OR (95% CI) | Overall: P | Medical: OR (95% CI) | Medical: P | Surgical: OR (95% CI) | Surgical: P |
|---|---|---|---|---|---|---|
| Age, y | | | | | | |
| <30 | 1.5 (1.08–2.08) | 0.014 | 0.67 (0.35–1.29) | 0.227 | 2.05 (1.38–3.07) | <0.001 |
| 30–49 | 1.64 (1.31–2.05) | <0.001 | 1.47 (1.05–2.05) | 0.024 | 1.59 (1.17–2.17) | 0.003 |
| 50–69 | 1.55 (1.32–1.82) | <0.001 | 1.48 (1.19–1.85) | 0.001 | 1.48 (1.17–1.86) | 0.001 |
| >69 | Ref | | Ref | | Ref | |
| Gender | | | | | | |
| Male | 1.09 (0.95–1.25) | 0.220 | 1.06 (0.86–1.29) | 0.602 | 1.07 (0.88–1.3) | 0.506 |
| Female | Ref | | Ref | | Ref | |
| ICU encounter | | | | | | |
| No | 0.65 (0.55–0.77) | <0.001 | 0.65 (0.48–0.87) | 0.004 | 0.81 (0.66–1) | 0.048 |
| Yes | Ref | | Ref | | Ref | |
| Payer | | | | | | |
| Public | 0.73 (0.17–3.24) | 0.683 | 0.5 (0.06–4.16) | 0.521 | 1.13 (0.14–9.08) | 0.908 |
| Private | 0.94 (0.21–4.14) | 0.933 | 0.6 (0.07–4.99) | 0.634 | 1.4 (0.17–11.21) | 0.754 |
| Charity | 1.51 (0.29–8.02) | 0.626 | 1.14 (0.11–12.25) | 0.912 | 2 (0.19–20.97) | 0.563 |
| Self | Ref | | Ref | | Ref | |
| Length of stay, d | | | | | | |
| <3 | Ref | | Ref | | Ref | |
| 3–6 | 1.63 (1.37–1.93) | <0.001 | 1.88 (1.45–2.44) | <0.001 | 1.34 (1.06–1.7) | 0.014 |
| >6 | 2.07 (1.72–2.48) | <0.001 | 2.74 (2.1–3.57) | <0.001 | 1.51 (1.17–1.94) | 0.001 |
| No. of attendings | | | | | | |
| <4 | Ref | | Ref | | Ref | |
| 4–6 | 1.38 (1.18–1.61) | <0.001 | 1.53 (1.22–1.92) | <0.001 | 1.29 (1.04–1.6) | 0.021 |
| >6 | 1.44 (1.19–1.73) | <0.001 | 1.66 (1.27–2.18) | <0.001 | 1.23 (0.96–1.59) | 0.102 |
| Severity index* | | | | | | |
| 0 (lowest) | Ref | | Ref | | Ref | |
| 1–3 | 0.91 (0.78–1.06) | 0.224 | 1.18 (0.91–1.52) | 0.221 | 0.91 (0.75–1.12) | 0.380 |
| 4–6 | 0.86 (0.68–1.08) | 0.200 | 1.38 (1.01–1.9) | 0.046 | 0.68 (0.46–1.01) | 0.058 |
| >6 (highest) | 1.13 (0.83–1.53) | 0.436 | 1.68 (1.12–2.52) | 0.012 | 1.05 (0.63–1.75) | 0.849 |
| Charges | | | | | | |
| Low | Ref | | Ref | | Ref | |
| Medium | 1.69 (1.37–2.09) | <0.001 | 1.71 (1.3–2.26) | <0.001 | 1.15 (0.82–1.61) | 0.428 |
| High | 2.76 (2.19–3.47) | <0.001 | 3.31 (2.43–4.51) | <0.001 | 1.55 (1.09–2.22) | 0.016 |
| Service | | | | | | |
| Medical | 0.57 (0.5–0.66) | <0.001 | | | | |
| Surgical | Ref | | | | | |

*Charlson‐Deyo comorbidity index.
Multivariable modeling (Table 3) for all patients without an ICU encounter suggested that (1) patients aged <30 years, 30 to 49 years, and 50 to 69 years were more likely to report top‐decile RSRs than patients 70 years and older (OR: 1.61, 95% CI: 1.09–2.36; OR: 1.44, 95% CI: 1.08–1.93; and OR: 1.39, 95% CI: 1.13–1.71, respectively) and (2) patients with higher resource intensity scores were more likely than those with low scores to report top‐decile RSRs (moderate [OR: 1.42, 95% CI: 1.11–1.83], major [OR: 1.56, 95% CI: 1.22–2.01], and extreme [OR: 2.29, 95% CI: 1.8–2.92]). These results were relatively consistent within the medical and surgical subgroups (Table 3).
Table 3. Multivariable associations with top-decile RSRs (patients without an ICU encounter).

| | Overall: OR (95% CI) | Overall: P | Medical: OR (95% CI) | Medical: P | Surgical: OR (95% CI) | Surgical: P |
|---|---|---|---|---|---|---|
| Age, y | | | | | | |
| <30 | 1.61 (1.09–2.36) | 0.016 | 0.82 (0.4–1.7) | 0.596 | 2.31 (1.39–3.82) | 0.001 |
| 30–49 | 1.44 (1.08–1.93) | 0.014 | 1.55 (1.03–2.32) | 0.034 | 1.41 (0.91–2.17) | 0.120 |
| 50–69 | 1.39 (1.13–1.71) | 0.002 | 1.44 (1.1–1.88) | 0.008 | 1.39 (1–1.93) | 0.049 |
| >69 | Ref | | Ref | | Ref | |
| Sex | | | | | | |
| Male | 1 (0.85–1.17) | 0.964 | 1 (0.8–1.25) | 0.975 | 0.99 (0.79–1.26) | 0.965 |
| Female | Ref | | Ref | | Ref | |
| Payer | | | | | | |
| Public | 0.62 (0.14–2.8) | 0.531 | 0.42 (0.05–3.67) | 0.432 | 1.03 (0.12–8.59) | 0.978 |
| Private | 0.67 (0.15–3.02) | 0.599 | 0.42 (0.05–3.67) | 0.434 | 1.17 (0.14–9.69) | 0.884 |
| Charity | 1.54 (0.28–8.41) | 0.620 | 1 (0.09–11.13) | 0.999 | 2.56 (0.23–28.25) | 0.444 |
| Self | Ref | | Ref | | Ref | |
| Severity index | | | | | | |
| 0 (lowest) | Ref | | Ref | | Ref | |
| 1–3 | 1.07 (0.89–1.29) | 0.485 | 1.18 (0.88–1.58) | 0.267 | 1 (0.78–1.29) | 0.986 |
| 4–6 | 1.14 (0.86–1.51) | 0.377 | 1.42 (0.99–2.04) | 0.056 | 0.6 (0.33–1.1) | 0.100 |
| >6 (highest) | 1.31 (0.91–1.9) | 0.150 | 1.47 (0.93–2.33) | 0.097 | 1.1 (0.54–2.21) | 0.795 |
| Resource intensity score | | | | | | |
| Low | Ref | | Ref | | Ref | |
| Moderate | 1.42 (1.11–1.83) | 0.006 | 1.6 (1.11–2.3) | 0.011 | 0.94 (0.66–1.34) | 0.722 |
| Major | 1.56 (1.22–2.01) | 0.001 | 1.69 (1.18–2.43) | 0.004 | 1.28 (0.91–1.8) | 0.151 |
| Extreme | 2.29 (1.8–2.92) | <0.001 | 2.72 (1.94–3.82) | <0.001 | 1.63 (1.17–2.26) | 0.004 |
| Service | | | | | | |
| Medical | 0.59 (0.5–0.69) | <0.001 | | | | |
| Surgical | Ref | | | | | |
In those with at least 1 ICU attending encounter (see Supporting Table 1 in the online version of this article), no variables demonstrated a significant association with top‐decile RSRs in the overall group or in the medical subgroup. Among surgical patients with at least 1 ICU attending encounter, patients aged 30 to 49 and 50 to 69 years were more likely to provide top‐decile RSRs (OR: 1.93, 95% CI: 1.08–3.46 and OR: 1.65, 95% CI: 1.07–2.53, respectively). Resource intensity was not significantly associated with top‐decile RSRs.
DISCUSSION
Our analysis suggests that, for patients on the general care floors, resource utilization is associated with the RSR and, therefore, potentially the CMS Summary Star Rating. Adjusting for severity of illness, patients with higher resource utilization were more likely to report top decile RSRs.
Prior data regarding utilization and satisfaction are mixed. In a 2‐year, prospective, national examination, patients in the highest quartile of patient satisfaction had increased healthcare and prescription drug expenditures as well as increased rates of hospitalization when compared with patients in the lowest quartile of patient satisfaction.[9] However, a recent national study of surgical administrative databases suggested hospitals with high patient satisfaction provided more efficient care.[13]
One reason for the conflicting data may be that large, national evaluations are unable to control for between‐hospital confounders (ie, hospital quality of care). By capturing all eligible returned surveys at 1 institution, our design allowed us to collect granular data. We found that, within a single hospital (with the same setting, patient population, facilities, and food services), patients receiving more clinical resources generally assigned higher ratings than patients receiving fewer.
It is possible that utilization is a proxy for serious illness, and that patients with serious illness receive more attention during hospitalization and are more satisfied when discharged in a good state of health. However, we adjusted for severity of illness in our model using the Charlson‐Deyo index, and we suggest that, other factors being equal, hospitals with higher per‐patient expenditures may be assigned higher Summary Star Ratings.
CMS has recently implemented a number of metrics designed to decrease healthcare costs by improving quality, safety, and efficiency. Concurrently, CMS has also prioritized patient experience. The Summary Star Rating was created to provide healthcare consumers with an easy way to compare the patient experience between hospitals[4]; however, our data suggest that this metric may be at odds with inpatient cost savings and efficiency metrics.
Per‐patient spending becomes particularly salient when considering that in fiscal year 2016, CMS' hospital VBP reimbursement will include 2 metrics: an efficiency outcome measure labeled Medicare spending per beneficiary, and a patient experience outcome measure based on HCAHPS survey dimensions.[2] Together, these 2 metrics will comprise nearly half of the total VBP performance score used to determine reimbursement. Although our data suggest that these 2 VBP metrics may be correlated, it should be noted that we measured inpatient hospital charges, whereas the CMS efficiency outcome measure includes costs for an episode of care spanning 3 days prior to hospitalization to 30 days after hospitalization.
Patient expectations likely play a role in satisfaction.[14, 15, 16] In an outpatient setting, physician fulfillment of patient requests has been associated with positive patient evaluations of care.[17] However, patients appear to value education, shared decision making, and provider empathy more than testing and intervention.[14, 18, 19, 20, 21, 22, 23] Perhaps, in the absence of the former attributes, patients use additional resource expenditure as a proxy.
It is not clear that higher resource expenditure improves outcomes. A landmark study of nearly 1 million Medicare enrollees by Fisher et al. suggests that, although Medicare patients in higher‐spending regions receive more care than those in lower‐spending regions, this does not result in better health outcomes, specifically with regard to mortality.[24, 25] Patients who live in areas of high hospital capacity use the hospital more frequently than do patients in areas of low hospital capacity, but this does not appear to result in improved mortality rates.[26] In fact, physicians in areas of high healthcare capacity report more difficulty maintaining high‐quality patient relationships and feel less able to provide high‐quality care than physicians in lower‐capacity areas.[27]
We hypothesize that the association between resource utilization and patient satisfaction could arise because patients (1) perceive that a doctor who allows them a longer hospital stay or who performs additional testing cares more about their well‐being, and (2) feel more strongly that their concerns are being heard and addressed by their physicians. A systematic review of primary care patients identified many studies that found a positive association between meeting patient expectations and satisfaction with care, but also suggested that although patients frequently expect information, physicians misperceive this as an expectation of specific action.[28] A separate systematic review found that patient education in the form of decision aids can help patients develop more reasonable expectations and reduce utilization of certain discretionary procedures such as elective surgeries and prostate‐specific antigen testing.[29]
We did not specifically address clinical outcomes in our analysis because the clinical outcomes on which CMS currently adjusts VBP reimbursement focus on 30‐day mortality for specific diagnoses, nosocomial infections, and iatrogenic events.[30] Our data include only returned surveys from living patients, and it is likely that 30‐day mortality was similar throughout all subsets of patients. Additionally, the nosocomial and iatrogenic outcome measures used by CMS are sufficiently rare on the general floors that they are unlikely to have significantly influenced our results.[31]
Our study has several strengths. Nearly all medical and surgical patient surveys returned during the study period were included, and therefore our calculations are likely to accurately reflect the Summary Star Rating that would have been assigned for the period. Second, the large sample size helps attenuate potential differences in commonly used outcome metrics. Third, by adjusting for a variety of demographic and clinical variables, we were able to decrease the likelihood of unidentified confounders.
Notably, we identified 38 (0.4%) surveys returned for patients under 18 years of age at admission. These surveys were included in our analysis because, to the best of our knowledge, they would have existed in the pool of surveys CMS could have used to assign a Summary Star Rating.
Our study also has limitations. First, geographically diverse data are needed to ensure generalizability. Second, we used the Charlson‐Deyo Comorbidity Index to describe the degree of illness for each patient. This index represents a patient's total illness burden but may not describe the relative severity of the patient's current illness relative to another patient. Third, we selected variables we felt were most likely to be associated with patient experience, but unidentified confounding remains possible. Fourth, attendings caring for ICU patients fall within the Division of Critical Care/Pulmonary Medicine. Therefore, we may have inadvertently placed patients into the ICU cohort who received a pulmonary/critical care consult on the general floors. Fifth, our data describe associations only for patients who returned surveys. Although there may be inherent biases in patients who return surveys, HCAHPS survey responses are used by CMS to determine a hospital's overall satisfaction score.
CONCLUSION
For patients who return HCAHPS surveys, resource utilization may be positively associated with a hospital's Summary Star Rating. These data suggest that hospitals with higher per‐patient expenditures may receive higher Summary Star Ratings, which could result in hospitals with higher per‐patient resource utilization appearing more attractive to healthcare consumers. Future studies should attempt to confirm our findings at other institutions and to determine causative factors.
Acknowledgements
The authors thank Jason Machan, PhD (Department of Orthopedics and Surgery, Warren Alpert Medical School, Brown University, Providence, Rhode Island) for his help with study design, and Ms. Brenda Foster (data analyst, University of Rochester Medical Center, Rochester, NY) for her help with data collection.
Disclosures: Nothing to report.
- Redesigning physician compensation and improving ED performance. Healthc Financ Manage. 2011;65(6):114–117.
- QualityNet. Available at: https://www.qualitynet.org/dcs/ContentServer?c=Page97(13):1041–1048.
- Factors determining inpatient satisfaction with care. Soc Sci Med. 2002;54(4):493–504.
- Patient satisfaction revisited: a multilevel approach. Soc Sci Med. 2009;69(1):68–75.
- Predictors of patient satisfaction with hospital health care. BMC Health Serv Res. 2006;6:102.
- The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405–411.
- Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41–48.
- Becker's Infection Control and Clinical Quality. Star Ratings go live on Hospital Compare: how many hospitals got 5 stars? Available at: http://www.beckershospitalreview.com/quality/star‐ratings‐go‐live‐on‐hospital‐compare‐how‐many‐hospitals‐got‐5‐stars.html. Published April 16, 2015. Accessed October 5, 2015.
- Adapting a clinical comorbidity index for use with ICD‐9‐CM administrative databases. J Clin Epidemiol. 1992;45(6):613–619.
- Patient satisfaction and quality of surgical care in US hospitals. Ann Surg. 2015;261(1):2–8.
- Should health care providers be accountable for patients' care experiences? J Gen Intern Med. 2015;30(2):253–256.
- Unmet expectations for care and the patient‐physician relationship. J Gen Intern Med. 2002;17(11):817–824.
- Do unmet expectations for specific tests, referrals, and new medications reduce patients' satisfaction? J Gen Intern Med. 2004;19(11):1080–1087.
- Request fulfillment in office practice: antecedents and relationship to outcomes. Med Care. 2002;40(1):38–51.
- Factors associated with patient satisfaction with care among dermatological outpatients. Br J Dermatol. 2001;145(4):617–623.
- Patient expectations of emergency department care: phase II—a cross‐sectional survey. CJEM. 2006;8(3):148–157.
- Patients' perspectives on ideal physician behaviors. Mayo Clin Proc. 2006;81(3):338–344.
- What do people want from their health care? A qualitative study. J Participat Med. 2015;18:e10.
- Evaluations of care by adults following a denial of an advertisement‐related prescription drug request: the role of expectations, symptom severity, and physician communication style. Soc Sci Med. 2006;62(4):888–899.
- Getting to "no": strategies primary care physicians use to deny patient requests. Arch Intern Med. 2010;170(4):381–388.
- The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138(4):273–287.
- The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003;138(4):288–298.
- Associations among hospital capacity, utilization, and mortality of US Medicare beneficiaries, controlling for sociodemographic factors. Health Serv Res. 2000;34(6):1351–1362.
- Regional variations in health care intensity and physician perceptions of quality of care. Ann Intern Med. 2006;144(9):641–649.
- Visit‐specific expectations and patient‐centered outcomes: a literature review. Arch Fam Med. 2000;9(10):1148–1155.
- Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2014;1:CD001431.
- Centers for Medicare and Medicaid Services. Hospital Compare. Outcome domain. Available at: https://www.medicare.gov/hospitalcompare/data/outcome‐domain.html. Accessed October 5, 2015.
- Centers for Disease Control and Prevention. 2013 national and state healthcare‐associated infections progress report. Available at: www.cdc.gov/hai/progress‐report/index.html. Accessed October 5, 2015.
The patient experience has become increasingly important to healthcare in the United States. It is now a metric used commonly to determine physician compensation and accounts for nearly 30% of the Centers for Medicare and Medicaid Services' (CMS) Value‐Based Purchasing (VBP) reimbursement for fiscal years 2015 and 2016.[1, 2]
In April 2015, CMS added a 5‐star patient experience score to its Hospital Compare website in an attempt to address the Affordable Care Act's call for transparent and easily understandable public reporting.[3] A hospital's principal score is the Summary Star Rating, which is based on responses to the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. The formulas used to calculate Summary Star Ratings have been reported by CMS.[4]
Studies published over the past decade suggest that gender, age, education level, length of hospital stay, travel distance, and other factors may influence patient satisfaction.[5, 6, 7, 8] One study utilizing a national dataset suggested that higher patient satisfaction was associated with greater inpatient healthcare utilization and higher healthcare expenditures.[9] It is therefore possible that emphasizing patient experience scores could adversely impact healthcare resource utilization. However, positive patient experience may also be an important independent dimension of quality for patients and correlate with improved clinical outcomes.[10]
We know of no literature describing patient factors associated with the Summary Star Rating. Given that this rating is now used as a standard metric by which patient experience can be compared across more than 3,500 hospitals,[11] data describing the association between patient‐level factors and the Summary Star Rating may provide hospitals with an opportunity to target improvement efforts. We aimed to determine the degree to which resource utilization is associated with a satisfaction score based on the Summary Star Rating methodology.
METHODS
The study was conducted at the University of Rochester Medical Center (URMC), an 830‐bed tertiary care center in upstate New York. This was a retrospective review of all HCAHPS surveys returned to URMC over a 27‐month period from January 1, 2012 to April 1, 2014. URMC follows the standard CMS process for determining which patients receive surveys as follows. During the study timeframe, HCAHPS surveys were mailed to patients 18 years of age and older who had an inpatient stay spanning at least 1 midnight. Surveys were mailed within 5 days of discharge, and were generally returned within 6 weeks. URMC did not utilize telephone or email surveys during the study period. Surveys were not sent to patients who (1) were transferred to another facility, (2) were discharged to hospice, (3) died during the hospitalization, (4) received psychiatric or rehabilitative services during the hospitalization, (5) had an international address, and/or (6) were prisoners.
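The eligibility rules above amount to a simple filter. The sketch below is illustrative only; the record fields are hypothetical and do not reflect the actual CMS, vendor, or EMR schema.

```python
from dataclasses import dataclass

@dataclass
class Stay:
    """Hypothetical record of one inpatient stay (field names are
    illustrative, not an actual vendor or EMR schema)."""
    age: int
    midnights: int
    transferred: bool = False
    discharged_to_hospice: bool = False
    died: bool = False
    psych_or_rehab: bool = False
    international_address: bool = False
    prisoner: bool = False

def survey_eligible(stay: Stay) -> bool:
    """Apply the mailing criteria described above: adults with at least
    one inpatient midnight, minus the six exclusion categories."""
    if stay.age < 18 or stay.midnights < 1:
        return False
    return not (stay.transferred or stay.discharged_to_hospice or stay.died
                or stay.psych_or_rehab or stay.international_address
                or stay.prisoner)
```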
The survey vendor (Press Ganey, South Bend, IN) for URMC provided raw data for returned surveys with patient answers to questions. Administrative and billing databases were used to add demographic and clinical data for the corresponding hospitalization to the dataset. These data included age, gender, payer status (public, private, self, charity), length of stay, number of attendings who saw the patient (based on encounters documented in the electronic medical record (EMR)), all discharge International Classification of Diseases, 9th Revision (ICD‐9) diagnoses for the hospitalization, total charges for the hospitalization, and intensive care unit (ICU) utilization as evidenced by a documented encounter with a member of the Division of Critical Care/Pulmonary Medicine.
CMS analyzes surveys within 1 of 3 clinical service categories (medical, surgical, or obstetrics/gynecology) based on the discharging service. To parallel this approach, each returned survey was placed into 1 of these categories based on the clinical service of the discharging physician. Patients placed in the obstetrics/gynecology category (n = 1317, 13%) were excluded from the present analysis and will be examined separately, given inherent differences in patient characteristics that require evaluation of other variables.
Approximations of CMS Summary Star Rating
The HCAHPS survey is a multiple‐choice questionnaire that includes several domains of patient satisfaction. Respondents are asked to rate areas of satisfaction with their hospital experience on a Likert scale. CMS uses a weighted average of Likert responses to a subset of HCAHPS questions to calculate a hospital's raw score in 11 domains, as well as an overall raw summary score. CMS then adjusts each raw score for differences between hospitals (eg, clustering, improvement over time, method of survey) to determine a hospital's star rating in each domain and an overall Summary Star Rating (the Summary Star Rating is the primary factor by which consumers can compare hospitals).[4] Because our data were from a single hospital system, the between‐hospital scoring adjustments utilized by CMS were not applicable. Instead, we calculated the raw scores exactly as CMS does prior to the adjustments. Thus, our scores reflect the scores that CMS would have given URMC during the study period prior to standardized adjustments; we refer to this as the raw satisfaction rating (RSR). We calculated an RSR for every eligible survey. The RSR was calculated as a continuous variable from 0 (lowest) to 1 (highest). Detailed explanation of our RSR calculation is available in the Supporting Information in the online version of this article.
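As a hedged illustration of the general approach (the actual CMS question subset and weights are detailed in the Supporting Information; the domains, weights, and responses below are hypothetical), a raw score of this kind can be computed by rescaling each Likert response onto the 0–1 interval and taking a weighted average:

```python
# Illustrative only: not the CMS formula. The domains, weights, and
# responses are hypothetical.

def rescale(response: int, n_options: int) -> float:
    """Map a 1..n_options Likert response onto the 0-1 interval."""
    return (response - 1) / (n_options - 1)

def raw_satisfaction_rating(responses: dict, weights: dict) -> float:
    """Weighted average of rescaled Likert responses (weights sum to 1),
    yielding a continuous score from 0 (lowest) to 1 (highest)."""
    return sum(weights[domain] * rescale(answer, n_options)
               for domain, (answer, n_options) in responses.items())

responses = {
    "nurse_communication": (4, 4),   # "Always" on a 4-point scale
    "doctor_communication": (3, 4),  # "Usually" on a 4-point scale
    "overall_rating": (9, 11),       # 0-10 overall item coded as options 1-11
}
weights = {"nurse_communication": 0.4,
           "doctor_communication": 0.4,
           "overall_rating": 0.2}

rsr = raw_satisfaction_rating(responses, weights)  # bounded between 0 and 1
```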
Statistical Analysis
All analyses were performed in aggregate and by service (medical vs surgical). Categorical variables were summarized using frequencies with percentages. Comparisons across levels of categorical variables were performed with the χ² test. We report bivariate associations between the independent variables and RSRs in the top decile using unadjusted odds ratios (ORs) with 95% confidence intervals (CIs). Multivariable logistic regression was used for adjusted analyses. For the variables of severity of illness and resource intensity, the groups with the lowest illness severity and lowest resource use served as the reference groups. We modeled patients without an ICU encounter and with an ICU encounter separately.
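As a concrete check of the bivariate approach, an unadjusted odds ratio and Wald 95% CI can be recovered from a 2×2 table. The sketch below uses the standard textbook formula (not the code from our analysis) and reproduces the overall length-of-stay association (>6 days vs <3 days) reported in Table 2 from the counts in Table 1:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a/b = exposed with/without the outcome, c/d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Top-decile RSR by length of stay, using Table 1 counts:
# >6 d: 303 top decile, 1,900 below; <3 d: 226 top decile, 2,930 below.
or_, (lo, hi) = odds_ratio_ci(303, 1900, 226, 2930)
# Matches the OR of 2.07 (95% CI: 1.72-2.48) reported in Table 2.
```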
Charges, number of unique attendings encountered, and length of stay were highly correlated and likely represent measures of the same underlying construct of resource intensity; they therefore could not be entered into our models simultaneously. We combined these into a resource intensity score using factor analysis with a varimax rotation, and extracted factor scores for a single factor (supported by a scree plot). We then placed patients into 4 groups based on the distribution of the factor scores: low (<25th percentile), moderate (25th–50th percentile), major (50th–75th percentile), and extreme (>75th percentile).
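A minimal sketch of this construction on simulated data, assuming that the first principal component of the standardized measures stands in for the single extracted factor (with only one factor retained, a varimax rotation leaves the solution unchanged):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated, correlated resource measures driven by one latent
# "resource intensity" variable (purely illustrative).
latent = rng.normal(size=n)
charges = np.exp(10.0 + 1.0 * latent + 0.3 * rng.normal(size=n))
length_of_stay = np.exp(1.0 + 0.8 * latent + 0.3 * rng.normal(size=n))
n_attendings = np.exp(1.0 + 0.6 * latent + 0.3 * rng.normal(size=n))

# Standardize, then extract a single score per patient via the
# first principal component (a stand-in for one-factor extraction).
X = np.column_stack([np.log(charges), np.log(length_of_stay), np.log(n_attendings)])
Z = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ vt[0]  # projection onto the first component

# Quartile grouping: low / moderate / major / extreme.
q1, q2, q3 = np.percentile(scores, [25, 50, 75])
groups = np.empty(n, dtype=object)
groups[scores < q1] = "low"
groups[(scores >= q1) & (scores < q2)] = "moderate"
groups[(scores >= q2) & (scores < q3)] = "major"
groups[scores >= q3] = "extreme"
```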
We used the Charlson‐Deyo comorbidity score as our disease severity index.[12] The index assigns points to ICD‐9 diagnoses according to each diagnosis's impact on morbidity and sums the points into an overall score, providing a measure of disease severity based on both the number of diagnoses and the relative mortality associated with each. Scores were categorized as 0 (representing no major illness burden), 1 to 3, 4 to 6, and >6.
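Conceptually, the scoring and binning reduce to the sketch below; the weight map shows only an illustrative subset of the standard Charlson weights, not a full implementation of the index:

```python
# Illustrative subset of standard Charlson comorbidity weights; the full
# index spans 17 categories with weights of 1, 2, 3, or 6.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes_with_complications": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
}

def charlson_score(comorbidities: list) -> int:
    """Sum the weights of a patient's documented comorbidity categories."""
    return sum(CHARLSON_WEIGHTS[c] for c in comorbidities)

def severity_category(score: int) -> str:
    """Bin scores as in the analysis: 0, 1-3, 4-6, >6."""
    if score == 0:
        return "0"
    if score <= 3:
        return "1-3"
    if score <= 6:
        return "4-6"
    return ">6"

cat = severity_category(charlson_score(["congestive_heart_failure",
                                        "diabetes_with_complications"]))
```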
All statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC), and P values <0.05 were considered statistically significant. This study was approved by the institutional review board at the University of Rochester Medical Center.
RESULTS
Our initial search identified 10,007 returned surveys (29% of eligible patients returned surveys during the study period). Of these, 5059 (51%) were categorized as medical, 3630 (36%) as surgical, and 1317 (13%) as obstetrics/gynecology. One survey did not have the service of the discharging physician recorded and was excluded. Cohort demographics and relationship to RSRs in the top decile for the 8689 medical and surgical patients can be found in Table 1. The most common discharge diagnosis‐related groups (DRGs) for medical patients were 247, percutaneous cardiovascular procedure with drug‐eluting stent without major complications or comorbidities (MCC) (3.8%); 871, septicemia or severe sepsis without mechanical ventilation >96 hours with MCC (2.7%); and 392, esophagitis, gastroenteritis, and miscellaneous digestive disorders with MCC (2.3%). The most common DRGs for surgical patients were 460, spinal fusion except cervical without MCC (3.5%); 328, stomach, esophageal and duodenal procedure without complication or comorbidities or MCC (3.3%); and 491, back and neck procedure excluding spinal fusion without complication or comorbidities or MCC (3.1%).
Overall | Medical | Surgical | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Total | <90th | Top Decile | P | Total | <90th | Top Decile | P | Total | <90th | Top Decile | P | |
| ||||||||||||
Overall | 8,689 | 7,789 (90) | 900 (10) | 5,059 | 4,646 (92) | 413 (8) | 3,630 | 3,143 (87) | 487 (13) | |||
Age, y | ||||||||||||
<30 | 419 (5) | 371 (89) | 48 (12) | <0.001 | 218 (4) | 208 (95) | 10 (5) | <0.001 | 201 (6) | 163 (81) | 38 (19) | <0.001 |
30–49 | 1,029 (12) | 902 (88) | 127 (12) | 533 (11) | 482 (90) | 51 (10) | 496 (14) | 420 (85) | 76 (15) | | |
50–69 | 3,911 (45) | 3,450 (88) | 461 (12) | 2,136 (42) | 1,930 (90) | 206 (10) | 1,775 (49) | 1,520 (86) | 255 (14) | | |
>69 | 3,330 (38) | 3,066 (92) | 264 (8) | 2,172 (43) | 2,026 (93) | 146 (7) | 1,158 (32) | 1,040 (90) | 118 (10) | |||
Gender | ||||||||||||
Male | 4,640 (53) | 4,142 (89) | 498 (11) | 0.220 | 2,596 (51) | 2,379 (92) | 217 (8) | 0.602 | 2,044 (56) | 1,763 (86) | 281 (14) | 0.506 |
Female | 4,049 (47) | 3,647 (90) | 402 (10) | 2,463 (49) | 2,267 (92) | 196 (8) | 1,586 (44) | 1,380 (87) | 206 (13) | |||
ICU encounter | ||||||||||||
No | 7,122 (82) | 6,441 (90) | 681 (10) | <0.001 | 4,547 (90) | 4,193 (92) | 354 (8) | <0.001 | 2,575 (71) | 2,248 (87) | 327 (13) | 0.048 |
Yes | 1,567 (18) | 1,348 (86) | 219 (14) | 512 (10) | 453 (89) | 59 (12) | 1,055 (29) | 895 (85) | 160 (15) | |||
Payer | ||||||||||||
Public | 5,564 (64) | 5,036 (91) | 528 (10) | <0.001 | 3,424 (68) | 3,161 (92) | 263 (8) | 0.163 | 2,140 (59) | 1,875 (88) | 265 (12) | 0.148 |
Private | 3,064 (35) | 2,702 (88) | 362 (12) | 1,603 (32) | 1,458 (91) | 145 (9) | 1,461 (40) | 1,244 (85) | 217 (15) | |||
Charity | 45 (1) | 37 (82) | 8 (18) | 25 (1) | 21 (84) | 4 (16) | 20 (1) | 16 (80) | 4 (20) | |||
Self | 16 (0) | 14 (88) | 2 (13) | 7 (0) | 6 (86) | 1 (14) | 9 (0) | 8 (89) | 1 (11) | |||
Length of stay, d | ||||||||||||
<3 | 3,156 (36) | 2,930 (93) | 226 (7) | <0.001 | 1,961 (39) | 1,865 (95) | 96 (5) | <0.001 | 1,195 (33) | 1,065 (89) | 130 (11) | <0.001 |
3–6 | 3,330 (38) | 2,959 (89) | 371 (11) | 1,867 (37) | 1,702 (91) | 165 (9) | 1,463 (40) | 1,257 (86) | 206 (14) | | |
>6 | 2,203 (25) | 1,900 (86) | 303 (14) | 1,231 (24) | 1,079 (88) | 152 (12) | 972 (27) | 821 (85) | 151 (16) | |||
No. of attendings | ||||||||||||
<4 | 3,959 (46) | 3,615 (91) | 344 (9) | <0.001 | 2,307 (46) | 2,160 (94) | 147 (6) | <0.001 | 1,652 (46) | 1,455 (88) | 197 (12) | 0.052 |
4–6 | 3,067 (35) | 2,711 (88) | 356 (12) | 1,836 (36) | 1,663 (91) | 173 (9) | 1,231 (34) | 1,048 (85) | 183 (15) | | |
>6 | 1,663 (19) | 1,463 (88) | 200 (12) | 916 (18) | 823 (90) | 93 (10) | 747 (21) | 640 (86) | 107 (14) | |||
Severity index* | ||||||||||||
0 (lowest) | 2,812 (32) | 2,505 (89) | 307 (11) | 0.272 | 1,273 (25) | 1,185 (93) | 88 (7) | 0.045 | 1,539 (42) | 1,320 (86) | 219 (14) | 0.261 |
1–3 | 4,253 (49) | 3,827 (90) | 426 (10) | 2,604 (52) | 2,395 (92) | 209 (8) | 1,649 (45) | 1,432 (87) | 217 (13) | | |
4–6 | 1,163 (13) | 1,052 (91) | 111 (10) | 849 (17) | 770 (91) | 79 (9) | 314 (9) | 282 (90) | 32 (10) | | |
>6 (highest) | 461 (5) | 405 (88) | 56 (12) | 333 (7) | 296 (89) | 37 (11) | 128 (4) | 109 (85) | 19 (15) | |||
Charges | | | | | | | | | | | |
Low | 1,820 (21) | 1,707 (94) | 113 (6) | <0.001 | 1,426 (28) | 1,357 (95) | 69 (5) | <0.001 | 394 (11) | 350 (89) | 44 (11) | 0.007 |
Medium | 5,094 (59) | 4,581 (90) | 513 (10) | 2,807 (56) | 2,582 (92) | 225 (8) | 2,287 (63) | 1,999 (87) | 288 (13) | |||
High | 1,775 (20) | 1,501 (85) | 274 (15) | 826 (16) | 707 (86) | 119 (14) | 949 (26) | 794 (84) | 155 (16) |
Unadjusted analysis of medical and surgical patients identified significant associations of several variables with a top decile RSR (Table 2). Patients with longer lengths of stay (OR: 2.07, 95% CI: 1.72–2.48), more attendings (OR: 1.44, 95% CI: 1.19–1.73), and higher hospital charges (OR: 2.76, 95% CI: 2.19–3.47) were more likely to report an RSR in the top decile. Patients without an ICU encounter (OR: 0.65, 95% CI: 0.55–0.77) and on a medical service (OR: 0.57, 95% CI: 0.5–0.66) were less likely to report an RSR in the top decile. Several associations were identified in only the medical or surgical cohorts. In the medical cohort, patients with the highest illness severity index (OR: 1.68, 95% CI: 1.12–2.52) and with 7 or more different attending physicians (OR: 1.66, 95% CI: 1.27–2.18) were more likely to report RSRs in the top decile. In the surgical cohort, patients <30 years of age (OR: 2.05, 95% CI: 1.38–3.07) were more likely to report an RSR in the top decile than patients >69 years of age. Insurance payer category and gender were not significantly associated with top decile RSRs.
Overall | Medical | Surgical | ||||
---|---|---|---|---|---|---|
Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P |
 | ||||||
Age, y | ||||||
<30 | 1.5 (1.08–2.08) | 0.014 | 0.67 (0.35–1.29) | 0.227 | 2.05 (1.38–3.07) | <0.001
30–49 | 1.64 (1.31–2.05) | <0.001 | 1.47 (1.05–2.05) | 0.024 | 1.59 (1.17–2.17) | 0.003
50–69 | 1.55 (1.32–1.82) | <0.001 | 1.48 (1.19–1.85) | 0.001 | 1.48 (1.17–1.86) | 0.001
>69 | Ref | | Ref | | Ref |
Gender | ||||||
Male | 1.09 (0.95–1.25) | 0.220 | 1.06 (0.86–1.29) | 0.602 | 1.07 (0.88–1.3) | 0.506
Female | Ref | | Ref | | Ref |
ICU encounter | ||||||
No | 0.65 (0.55–0.77) | <0.001 | 0.65 (0.48–0.87) | 0.004 | 0.81 (0.66–1) | 0.048
Yes | Ref | | Ref | | Ref |
Payer | ||||||
Public | 0.73 (0.17–3.24) | 0.683 | 0.5 (0.06–4.16) | 0.521 | 1.13 (0.14–9.08) | 0.908
Private | 0.94 (0.21–4.14) | 0.933 | 0.6 (0.07–4.99) | 0.634 | 1.4 (0.17–11.21) | 0.754
Charity | 1.51 (0.29–8.02) | 0.626 | 1.14 (0.11–12.25) | 0.912 | 2 (0.19–20.97) | 0.563
Self | Ref | | Ref | | Ref |
Length of stay, d | ||||||
<3 | Ref | | Ref | | Ref |
3–6 | 1.63 (1.37–1.93) | <0.001 | 1.88 (1.45–2.44) | <0.001 | 1.34 (1.06–1.7) | 0.014
>6 | 2.07 (1.72–2.48) | <0.001 | 2.74 (2.1–3.57) | <0.001 | 1.51 (1.17–1.94) | 0.001
No. of attendings | ||||||
<4 | Ref | | Ref | | Ref |
4–6 | 1.38 (1.18–1.61) | <0.001 | 1.53 (1.22–1.92) | <0.001 | 1.29 (1.04–1.6) | 0.021
>6 | 1.44 (1.19–1.73) | <0.001 | 1.66 (1.27–2.18) | <0.001 | 1.23 (0.96–1.59) | 0.102
Severity index* | ||||||
0 (lowest) | Ref | | Ref | | Ref |
1–3 | 0.91 (0.78–1.06) | 0.224 | 1.18 (0.91–1.52) | 0.221 | 0.91 (0.75–1.12) | 0.380
4–6 | 0.86 (0.68–1.08) | 0.200 | 1.38 (1.01–1.9) | 0.046 | 0.68 (0.46–1.01) | 0.058
>6 (highest) | 1.13 (0.83–1.53) | 0.436 | 1.68 (1.12–2.52) | 0.012 | 1.05 (0.63–1.75) | 0.849
Charges | ||||||
Low | Ref | | Ref | | Ref |
Medium | 1.69 (1.37–2.09) | <0.001 | 1.71 (1.3–2.26) | <0.001 | 1.15 (0.82–1.61) | 0.428
High | 2.76 (2.19–3.47) | <0.001 | 3.31 (2.43–4.51) | <0.001 | 1.55 (1.09–2.22) | 0.016
Service | ||||||
Medical | 0.57 (0.5–0.66) | <0.001 | ||||
Surgical | Ref | |||||
Multivariable modeling (Table 3) for all patients without an ICU encounter suggested that (1) patients aged <30 years, 30 to 49 years, and 50 to 69 years were more likely to report top decile RSRs when compared to patients 70 years and older (OR: 1.61, 95% CI: 1.09–2.36; OR: 1.44, 95% CI: 1.08–1.93; and OR: 1.39, 95% CI: 1.13–1.71, respectively) and (2) when compared to patients with low resource intensity scores, patients with higher resource intensity scores were more likely to report top decile RSRs (moderate [OR: 1.42, 95% CI: 1.11–1.83], major [OR: 1.56, 95% CI: 1.22–2.01], and extreme [OR: 2.29, 95% CI: 1.8–2.92]). These results were relatively consistent within medical and surgical subgroups (Table 3).
Overall | Medical | Surgical | ||||
---|---|---|---|---|---|---|
Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P |
 | ||||||
Age, y | ||||||
<30 | 1.61 (1.09–2.36) | 0.016 | 0.82 (0.4–1.7) | 0.596 | 2.31 (1.39–3.82) | 0.001
30–49 | 1.44 (1.08–1.93) | 0.014 | 1.55 (1.03–2.32) | 0.034 | 1.41 (0.91–2.17) | 0.120
50–69 | 1.39 (1.13–1.71) | 0.002 | 1.44 (1.1–1.88) | 0.008 | 1.39 (1–1.93) | 0.049
>69 | Ref | | Ref | | Ref |
Sex | ||||||
Male | 1 (0.85–1.17) | 0.964 | 1 (0.8–1.25) | 0.975 | 0.99 (0.79–1.26) | 0.965
Female | Ref | | Ref | | Ref |
Payer | ||||||
Public | 0.62 (0.14–2.8) | 0.531 | 0.42 (0.05–3.67) | 0.432 | 1.03 (0.12–8.59) | 0.978
Private | 0.67 (0.15–3.02) | 0.599 | 0.42 (0.05–3.67) | 0.434 | 1.17 (0.14–9.69) | 0.884
Charity | 1.54 (0.28–8.41) | 0.620 | 1 (0.09–11.13) | 0.999 | 2.56 (0.23–28.25) | 0.444
Self | Ref | | Ref | | Ref |
Severity index | ||||||
0 (lowest) | Ref | | Ref | | Ref |
1–3 | 1.07 (0.89–1.29) | 0.485 | 1.18 (0.88–1.58) | 0.267 | 1 (0.78–1.29) | 0.986
4–6 | 1.14 (0.86–1.51) | 0.377 | 1.42 (0.99–2.04) | 0.056 | 0.6 (0.33–1.1) | 0.100
>6 (highest) | 1.31 (0.91–1.9) | 0.150 | 1.47 (0.93–2.33) | 0.097 | 1.1 (0.54–2.21) | 0.795
Resource intensity score | ||||||
Low | Ref | | Ref | | Ref |
Moderate | 1.42 (1.11–1.83) | 0.006 | 1.6 (1.11–2.3) | 0.011 | 0.94 (0.66–1.34) | 0.722
Major | 1.56 (1.22–2.01) | 0.001 | 1.69 (1.18–2.43) | 0.004 | 1.28 (0.91–1.8) | 0.151
Extreme | 2.29 (1.8–2.92) | <0.001 | 2.72 (1.94–3.82) | <0.001 | 1.63 (1.17–2.26) | 0.004
Service | ||||||
Medical | 0.59 (0.5–0.69) | <0.001 | ||||
Surgical | Ref | |||||
In those with at least 1 ICU attending encounter (see Supporting Table 1 in the online version of this article), no variables demonstrated significant association with top decile RSRs in the overall group or in the medical subgroup. For surgical patients with at least 1 ICU attending encounter, patients aged 30 to 49 and 50 to 69 years were more likely to provide top decile RSRs (OR: 1.93, 95% CI: 1.08–3.46 and OR: 1.65, 95% CI: 1.07–2.53, respectively). Resource intensity was not significantly associated with top decile RSRs.
DISCUSSION
Our analysis suggests that, for patients on the general care floors, resource utilization is associated with the RSR and, therefore, potentially the CMS Summary Star Rating. Adjusting for severity of illness, patients with higher resource utilization were more likely to report top decile RSRs.
Prior data regarding utilization and satisfaction are mixed. In a 2‐year, prospective, national examination, patients in the highest quartile of patient satisfaction had increased healthcare and prescription drug expenditures as well as increased rates of hospitalization when compared with patients in the lowest quartile of patient satisfaction.[9] However, a recent national study of surgical administrative databases suggested hospitals with high patient satisfaction provided more efficient care.[13]
One reason for the conflicting data may be that large, national evaluations are unable to control for between‐hospital confounders (ie, hospital quality of care). By capturing all eligible returned surveys at 1 institution, our design allowed us to collect granular data. We found that, within a single hospital (and thus the same setting, patient population, facilities, and food services), patients receiving more clinical resources generally assigned higher ratings than patients receiving fewer.
It is possible that utilization is a proxy for serious illness, and that patients with serious illness receive more attention during hospitalization and are more satisfied when discharged in a good state of health. However, we did adjust for severity of illness in our model using the Charlson‐Deyo index and we suggest that, other factors being equal, hospitals with higher per‐patient expenditures may be assigned higher Summary Star Ratings.
CMS has recently implemented a number of metrics designed to decrease healthcare costs by improving quality, safety, and efficiency. Concurrently, CMS has also prioritized patient experience. The Summary Star Rating was created to provide healthcare consumers with an easy way to compare the patient experience between hospitals[4]; however, our data suggest that this metric may be at odds with inpatient cost savings and efficiency metrics.
Per‐patient spending becomes particularly salient when considering that in fiscal year 2016, CMS' hospital VBP reimbursement will include 2 metrics: an efficiency outcome measure labeled Medicare spending per beneficiary, and a patient experience outcome measure based on HCAHPS survey dimensions.[2] Together, these 2 metrics will comprise nearly half of the total VBP performance score used to determine reimbursement. Although our data suggest that these 2 VBP metrics may be correlated, it should be noted that we measured inpatient hospital charges, whereas the CMS efficiency outcome measure includes costs for an episode of care spanning from 3 days prior to hospitalization to 30 days after hospitalization.
Patient expectations likely play a role in satisfaction.[14, 15, 16] In an outpatient setting, physician fulfillment of patient requests has been associated with positive patient evaluations of care.[17] However, patients appear to value education, shared decision making, and provider empathy more than testing and intervention.[14, 18, 19, 20, 21, 22, 23] Perhaps, in the absence of the former attributes, patients use additional resource expenditure as a proxy.
It is not clear that higher resource expenditure improves outcomes. A landmark study of nearly 1 million Medicare enrollees by Fisher et al. suggests that, although Medicare patients in higher‐spending regions receive more care than those in lower‐spending regions, this does not result in better health outcomes, specifically with regard to mortality.[24, 25] Patients who live in areas of high hospital capacity use the hospital more frequently than do patients in areas of low hospital capacity, but this does not appear to result in improved mortality rates.[26] In fact, physicians in areas of high healthcare capacity report more difficulty maintaining high‐quality patient relationships and feel less able to provide high‐quality care than physicians in lower‐capacity areas.[27]
We hypothesize that the association between resource utilization and patient satisfaction could arise because patients (1) perceive that a doctor who allows them to stay longer in the hospital or who performs additional testing cares more about their well‐being, and (2) feel more strongly that their concerns are being heard and addressed by their physicians. A systematic review of primary care patients identified many studies that found a positive association between meeting patient expectations and satisfaction with care, but also suggested that although patients frequently expect information, physicians misperceive this as an expectation of specific action.[28] A separate systematic review found that patient education in the form of decision aids can help patients develop more reasonable expectations and reduce utilization of certain discretionary procedures such as elective surgeries and prostate‐specific antigen testing.[29]
We did not specifically address clinical outcomes in our analysis because the clinical outcomes on which CMS currently adjusts VBP reimbursement focus on 30‐day mortality for specific diagnoses, nosocomial infections, and iatrogenic events.[30] Our data include only returned surveys from living patients, and it is likely that 30‐day mortality was similar throughout all subsets of patients. Additionally, the nosocomial and iatrogenic outcome measures used by CMS are sufficiently rare on the general floors that they are unlikely to have significantly influenced our results.[31]
Our study has several strengths. Nearly all medical and surgical patient surveys returned during the study period were included, and therefore our calculations are likely to accurately reflect the Summary Star Rating that would have been assigned for the period. Second, the large sample size helps attenuate potential differences in commonly used outcome metrics. Third, by adjusting for a variety of demographic and clinical variables, we were able to decrease the likelihood of unidentified confounders.
Notably, we identified 38 (0.4%) surveys returned for patients under 18 years of age at admission. These surveys were included in our analysis because, to the best of our knowledge, they would have existed in the pool of surveys CMS could have used to assign a Summary Star Rating.
Our study also has limitations. First, geographically diverse data are needed to ensure generalizability. Second, we used the Charlson‐Deyo Comorbidity Index to describe the degree of illness for each patient. This index represents a patient's total illness burden but may not capture the severity of the patient's current illness relative to that of another patient. Third, we selected variables we felt were most likely to be associated with patient experience, but unidentified confounding remains possible. Fourth, attendings caring for ICU patients fall within the Division of Critical Care/Pulmonary Medicine. Therefore, we may have inadvertently placed patients into the ICU cohort who received a pulmonary/critical care consult on the general floors. Fifth, our data describe associations only for patients who returned surveys. Although there may be inherent biases in patients who return surveys, HCAHPS survey responses are used by CMS to determine a hospital's overall satisfaction score.
CONCLUSION
For patients who return HCAHPS surveys, resource utilization may be positively associated with a hospital's Summary Star Rating. These data suggest that hospitals with higher per‐patient expenditures may receive higher Summary Star Ratings, which could result in hospitals with higher per‐patient resource utilization appearing more attractive to healthcare consumers. Future studies should attempt to confirm our findings at other institutions and to determine causative factors.
Acknowledgements
The authors thank Jason Machan, PhD (Department of Orthopedics and Surgery, Warren Alpert Medical School, Brown University, Providence, Rhode Island) for his help with study design, and Ms. Brenda Foster (data analyst, University of Rochester Medical Center, Rochester, NY) for her help with data collection.
Disclosures: Nothing to report.
The patient experience has become increasingly important to healthcare in the United States. It is now a metric used commonly to determine physician compensation and accounts for nearly 30% of the Centers for Medicare and Medicaid Services' (CMS) Value‐Based Purchasing (VBP) reimbursement for fiscal years 2015 and 2016.[1, 2]
In April 2015, CMS added a 5‐star patient experience score to its Hospital Compare website in an attempt to address the Affordable Care Act's call for transparent and easily understandable public reporting.[3] A hospital's principal score is the Summary Star Rating, which is based on responses to the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. The formulas used to calculate Summary Star Ratings have been reported by CMS.[4]
Studies published over the past decade suggest that gender, age, education level, length of hospital stay, travel distance, and other factors may influence patient satisfaction.[5, 6, 7, 8] One study utilizing a national dataset suggested that higher patient satisfaction was associated with greater inpatient healthcare utilization and higher healthcare expenditures.[9] It is therefore possible that emphasizing patient experience scores could adversely impact healthcare resource utilization. However, positive patient experience may also be an important independent dimension of quality for patients and correlate with improved clinical outcomes.[10]
We know of no literature describing patient factors associated with the Summary Star Rating. Given that this rating is now used as a standard metric by which patient experience can be compared across more than 3,500 hospitals,[11] data describing the association between patient‐level factors and the Summary Star Rating may provide hospitals with an opportunity to target improvement efforts. We aimed to determine the degree to which resource utilization is associated with a satisfaction score based on the Summary Star Rating methodology.
METHODS
The study was conducted at the University of Rochester Medical Center (URMC), an 830‐bed tertiary care center in upstate New York. This was a retrospective review of all HCAHPS surveys returned to URMC over a 27‐month period from January 1, 2012 to April 1, 2014. URMC follows the standard CMS process for determining which patients receive surveys as follows. During the study timeframe, HCAHPS surveys were mailed to patients 18 years of age and older who had an inpatient stay spanning at least 1 midnight. Surveys were mailed within 5 days of discharge, and were generally returned within 6 weeks. URMC did not utilize telephone or email surveys during the study period. Surveys were not sent to patients who (1) were transferred to another facility, (2) were discharged to hospice, (3) died during the hospitalization, (4) received psychiatric or rehabilitative services during the hospitalization, (5) had an international address, and/or (6) were prisoners.
The survey vendor (Press Ganey, South Bend, IN) for URMC provided raw data for returned surveys with patient answers to questions. Administrative and billing databases were used to add demographic and clinical data for the corresponding hospitalization to the dataset. These data included age, gender, payer status (public, private, self, charity), length of stay, number of attendings who saw the patient (based on encounters documented in the electronic medical record (EMR)), all discharge International Classification of Diseases, 9th Revision (ICD‐9) diagnoses for the hospitalization, total charges for the hospitalization, and intensive care unit (ICU) utilization as evidenced by a documented encounter with a member of the Division of Critical Care/Pulmonary Medicine.
CMS analyzes surveys within 1 of 3 clinical service categories (medical, surgical, or obstetrics/gynecology) based on the discharging service. To parallel this approach, each returned survey was placed into 1 of these categories based on the clinical service of the discharging physician. Patients placed in the obstetrics/gynecology category (n = 1317, 13%) will be analyzed in a future analysis given inherent differences in patient characteristics that require evaluation of other variables.
Approximations of CMS Summary Star Rating
The HCAHPS survey is a multiple‐choice questionnaire that includes several domains of patient satisfaction. Respondents are asked to rate areas of satisfaction with their hospital experience on a Likert scale. CMS uses a weighted average of Likert responses to a subset of HCAHPS questions to calculate a hospital's raw score in 11 domains, as well as an overall raw summary score. CMS then adjusts each raw score for differences between hospitals (eg, clustering, improvement over time, method of survey) to determine a hospital's star rating in each domain and an overall Summary Star Rating (the Summary Star Rating is the primary factor by which consumers can compare hospitals).[4] Because our data were from a single hospital system, the between‐hospital scoring adjustments utilized by CMS were not applicable. Instead, we calculated the raw scores exactly as CMS does prior to the adjustments. Thus, our scores reflect the scores that CMS would have given URMC during the study period prior to standardized adjustments; we refer to this as the raw satisfaction rating (RSR). We calculated an RSR for every eligible survey. The RSR was calculated as a continuous variable from 0 (lowest) to 1 (highest). Detailed explanation of our RSR calculation is available in the Supporting Information in the online version of this article.
Statistical Analysis
All analyses were performed in aggregate and by service (medical vs surgical). Categorical variables were summarized using frequencies with percentages. Comparisons across levels of categorical variables were performed with the 2 test. We report bivariate associations between the independent variables and RSRs in the top decile using unadjusted odds ratios (ORs) with 95% confidence intervals (CIs). Similarly, multivariable logistic regression was used for adjusted analyses. For the variables of severity of illness and resource intensity, the group with the lowest illness severity and lowest resource use served as the reference groups. We modeled patients without an ICU encounter and with an ICU encounter separately.
Charges, number of unique attendings encountered, and lengths of stay were highly correlated, and likely various measures of the same underlying construct of resource intensity, and therefore could not be entered into our models simultaneously. We combined these into a resource intensity score using factor analysis with a varimax rotation, and extracted factor scores for a single factor (supported by a scree plot). We then placed patients into 4 groups based on the distribution of the factor scores: low (<25th percentile), moderate (25th50th percentile), major (50th75th percentile), and extreme (>75th percentile).
We used the Charlson‐Deyo comorbidity score as our disease severity index.[12] The index assigns points to ICD‐9 diagnoses according to each diagnosis's impact on morbidity, and the points are summed to an overall score. This provides a measure of disease severity based on the number of diagnoses and the relative mortality associated with each. Scores were categorized as 0 (representing no major illness burden), 1 to 3, 4 to 6, and >6.
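A minimal sketch of the scoring and the study's categorization follows. The diagnosis weights shown are an illustrative subset assumed for the example; the full Charlson-Deyo index covers many more ICD-9-derived conditions.

```python
# illustrative subset of diagnosis weights (assumed for this sketch)
WEIGHTS = {"myocardial_infarction": 1, "diabetes": 1, "metastatic_tumor": 6}

def charlson_score(diagnoses):
    """Sum the point weight of each qualifying diagnosis."""
    return sum(WEIGHTS[d] for d in diagnoses)

def severity_category(score):
    """Bucket a score into the study's four severity groups."""
    if score == 0:
        return "0"      # no major illness burden
    if score <= 3:
        return "1-3"
    if score <= 6:
        return "4-6"
    return ">6"
```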
All statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC), and P values <0.05 were considered statistically significant. This study was approved by the institutional review board at the University of Rochester Medical Center.
RESULTS
Our initial search identified 10,007 returned surveys (29% of eligible patients returned surveys during the study period). Of these, 5,059 (51%) were categorized as medical, 3,630 (36%) as surgical, and 1,317 (13%) as obstetrics/gynecology. One survey did not have the service of the discharging physician recorded and was excluded. Cohort demographics and their relationship to RSRs in the top decile for the 8,689 medical and surgical patients can be found in Table 1. The most common discharge diagnosis‐related groups (DRGs) for medical patients were 247, percutaneous cardiovascular procedure with drug‐eluting stent without major complications or comorbidities (MCC) (3.8%); 871, septicemia or severe sepsis without mechanical ventilation >96 hours with MCC (2.7%); and 392, esophagitis, gastroenteritis, and miscellaneous digestive disorders without MCC (2.3%). The most common DRGs for surgical patients were 460, spinal fusion except cervical without MCC (3.5%); 328, stomach, esophageal, and duodenal procedure without complication or comorbidities or MCC (3.3%); and 491, back and neck procedure excluding spinal fusion without complication or comorbidities or MCC (3.1%).
Overall | Medical | Surgical | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Total | <90th | Top Decile | P | Total | <90th | Top Decile | P | Total | <90th | Top Decile | P | |
| ||||||||||||
Overall | 8,689 | 7,789 (90) | 900 (10) | 5,059 | 4,646 (92) | 413 (8) | 3,630 | 3,143 (87) | 487 (13) | |||
Age, y | ||||||||||||
<30 | 419 (5) | 371 (89) | 48 (12) | <0.001 | 218 (4) | 208 (95) | 10 (5) | <0.001 | 201 (6) | 163 (81) | 38 (19) | <0.001 |
30‐49 | 1,029 (12) | 902 (88) | 127 (12) | 533 (11) | 482 (90) | 51 (10) | 496 (14) | 420 (85) | 76 (15) | | |
50‐69 | 3,911 (45) | 3,450 (88) | 461 (12) | 2,136 (42) | 1,930 (90) | 206 (10) | 1,775 (49) | 1,520 (86) | 255 (14) | | |
>69 | 3,330 (38) | 3,066 (92) | 264 (8) | 2,172 (43) | 2,026 (93) | 146 (7) | 1,158 (32) | 1,040 (90) | 118 (10) | |||
Gender | ||||||||||||
Male | 4,640 (53) | 4,142 (89) | 498 (11) | 0.220 | 2,596 (51) | 2,379 (92) | 217 (8) | 0.602 | 2,044 (56) | 1,763 (86) | 281 (14) | 0.506 |
Female | 4,049 (47) | 3,647 (90) | 402 (10) | 2,463 (49) | 2,267 (92) | 196 (8) | 1,586 (44) | 1,380 (87) | 206 (13) | |||
ICU encounter | ||||||||||||
No | 7,122 (82) | 6,441 (90) | 681 (10) | <0.001 | 4,547 (90) | 4,193 (92) | 354 (8) | <0.001 | 2,575 (71) | 2,248 (87) | 327 (13) | 0.048 |
Yes | 1,567 (18) | 1,348 (86) | 219 (14) | 512 (10) | 453 (89) | 59 (12) | 1,055 (29) | 895 (85) | 160 (15) | |||
Payer | ||||||||||||
Public | 5,564 (64) | 5,036 (91) | 528 (10) | <0.001 | 3,424 (68) | 3,161 (92) | 263 (8) | 0.163 | 2,140 (59) | 1,875 (88) | 265 (12) | 0.148 |
Private | 3,064 (35) | 2,702 (88) | 362 (12) | 1,603 (32) | 1,458 (91) | 145 (9) | 1,461 (40) | 1,244 (85) | 217 (15) | |||
Charity | 45 (1) | 37 (82) | 8 (18) | 25 (1) | 21 (84) | 4 (16) | 20 (1) | 16 (80) | 4 (20) | |||
Self | 16 (0) | 14 (88) | 2 (13) | 7 (0) | 6 (86) | 1 (14) | 9 (0) | 8 (89) | 1 (11) | |||
Length of stay, d | ||||||||||||
<3 | 3,156 (36) | 2,930 (93) | 226 (7) | <0.001 | 1,961 (39) | 1,865 (95) | 96 (5) | <0.001 | 1,195 (33) | 1,065 (89) | 130 (11) | <0.001 |
3‐6 | 3,330 (38) | 2,959 (89) | 371 (11) | 1,867 (37) | 1,702 (91) | 165 (9) | 1,463 (40) | 1,257 (86) | 206 (14) | | |
>6 | 2,203 (25) | 1,900 (86) | 303 (14) | 1,231 (24) | 1,079 (88) | 152 (12) | 972 (27) | 821 (85) | 151 (16) | |||
No. of attendings | ||||||||||||
<4 | 3,959 (46) | 3,615 (91) | 344 (9) | <0.001 | 2,307 (46) | 2,160 (94) | 147 (6) | <0.001 | 1,652 (46) | 1,455 (88) | 197 (12) | 0.052 |
4‐6 | 3,067 (35) | 2,711 (88) | 356 (12) | 1,836 (36) | 1,663 (91) | 173 (9) | 1,231 (34) | 1,048 (85) | 183 (15) | | |
>6 | 1,663 (19) | 1,463 (88) | 200 (12) | 916 (18) | 823 (90) | 93 (10) | 747 (21) | 640 (86) | 107 (14) | |||
Severity index* | ||||||||||||
0 (lowest) | 2,812 (32) | 2,505 (89) | 307 (11) | 0.272 | 1,273 (25) | 1,185 (93) | 88 (7) | 0.045 | 1,539 (42) | 1,320 (86) | 219 (14) | 0.261 |
1‐3 | 4,253 (49) | 3,827 (90) | 426 (10) | 2,604 (52) | 2,395 (92) | 209 (8) | 1,649 (45) | 1,432 (87) | 217 (13) | | |
4‐6 | 1,163 (13) | 1,052 (91) | 111 (10) | 849 (17) | 770 (91) | 79 (9) | 314 (9) | 282 (90) | 32 (10) | | |
>6 (highest) | 461 (5) | 405 (88) | 56 (12) | 333 (7) | 296 (89) | 37 (11) | 128 (4) | 109 (85) | 19 (15) | |||
Charges | | | | | | | | | | | |
Low | 1,820 (21) | 1,707 (94) | 113 (6) | <0.001 | 1,426 (28) | 1,357 (95) | 69 (5) | <0.001 | 394 (11) | 350 (89) | 44 (11) | 0.007 |
Medium | 5,094 (59) | 4,581 (90) | 513 (10) | 2,807 (56) | 2,582 (92) | 225 (8) | 2,287 (63) | 1,999 (87) | 288 (13) | |||
High | 1,775 (20) | 1,501 (85) | 274 (15) | 826 (16) | 707 (86) | 119 (14) | 949 (26) | 794 (84) | 155 (16) |
Unadjusted analysis of medical and surgical patients identified significant associations between several variables and a top decile RSR (Table 2). Patients with longer lengths of stay (OR: 2.07, 95% CI: 1.72‐2.48), more attendings (OR: 1.44, 95% CI: 1.19‐1.73), and higher hospital charges (OR: 2.76, 95% CI: 2.19‐3.47) were more likely to report an RSR in the top decile. Patients without an ICU encounter (OR: 0.65, 95% CI: 0.55‐0.77) and on a medical service (OR: 0.57, 95% CI: 0.5‐0.66) were less likely to report an RSR in the top decile. Several associations were identified in only the medical or surgical cohorts. In the medical cohort, patients with the highest illness severity index (OR: 1.68, 95% CI: 1.12‐2.52) and those with >6 different attending physicians (OR: 1.66, 95% CI: 1.27‐2.18) were more likely to report RSRs in the top decile. In the surgical cohort, patients <30 years of age (OR: 2.05, 95% CI: 1.38‐3.07) were more likely to report an RSR in the top decile than patients >69 years of age. Insurance payer category and gender were not significantly associated with top decile RSRs.
Overall | Medical | Surgical | ||||
---|---|---|---|---|---|---|
Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P | |
| ||||||
Age, y | ||||||
<30 | 1.5 (1.08‐2.08) | 0.014 | 0.67 (0.35‐1.29) | 0.227 | 2.05 (1.38‐3.07) | <0.001
30‐49 | 1.64 (1.31‐2.05) | <0.001 | 1.47 (1.05‐2.05) | 0.024 | 1.59 (1.17‐2.17) | 0.003
50‐69 | 1.55 (1.32‐1.82) | <0.001 | 1.48 (1.19‐1.85) | 0.001 | 1.48 (1.17‐1.86) | 0.001
>69 | Ref | Ref | Ref | |||
Gender | ||||||
Male | 1.09 (0.95‐1.25) | 0.220 | 1.06 (0.86‐1.29) | 0.602 | 1.07 (0.88‐1.3) | 0.506
Female | Ref | Ref | Ref | |||
ICU encounter | ||||||
No | 0.65 (0.55‐0.77) | <0.001 | 0.65 (0.48‐0.87) | 0.004 | 0.81 (0.66‐1) | 0.048
Yes | Ref | Ref | Ref | |||
Payer | ||||||
Public | 0.73 (0.17‐3.24) | 0.683 | 0.5 (0.06‐4.16) | 0.521 | 1.13 (0.14‐9.08) | 0.908
Private | 0.94 (0.21‐4.14) | 0.933 | 0.6 (0.07‐4.99) | 0.634 | 1.4 (0.17‐11.21) | 0.754
Charity | 1.51 (0.29‐8.02) | 0.626 | 1.14 (0.11‐12.25) | 0.912 | 2 (0.19‐20.97) | 0.563
Self | Ref | Ref | Ref | |||
Length of stay, d | ||||||
<3 | Ref | Ref | Ref | |||
3‐6 | 1.63 (1.37‐1.93) | <0.001 | 1.88 (1.45‐2.44) | <0.001 | 1.34 (1.06‐1.7) | 0.014
>6 | 2.07 (1.72‐2.48) | <0.001 | 2.74 (2.1‐3.57) | <0.001 | 1.51 (1.17‐1.94) | 0.001
No. of attendings | ||||||
<4 | Ref | Ref | Ref | |||
4‐6 | 1.38 (1.18‐1.61) | <0.001 | 1.53 (1.22‐1.92) | <0.001 | 1.29 (1.04‐1.6) | 0.021
>6 | 1.44 (1.19‐1.73) | <0.001 | 1.66 (1.27‐2.18) | <0.001 | 1.23 (0.96‐1.59) | 0.102
Severity index* | ||||||
0 (lowest) | Ref | Ref | Ref | |||
1‐3 | 0.91 (0.78‐1.06) | 0.224 | 1.18 (0.91‐1.52) | 0.221 | 0.91 (0.75‐1.12) | 0.380
4‐6 | 0.86 (0.68‐1.08) | 0.200 | 1.38 (1.01‐1.9) | 0.046 | 0.68 (0.46‐1.01) | 0.058
>6 (highest) | 1.13 (0.83‐1.53) | 0.436 | 1.68 (1.12‐2.52) | 0.012 | 1.05 (0.63‐1.75) | 0.849
Charges | ||||||
Low | Ref | Ref | Ref | |||
Medium | 1.69 (1.37‐2.09) | <0.001 | 1.71 (1.3‐2.26) | <0.001 | 1.15 (0.82‐1.61) | 0.428
High | 2.76 (2.19‐3.47) | <0.001 | 3.31 (2.43‐4.51) | <0.001 | 1.55 (1.09‐2.22) | 0.016
Service | ||||||
Medical | 0.57 (0.5‐0.66) | <0.001 | | | |
Surgical | Ref |
Multivariable modeling (Table 3) for all patients without an ICU encounter suggested that (1) patients aged <30 years, 30 to 49 years, and 50 to 69 years were more likely to report top decile RSRs when compared to patients 70 years and older (OR: 1.61, 95% CI: 1.09‐2.36; OR: 1.44, 95% CI: 1.08‐1.93; and OR: 1.39, 95% CI: 1.13‐1.71, respectively) and (2) when compared to patients with low resource intensity scores, patients with higher resource intensity scores were more likely to report top decile RSRs (moderate [OR: 1.42, 95% CI: 1.11‐1.83], major [OR: 1.56, 95% CI: 1.22‐2.01], and extreme [OR: 2.29, 95% CI: 1.8‐2.92]). These results were relatively consistent within the medical and surgical subgroups (Table 3).
Overall | Medical | Surgical | ||||
---|---|---|---|---|---|---|
Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P | Odds Ratio (95% CI) | P | |
| ||||||
Age, y | ||||||
<30 | 1.61 (1.09‐2.36) | 0.016 | 0.82 (0.4‐1.7) | 0.596 | 2.31 (1.39‐3.82) | 0.001
30‐49 | 1.44 (1.08‐1.93) | 0.014 | 1.55 (1.03‐2.32) | 0.034 | 1.41 (0.91‐2.17) | 0.120
50‐69 | 1.39 (1.13‐1.71) | 0.002 | 1.44 (1.1‐1.88) | 0.008 | 1.39 (1‐1.93) | 0.049
>69 | Ref | Ref | Ref | |||
Sex | ||||||
Male | 1 (0.85‐1.17) | 0.964 | 1 (0.8‐1.25) | 0.975 | 0.99 (0.79‐1.26) | 0.965
Female | Ref | Ref | Ref | |||
Payer | ||||||
Public | 0.62 (0.14‐2.8) | 0.531 | 0.42 (0.05‐3.67) | 0.432 | 1.03 (0.12‐8.59) | 0.978
Private | 0.67 (0.15‐3.02) | 0.599 | 0.42 (0.05‐3.67) | 0.434 | 1.17 (0.14‐9.69) | 0.884
Charity | 1.54 (0.28‐8.41) | 0.620 | 1 (0.09‐11.13) | 0.999 | 2.56 (0.23‐28.25) | 0.444
Self | Ref | Ref | Ref | |||
Severity index | ||||||
0 (lowest) | Ref | Ref | Ref | |||
1‐3 | 1.07 (0.89‐1.29) | 0.485 | 1.18 (0.88‐1.58) | 0.267 | 1 (0.78‐1.29) | 0.986
4‐6 | 1.14 (0.86‐1.51) | 0.377 | 1.42 (0.99‐2.04) | 0.056 | 0.6 (0.33‐1.1) | 0.100
>6 (highest) | 1.31 (0.91‐1.9) | 0.150 | 1.47 (0.93‐2.33) | 0.097 | 1.1 (0.54‐2.21) | 0.795
Resource intensity score | ||||||
Low | Ref | Ref | Ref | |||
Moderate | 1.42 (1.11‐1.83) | 0.006 | 1.6 (1.11‐2.3) | 0.011 | 0.94 (0.66‐1.34) | 0.722
Major | 1.56 (1.22‐2.01) | 0.001 | 1.69 (1.18‐2.43) | 0.004 | 1.28 (0.91‐1.8) | 0.151
Extreme | 2.29 (1.8‐2.92) | <0.001 | 2.72 (1.94‐3.82) | <0.001 | 1.63 (1.17‐2.26) | 0.004
Service | ||||||
Medical | 0.59 (0.5‐0.69) | <0.001 | | |
Surgical | Ref |
In those with at least 1 ICU attending encounter (see Supporting Table 1 in the online version of this article), no variables demonstrated a significant association with top decile RSRs in the overall group or in the medical subgroup. Among surgical patients with at least 1 ICU attending encounter, patients aged 30 to 49 and 50 to 69 years were more likely to provide top decile RSRs (OR: 1.93, 95% CI: 1.08‐3.46 and OR: 1.65, 95% CI: 1.07‐2.53, respectively). Resource intensity was not significantly associated with top decile RSRs.
DISCUSSION
Our analysis suggests that, for patients on the general care floors, resource utilization is associated with the RSR and, therefore, potentially the CMS Summary Star Rating. Adjusting for severity of illness, patients with higher resource utilization were more likely to report top decile RSRs.
Prior data regarding utilization and satisfaction are mixed. In a 2‐year, prospective, national examination, patients in the highest quartile of patient satisfaction had increased healthcare and prescription drug expenditures as well as increased rates of hospitalization when compared with patients in the lowest quartile of patient satisfaction.[9] However, a recent national study of surgical administrative databases suggested that hospitals with high patient satisfaction provided more efficient care.[13]
One reason for the conflicting data may be that large, national evaluations are unable to control for between‐hospital confounders (eg, hospital quality of care). By capturing all eligible returned surveys at 1 institution, our design allowed us to collect granular data. We found that, within a single hospital setting, patient population, and set of facilities and food services, patients receiving more clinical resources generally assigned higher ratings than patients receiving fewer.
It is possible that utilization is a proxy for serious illness, and that patients with serious illness receive more attention during hospitalization and are more satisfied when discharged in a good state of health. However, we did adjust for severity of illness in our model using the Charlson‐Deyo index and we suggest that, other factors being equal, hospitals with higher per‐patient expenditures may be assigned higher Summary Star Ratings.
CMS has recently implemented a number of metrics designed to decrease healthcare costs by improving quality, safety, and efficiency. Concurrently, CMS has also prioritized patient experience. The Summary Star Rating was created to provide healthcare consumers with an easy way to compare the patient experience between hospitals[4]; however, our data suggest that this metric may be at odds with inpatient cost savings and efficiency metrics.
Per‐patient spending becomes particularly salient when considering that in fiscal year 2016, CMS' hospital VBP reimbursement will include 2 metrics: an efficiency outcome measure labeled Medicare spending per beneficiary, and a patient experience outcome measure based on HCAHPS survey dimensions.[2] Together, these 2 metrics will comprise nearly half of the total VBP performance score used to determine reimbursement. Although our data suggest that these 2 VBP metrics may be correlated, it should be noted that we measured inpatient hospital charges, whereas the CMS efficiency outcome measure includes costs for an episode of care spanning from 3 days prior to hospitalization to 30 days after hospitalization.
Patient expectations likely play a role in satisfaction.[14, 15, 16] In an outpatient setting, physician fulfillment of patient requests has been associated with positive patient evaluations of care.[17] However, patients appear to value education, shared decision making, and provider empathy more than testing and intervention.[14, 18, 19, 20, 21, 22, 23] Perhaps, in the absence of the former attributes, patients use additional resource expenditure as a proxy.
It is not clear that higher resource expenditure improves outcomes. A landmark study of nearly 1 million Medicare enrollees by Fisher et al. suggests that, although Medicare patients in higher‐spending regions receive more care than those in lower‐spending regions, this does not result in better health outcomes, specifically with regard to mortality.[24, 25] Patients who live in areas of high hospital capacity use the hospital more frequently than do patients in areas of low hospital capacity, but this does not appear to result in improved mortality rates.[26] In fact, physicians in areas of high healthcare capacity report more difficulty maintaining high‐quality patient relationships and feel less able to provide high‐quality care than physicians in lower‐capacity areas.[27]
We hypothesize that the association between resource utilization and patient satisfaction could arise because patients (1) perceive that a doctor who allows them to stay longer in the hospital or who performs additional testing cares more about their well‐being and (2) feel more strongly that their concerns are being heard and addressed by their physicians. A systematic review of primary care patients identified many studies that found a positive association between meeting patient expectations and satisfaction with care, but also suggested that although patients frequently expect information, physicians misperceive this as an expectation of specific action.[28] A separate systematic review found that patient education in the form of decision aids can help patients develop more reasonable expectations and reduce utilization of certain discretionary procedures such as elective surgeries and prostate‐specific antigen testing.[29]
We did not specifically address clinical outcomes in our analysis because the clinical outcomes on which CMS currently adjusts VBP reimbursement focus on 30‐day mortality for specific diagnoses, nosocomial infections, and iatrogenic events.[30] Our data include only returned surveys from living patients, and it is likely that 30‐day mortality was similar throughout all subsets of patients. Additionally, the nosocomial and iatrogenic outcome measures used by CMS are sufficiently rare on the general floors that they are unlikely to have significantly influenced our results.[31]
Our study has several strengths. Nearly all medical and surgical patient surveys returned during the study period were included, and therefore our calculations are likely to accurately reflect the Summary Star Rating that would have been assigned for the period. Second, the large sample size helps attenuate potential differences in commonly used outcome metrics. Third, by adjusting for a variety of demographic and clinical variables, we were able to decrease the likelihood of unidentified confounders.
Notably, we identified 38 (0.4%) surveys returned for patients under 18 years of age at admission. These surveys were included in our analysis because, to the best of our knowledge, they would have existed in the pool of surveys CMS could have used to assign a Summary Star Rating.
Our study also has limitations. First, geographically diverse data are needed to ensure generalizability. Second, we used the Charlson‐Deyo comorbidity index to describe the degree of illness for each patient. This index represents a patient's total illness burden but may not capture the severity of the patient's current illness compared with that of another patient. Third, we selected variables we felt were most likely to be associated with patient experience, but unidentified confounding remains possible. Fourth, attendings caring for ICU patients fall within the Division of Critical Care/Pulmonary Medicine; therefore, we may have inadvertently placed patients into the ICU cohort who received a pulmonary/critical care consult on the general floors. Fifth, our data describe associations only for patients who returned surveys. Although there may be inherent biases among patients who return surveys, HCAHPS survey responses are what CMS uses to determine a hospital's overall satisfaction score.
CONCLUSION
For patients who return HCAHPS surveys, resource utilization may be positively associated with a hospital's Summary Star Rating. These data suggest that hospitals with higher per‐patient expenditures may receive higher Summary Star Ratings, which could result in hospitals with higher per‐patient resource utilization appearing more attractive to healthcare consumers. Future studies should attempt to confirm our findings at other institutions and to determine causative factors.
Acknowledgements
The authors thank Jason Machan, PhD (Department of Orthopedics and Surgery, Warren Alpert Medical School, Brown University, Providence, Rhode Island) for his help with study design, and Ms. Brenda Foster (data analyst, University of Rochester Medical Center, Rochester, NY) for her help with data collection.
Disclosures: Nothing to report.
- Redesigning physician compensation and improving ED performance. Healthc Financ Manage. 2011;65(6):114–117.
- QualityNet. Available at: https://www.qualitynet.org/dcs/ContentServer?c=Page97(13):1041–1048.
- Factors determining inpatient satisfaction with care. Soc Sci Med. 2002;54(4):493–504.
- Patient satisfaction revisited: a multilevel approach. Soc Sci Med. 2009;69(1):68–75.
- Predictors of patient satisfaction with hospital health care. BMC Health Serv Res. 2006;6:102.
- The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405–411.
- Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41–48.
- Becker's Infection Control and Clinical Quality. Star Ratings go live on Hospital Compare: how many hospitals got 5 stars? Available at: http://www.beckershospitalreview.com/quality/star‐ratings‐go‐live‐on‐hospital‐compare‐how‐many‐hospitals‐got‐5‐stars.html. Published April 16, 2015. Accessed October 5, 2015.
- Adapting a clinical comorbidity index for use with ICD‐9‐CM administrative databases. J Clin Epidemiol. 1992;45(6):613–619.
- Patient satisfaction and quality of surgical care in US hospitals. Ann Surg. 2015;261(1):2–8.
- Should health care providers be accountable for patients' care experiences? J Gen Intern Med. 2015;30(2):253–256.
- Unmet expectations for care and the patient‐physician relationship. J Gen Intern Med. 2002;17(11):817–824.
- Do unmet expectations for specific tests, referrals, and new medications reduce patients' satisfaction? J Gen Intern Med. 2004;19(11):1080–1087.
- Request fulfillment in office practice: antecedents and relationship to outcomes. Med Care. 2002;40(1):38–51.
- Factors associated with patient satisfaction with care among dermatological outpatients. Br J Dermatol. 2001;145(4):617–623.
- Patient expectations of emergency department care: phase II—a cross‐sectional survey. CJEM. 2006;8(3):148–157.
- Patients' perspectives on ideal physician behaviors. Mayo Clin Proc. 2006;81(3):338–344.
- What do people want from their health care? A qualitative study. J Participat Med. 2015;18:e10.
- Evaluations of care by adults following a denial of an advertisement‐related prescription drug request: the role of expectations, symptom severity, and physician communication style. Soc Sci Med. 2006;62(4):888–899.
- Getting to “no”: strategies primary care physicians use to deny patient requests. Arch Intern Med. 2010;170(4):381–388.
- The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138(4):273–287.
- The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003;138(4):288–298.
- Associations among hospital capacity, utilization, and mortality of US Medicare beneficiaries, controlling for sociodemographic factors. Health Serv Res. 2000;34(6):1351–1362.
- Regional variations in health care intensity and physician perceptions of quality of care. Ann Intern Med. 2006;144(9):641–649.
- Visit‐specific expectations and patient‐centered outcomes: a literature review. Arch Fam Med. 2000;9(10):1148–1155.
- Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2014;1:CD001431.
- Centers for Medicare and Medicaid Services. Hospital Compare. Outcome domain. Available at: https://www.medicare.gov/hospitalcompare/data/outcome‐domain.html. Accessed October 5, 2015.
- Centers for Disease Control and Prevention. 2013 national and state healthcare‐associated infections progress report. Available at: www.cdc.gov/hai/progress‐report/index.html. Accessed October 5, 2015.
Improving Admission Process Efficiency
Maintaining high‐quality patient care, optimizing patient safety, and providing adequate trainee supervision have recently been areas of debate in medical education, and many physicians remain concerned that excessive regulation and duty hour restrictions may prevent residents from obtaining sufficient experience and developing an appropriate sense of autonomy.[1, 2, 3, 4] However, pediatric hospital medicine (PHM) has seen dramatic increases in evening and nighttime in‐house attending coverage, and the trend is expected to continue.[5, 6] Whether for financial, educational, or patient‐centered reasons, increased in‐house attending coverage in an academic medical setting, almost by definition, increases direct resident supervision.[7]
Increased supervision may result in better educational outcomes,[8] but many forces, such as night float systems and electronic medical records (EMRs), pull residents away from the bedside, leaving them with fewer opportunities to make decisions and a reduced sense of personal responsibility and patient ownership. Experiential learning is of great value in medical training, and without this, residents may exit their training with less confidence and competence, only rarely having been able to make important medical decisions on their own.[9, 10]
Counter to the shift toward increased supervision, we recently amended our process for pediatric admissions to the PHM service by transitioning from mandatory to on‐demand attending input during the admissions process. We hypothesized that this change would improve the efficiency of the admission process by encouraging residents to develop an increased sense of patient ownership, and would not significantly impact patient care.
METHODS
Setting
This cohort study was conducted at the Golisano Children's Hospital (GCH) at the University of Rochester in Rochester, New York. The pediatric residency program at this tertiary care center includes 48 pediatric residents and 21 medicine‐pediatrics residents. The PHM division, composed of 8 pediatric hospitalists, provides care to approximately one‐third of the children with medical illnesses admitted to GCH. During the daytime, PHM attendings provide in‐house supervision for 2 resident teams, each consisting of a senior resident and 2 interns. At night, PHM attendings take calls from home. Residents are encouraged to contact attendings, available by cell phone and pager, with questions or concerns regarding patient care. The institutional review board of the University of Rochester Medical Center approved this study, and informed consent was waived.
Process Change
Prior to the change, a pediatric emergency department (ED) provider at GCH directly contacted the PHM attending for all admissions to the PHM service (Figure 1). If the PHM attending accepted the admission, the ED provider then notified the pediatric admitting officer (PAO), a third‐year pediatric or fourth‐year medicine‐pediatrics resident, who either performed or delegated the admission duties (eg, history and physical exam, admission orders).
On June 18, 2012, a new process for pediatric admissions was implemented (Figure 1). The ED provider now called the PAO, and not the attending, to discuss an admission to the PHM service. The PAO was empowered to accept the patient on behalf of the PHM attending, and perform or delegate the admission duties. During daytime hours (7:00 am‐5:00 pm), the PAO was expected to alert the PHM attending of the admission to allow the attending to see the patient on the day of admission. The PHM attending discussed the case with the admitting resident after the resident had an opportunity to assess the patient and formulate a management plan. During evening hours (5:00 pm‐10:00 pm), the admitting resident was expected to contact the PHM attending on call after evaluating the patient and developing a plan. Overnight (10:00 pm‐7:00 am), the PAO was given discretion as to whether she/he needed to contact the PHM attending on call; the PHM service attending then saw the patient in the morning. Residents were strongly encouraged to call the PHM attending with any questions or concerns or if they did not feel an admission was appropriate to the PHM service.
Study Population
The study population included all patients <19 years of age admitted to the PHM service from the ED. The pre- and post-intervention cohorts included patients admitted from July 1, 2011 to September 30, 2011 and from July 1, 2012 to September 30, 2012, respectively. These dates were chosen because residents are least experienced in the summer months, so we predicted that any difference between the admission processes would be greatest during this time. Patients who were admitted directly from an outside facility, an office, or home, or who were transferred from another service within GCH, were excluded. Patients were identified from administrative databases.
Data Collection
Date and time of admission, severity of illness (SOI) scores, and risk of mortality (ROM) scores were obtained from the administrative dataset. The EMR was then used to extract the following variables: gender; date and time of the ED provider's admission request and first inpatient resident order; date and time of patient discharge, defined as the time the after‐visit summary was finalized by an inpatient provider; and the number of rapid response team (RRT) activations within 24 hours of the first inpatient resident order. The order time difference was calculated by subtracting the date and time of the ED provider admission request from the first inpatient order. Cases in which the order time difference was negative were excluded from the order time analysis due to the possibility that some extenuating circumstance for these patients, not related to the admission process, caused the early inpatient order. Length of stay (LOS) was calculated as the difference between the date and time of ED admission request and date and time of patient discharge.
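The two interval calculations described above can be sketched as follows. This is an illustrative reconstruction in Python, not the authors' actual code; the timestamps and function names are hypothetical.

```python
from datetime import datetime

def interval_minutes(start, end):
    """Elapsed minutes between two timestamps."""
    return (end - start).total_seconds() / 60

def order_time_difference(ed_request, first_order):
    """Minutes from the ED admission request to the first inpatient
    resident order; negative differences were excluded from the
    order time analysis, so return None for them."""
    delta = interval_minutes(ed_request, first_order)
    return None if delta < 0 else delta

# Hypothetical timestamps for one admission
ed_request = datetime(2012, 7, 4, 21, 15)   # ED provider requests admission
first_order = datetime(2012, 7, 4, 22, 17)  # first inpatient resident order
discharge = datetime(2012, 7, 6, 17, 15)    # after-visit summary finalized

order_time_difference(ed_request, first_order)  # 62.0 minutes
interval_minutes(ed_request, discharge) / 60    # length of stay: 44.0 hours
```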
The first 24 hours of each admission were reviewed by 1 of 3 PHM attending investigators. No reviewer evaluated a chart for which he had cosigned the admission note. Charts were assessed to determine whether a reasonable standard of care (SOC) was provided by the inpatient resident during admission. For instances in which the first investigator felt SOC was not provided by the resident, the chart was reviewed by a second investigator. If the 2 investigators disagreed, a third PHM attending determined the majority opinion. Due to the nature of the data collected, it was not possible to blind reviewers.
PHM attending investigators also assessed how often the inpatient resident's antibiotic choice was changed by the admitting PHM attending. This evaluation excluded topical antibiotics and antibiotics not related to the admitting diagnosis (eg, continuation of outpatient antibiotics for otitis media). A change in antibiotics was defined as a change in class or within a class, or the initiation or discontinuation of an antibiotic by the attending. Switching the route of administration was considered a change unless it was done as part of the transition to discharge. Antibiotic choice was still considered in agreement if the PHM attending made a change based on new patient information that was not available to the admitting resident, provided it could be reasonably concluded that the attending would otherwise have agreed with the original choice. If this determination could not be made, antibiotic agreement was classified as unknown. Data regarding antibiotic agreement were analyzed in 2 ways. The first analysis included all patients for whom agreement could be determined; patients who were not prescribed an antibiotic by either the resident or the attending were counted as having antibiotic agreement. The second analysis included only the patients for whom an antibiotic was started by the inpatient resident or admitting attending.
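The agreement rules above amount to a small decision procedure. A minimal sketch, with a hypothetical function name and inputs that are not part of the authors' instrument:

```python
def classify_agreement(attending_changed, change_from_new_info_only=None):
    """Classify antibiotic agreement for one admission.

    attending_changed: True if the attending changed class, changed within
        a class, or started or stopped an antibiotic (route switches count
        unless made as part of the transition to discharge).
    change_from_new_info_only: for changed cases, True if the change was
        driven solely by information unavailable to the admitting resident
        and the attending would otherwise have agreed; False if not;
        None if this could not be determined.
    """
    if not attending_changed:
        return "agreement"   # includes patients never prescribed an antibiotic
    if change_from_new_info_only is True:
        return "agreement"   # change reflects new data, not disagreement
    if change_from_new_info_only is None:
        return "unknown"     # determination could not be made
    return "changed"

classify_agreement(False)        # "agreement"
classify_agreement(True, True)   # "agreement"
classify_agreement(True, None)   # "unknown"
classify_agreement(True, False)  # "changed"
```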
Finally, RRT activations within the first 24 hours of admission in the 2012 cohort were evaluated to determine whether the RRT could have been prevented by the original admission process. This determination was made via majority opinion of 3 PHM attendings who each independently reviewed the cases.
Statistical Analysis
The distributions of continuous variables (eg, order time difference, LOS) and the ordinal variables (ROM and SOI) were compared using Wilcoxon rank sum tests. χ2 tests or Fisher exact tests were used to assess the differences in categorical variables (eg, SOC, gender). All tests were 2-sided, and the significance level was set at 0.05. Analyses were conducted using the SAS statistical package version 9.3 (SAS Institute Inc., Cary, NC) and SPSS version 21 (IBM/SPSS, Armonk, NY).
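As an illustration of the tests named above, here is a sketch in Python with SciPy (the study itself used SAS and SPSS); the cohort values for the rank sum test are hypothetical, while the 2x2 table uses the standard-of-care counts reported in the results:

```python
from scipy.stats import ranksums, fisher_exact

# Wilcoxon rank sum test for a continuous variable such as the order time
# difference (hypothetical minute values for each cohort)
pre_2011 = [123, 150, 70, 188, 95, 140]
post_2012 = [62, 30, 105, 45, 58, 90]
stat, p_wilcoxon = ranksums(pre_2011, post_2012)  # two-sided by default

# Fisher exact test for a sparse categorical variable:
# 2011: 2 of 182 admissions did not meet SOC; 2012: 3 of 210 did not
table = [[2, 180], [3, 207]]
odds_ratio, p_fisher = fisher_exact(table)  # the paper reports P = 1
```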
RESULTS
The initial search identified 532 admissions. Of these, 140 were excluded (72 were via route other than the ED, 44 were not admitted to PHM, 14 were outside the study period, and 10 did not meet age criteria). Therefore, 182 admissions in the 2011 cohort and 210 admissions in the 2012 cohort were included. For all patients in the 2012 cohort, the correct admission process was followed.
Demographic characteristics between cohorts were similar (Table 1). Data for ROM and SOI were available for 141 (78%) 2011 patients and for 169 (81%) 2012 patients. The distribution of patients over the study months differed between cohorts. Age, gender, ROM, and SOI were not significantly different.
Variable | 2011 | 2012 | P Value |
---|---|---|---|
Male gender, n (%) | 107 (59) | 105 (50) | 0.082 |
Median age, y (IQR) | 2 (0–10) | 2 (0–7) | 0.689 |
Month admitted, n (%) | | | 0.002 |
July | 60 (33) | 87 (41) | |
August | 57 (31) | 81 (39) | |
September | 65 (36) | 42 (20) | |
Nighttime admission, n (%)* | 71 (39) | 90 (43) | 0.440 |
Risk of mortality, n (%) | | | 0.910 |
1, lowest risk | 114 (81) | 138 (82) | |
2 | 22 (16) | 23 (14) | |
3 | 5 (4) | 6 (4) | |
4, highest risk | 0 (0) | 2 (1) | |
Severity of illness, n (%) | | | 0.095 |
1, lowest severity | 60 (43) | 86 (51) | |
2 | 54 (38) | 62 (37) | |
3 | 25 (18) | 15 (9) | |
4, highest severity | 2 (1) | 6 (4) | |
The median time from the ED provider admission request to the first inpatient resident order was roughly half as long in 2012 as in 2011 (62 vs 123 minutes, P<0.001) (Table 2). There were 12 cases in 2012 and 2 cases in 2011 in which the inpatient order came prior to the ED admission request; these were excluded from the order time difference analysis. LOS was not significantly different between groups (P=0.348). There were no differences in the frequency of antibiotic changes when all patients were considered or in the subgroup in whom antibiotics were prescribed by either the resident or attending. The number of cases in which the admitting resident's plan was deemed not to have met standard of care was small and not significantly different between cohorts (P=1). None of these patients experienced harm as a result, and in all cases, SOC was determined to have been provided by the admitting PHM attending. The frequency of RRT calls within the first 24 hours of admission on PHM patients was not significantly different (P=0.114).
Variable | 2011 | 2012 | P Value |
---|---|---|---|
Time from admission decision to first inpatient order, min, median (IQR)a | 123 (70–188) | 62 (30–105) | <0.001 |
Length of stay, h, median (IQR)b | 44 (31–67) | 41 (22–71) | 0.348 |
Change by attending to resident's antibiotic choice in all patients, n (%) | 13/182 (7) | 18/210 (9) | 0.617 |
Change by attending to resident's antibiotic choice in patients who received antibiotics, n (%) | 13/97 (13) | 18/96 (19) | 0.312 |
Resident met standard of care, n (%) | 180/182 (99) | 207/210 (99) | 1 |
RRT called within first 24 hours, n (%) | 2/182 (1) | 8/210 (4) | 0.114 |
When only patients admitted during the night in 2011 and 2012 were compared, results were consistent with the overall finding that there was a shorter time to inpatient admission order without a difference in other studied variables (Table 3).
Variable | 2011 | 2012 | P Value |
---|---|---|---|
Time from admission decision to first inpatient order, min, median (IQR)ab | 90 (40–151) | 42 (17–67) | 0.002 |
Length of stay, h, median (IQR)b | 53 (34–61) | 36 (17–69) | 0.307 |
Change by attending to resident's antibiotic choice in all patients, n (%) | 7/70 (10) | 7/88 (8) | 1 |
Resident met standard of care, n (%) | 70/71 (99) | 88/90 (98) | 1 |
RRT called within first 24 hours, n (%) | 2/71 (3) | 6/90 (7) | 0.468 |
DISCUSSION
The purpose of this study was to evaluate an admission process that removed an ineffective method of attending oversight and gave residents an opportunity to develop patient care plans prior to attending input. The key change from the original process was removing the step in which the ED provider contacted the PHM attending for new admissions, thus eliminating mandatory inpatient attending input, removing an impediment to workflow, and empowering inpatient pediatric residents to assess new patients and develop management plans. Our data show that the time between the ED admission request and the inpatient resident's first order fell by roughly an hour, indicating a more efficient admission process. Although one might expect that eliminating a single phone call would shorten this time by a few minutes, that alone cannot account for the extent of the difference we found. We postulate that an increased sense of accountability motivated inpatient residents to evaluate patients and begin management sooner, a topic that requires further exploration.
A more efficient admission process benefits emergency medicine residents and other ED providers as well. It is well documented that ED crowding is associated with decreased quality of care,[11, 12] and ED efficiency is receiving increased attention with newly reportable quality metrics such as Admit Decision Time to Emergency Department Departure Time for Admitted Patients.[13]
Our data do not diminish the importance of hospitalists in patient care, as evidenced by the fact that PHM attendings continued to frequently amend the residents' antibiotic choice (the only variable we evaluated in terms of change in plan) and recognized several cases in which the residents' plan did not meet standard of care. Furthermore, attendings continued to be available by phone and pager for guidance and education when needed or requested by the residents. Instead, our data show that removing mandated attending input at the time of admission did not significantly impact major patient outcomes, which may be partly attributable to the general safety of the inpatient pediatric wards.[14, 15] In our study, a comprehensive analysis of patient harm was not possible given the variables collected and the infrequency with which SOC was not met or RRTs were called. Furthermore, our residency program continues to comply with national pediatric residency requirements for nighttime supervision.[7]
Our PHM division, which had previously allocated 2 hours of attending clinical time per call night, now averages <15 minutes. These data conflict with the current trend in PHM toward more, rather than less, direct attending oversight. Many PHM divisions have moved toward 24/7 in-house coverage,[5] a situation that often results in shift work and multiple handoffs. Removing the in-house attending overnight would allow the rapidly growing PHM subspecialty to allocate hospitalists elsewhere depending on their scholarly needs, particularly as divisions seek to become increasingly involved in medical education, research, and hospital leadership.[16, 17] Although one might posit a financial benefit to having in-house attendings determine the appropriateness of an admission overnight, we identified no case in which an insurer denied an admission.
The safety equivalence of in-house versus on-call attending coverage is poorly studied in PHM. However, even in intensive care units, where the majority of morbidity and mortality occurs, it is unclear that the presence of an in-house attending, let alone a mandated phone call, improves survival. One prospective trial in the critical care setting found no difference in patient outcomes between mandated in-house attending involvement and optional attending availability by phone.[18] Furthermore, several studies have found no association between time of admission and mortality, suggesting that nighttime admissions are not so acute as to specifically require in-house coverage.[19, 20]
One adult study of nocturnists showed that residents felt they had more contact with attendings who were in-house than with attendings taking calls from home.[21] However, when residents were asked why they did not contact the attending, the only differences reported for at-home attendings were that residents were less likely to know whom to call and were hesitant to wake the attending.
This study had several limitations. First, we could not effectively blind reviewers, a salient point given that the reviewers benefited from the new system through a reduced nighttime workload. We attempted to minimize this bias by employing multiple independent evaluations followed by group consensus whenever possible. Second, even though 3 hospitalists independently reviewed each 2012 RRT to determine whether it was preventable by the prior system, this task was prone to retrospective bias. Third, there was a significant difference in the month of admission between cohorts. Rather than biasing toward our observed time difference, the fact that more patients were admitted in July 2012, the beginning of the academic year, may have decreased our observed difference given that residents were less experienced. Fourth, this study used certain measurable outcomes as proxies for quality of care and patient harm and was likely underpowered to detect a true difference in some of the more infrequent variables. Furthermore, we did not evaluate other potential harms, such as cost. Fifth, we did not evaluate whether the new process changed ED provider behavior (eg, an ED provider may wait longer to request admission overnight given that the PHM attending is not mandated to provide input until the morning). Finally, although LOS was used as a balancing measure, it would likely have taken major events or omissions during the admission process to change it significantly; therefore, the lack of statistical difference in this metric does not necessarily imply that more subtle aspects of care were the same between groups. We also chose not to include readmission rate for this reason, as any change could not conclusively be attributed to the new admission process.
CONCLUSION
Increasing resident autonomy by removing mandated input during PHM admissions makes the process more efficient and results in no significant changes to major patient outcomes. These data may be used by rapidly growing PHM divisions to redefine faculty clinical responsibilities, particularly at night.
ACKNOWLEDGMENTS
Disclosures: This project was supported by the University of Rochester CTSA award number UL1 TR000042 from the National Center for Advancing Translational Sciences of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no conflicts of interest.
- Accreditation Council for Graduate Medical Education Task Force on Quality Care and Professionalism. The ACGME 2011 duty hour standards: enhancing quality of care, supervision, and resident professional development. Chicago, IL: Accreditation Council for Graduate Medical Education; 2011. Available at: http://www.acgme.org/acgmeweb/Portals/0/PDFs/jgme‐monograph[1].pdf. Accessed December 18, 2013.
- Impact of reduction in working hours for doctors in training on postgraduate medical education and patients' outcomes: systematic review. BMJ. 2011;342:d1580.
- ACGME 2011 duty‐hour guidelines: consequences expected by radiology residency directors and chief residents. J Am Coll Radiol. 2012;9(11):820–827.
- Justifying patient risks associated with medical education. JAMA. 2007;298(9):1046–1048.
- Survey of academic pediatric hospitalist programs in the U.S.: organizational, administrative and financial factors. J Hosp Med. 2013;8(6):285–291.
- Inpatient staffing within pediatric residency programs: work hour restrictions and the evolving role of the pediatric hospitalist. J Hosp Med. 2012;7(4):299–303.
- ACGME Program Requirements for Graduate Medical Education in Pediatrics. Approved September 30, 2012; effective July 1, 2013. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/2013‐PR‐FAQ‐PIF/320_pediatrics_07012013.pdf. Accessed September 17, 2013.
- A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87(4):428–442.
- Twenty‐four‐hour intensivist staffing in teaching hospitals: tension between safety today and safety tomorrow. Chest. 2012;141(5):1315–1320.
- Medical education on the brink: 62 years of front‐line observations and opinions. Tex Heart Inst J. 2012;39(3):322–329.
- Emergency department crowding is associated with poor care for patients with severe pain. Ann Emerg Med. 2008;51:6–7.
- The effect of emergency department crowding on clinically oriented outcomes. Acad Emerg Med. 2009;16(1):1–10.
- The Specifications Manual for National Hospital Inpatient Quality Measures. A collaboration of the Centers for Medicare & Medicaid Services and The Joint Commission.
- Effect of a rapid response team on hospital‐wide mortality and code rates outside the ICU in a children's hospital. JAMA. 2007;298(19):2267–2274.
- Section on Hospital Medicine. Guiding principles for Pediatric Hospital Medicine programs. Pediatrics. 2013;132(4):782–786.
- SHM fact sheet: about hospital medicine. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Media_Kit
- A randomized trial of nighttime physician staffing in an intensive care unit. N Engl J Med. 2013;368(23):2201–2209.
- Association between time of admission to the ICU and mortality: a systematic review and meta‐analysis. Chest. 2010;138(1):68–75.
- After‐hours admissions are not associated with increased risk‐adjusted mortality in pediatric intensive care. Intensive Care Med. 2008;34(1):148–151.
- Effects of increased overnight supervision on resident education, decision‐making, and autonomy. J Hosp Med. 2012;7(8):606–610.
Maintaining high-quality patient care, optimizing patient safety, and providing adequate trainee supervision have recently been areas of debate in medical education, and many physicians remain concerned that excessive regulation and duty hour restrictions may prevent residents from obtaining sufficient experience and developing an appropriate sense of autonomy.[1, 2, 3, 4] However, pediatric hospital medicine (PHM) has seen dramatic increases in evening and nighttime in-house attending coverage, and the trend is expected to continue.[5, 6] Whether for financial, educational, or patient-centered reasons, increased in-house attending coverage in an academic medical setting, almost by definition, increases direct resident supervision.[7]
Increased supervision may result in better educational outcomes,[8] but many forces, such as night float systems and electronic medical records (EMRs), pull residents away from the bedside, leaving them with fewer opportunities to make decisions and a reduced sense of personal responsibility and patient ownership. Experiential learning is of great value in medical training, and without it, residents may exit their training with less confidence and competence, having only rarely been able to make important medical decisions on their own.[9, 10]
Counter to the shift toward increased supervision, we recently amended our process for pediatric admissions to the PHM service by transitioning from mandatory to on-demand attending input during the admission process. We hypothesized that this would improve efficiency by encouraging residents to develop an increased sense of patient ownership and would not significantly impact patient care.
Safety equivalence of an in‐house to on‐call attending is poorly studied in PHM. However, even in intensive care units, where the majority of morbidity and mortality occur, it is unclear that the presence of an attending, let alone mandating phone calls, positively impacts survival. One prospective trial failed to demonstrate a difference in patient outcomes in the critical care setting when comparing mandated attending in‐house involvement to optional attending availability by phone.[18] Furthermore, several studies have found no association with time of admission and mortality, implying there is no criticality specifically requiring nighttime coverage.[19, 20]
One adult study of nocturnists showed that residents felt they had more contact with attendings who were in‐house than attendings taking home calls.[21] However, when the residents were asked why they did not contact the attending, the only difference between at‐home and in‐house attendings was that for attendings available by phone, residents were less likely to know who to call and were hesitant to wake the attending.
This study had several limitations. First, we could not effectively blind reviewers; a salient point given that the reviewers benefited from the new system with a reduced nighttime workload. We attempted to minimize this bias by employing multiple independent evaluations followed by group consensus whenever possible. Second, even though we had 3 hospitalists independently review each 2012 RRT to determine whether it was preventable by the prior system, this task was prone to retrospective bias. Third, there was a significant difference in the month of admission between cohorts. Rather than biasing toward our observed time difference, the fact that more patients were admitted in July 2012the beginning of the academic yearmay have decreased our observed difference given that residents were less experienced. Forth, this study used certain measurable outcomes as proxies for quality of care and patient harm and was likely underpowered to truly detect a difference in some of the more infrequent variables. Furthermore, we did not evaluate other potential harms, such as cost. Fifth, we did not evaluate whether or not the new process changed ED provider behavior (ie, an ED provider may wait longer to request admission overnight given that the PHM attending is not mandated to provide input until the morning). Finally, although LOS was used as a balancing measure, it would likely have taken major events or omissions during the admission process to cause it to change significantly, and therefore the lack of statistical difference in this metric does not necessarily imply that more subtle aspects of care were the same between groups. We also chose not to include readmission rate for this reason, as any change could not conclusively be attributed to the new admission process.
CONCLUSION
Increasing resident autonomy by removing mandated input during PHM admissions makes the process more efficient and results in no significant changes to major patient outcomes. These data may be used by rapidly growing PHM divisions to redefine faculty clinical responsibilities, particularly at night.
ACKNOWLEDGMENTS
Disclosures: This project was supported by the University of Rochester CTSA award number UL1 TR000042 from the National Center for Advancing Translational Sciences of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no conflicts of interest.
Maintaining high‐quality patient care, optimizing patient safety, and providing adequate trainee supervision have recently been areas of debate in medical education, and many physicians remain concerned that excessive regulation and duty hour restrictions may prevent residents from obtaining sufficient experience and developing an appropriate sense of autonomy.[1, 2, 3, 4] However, pediatric hospital medicine (PHM) has seen dramatic increases in evening and nighttime in‐house attending coverage, and the trend is expected to continue.[5, 6] Whether it be for financial, educational, or patient‐centered reasons, increased in‐house attending coverage at an academic medical setting, almost by definition, increases direct resident supervision.[7]
Increased supervision may result in better educational outcomes,[8] but many forces, such as night float systems and electronic medical records (EMRs), pull residents away from the bedside, leaving them with fewer opportunities to make decisions and a reduced sense of personal responsibility and patient ownership. Experiential learning is of great value in medical training, and without this, residents may exit their training with less confidence and competence, only rarely having been able to make important medical decisions on their own.[9, 10]
Counter to the shift toward increased supervision, we recently amended our process for pediatric admissions to the PHM service by transitioning from mandatory to on‐demand attending input during the admission process. We hypothesized that this change would improve the efficiency of the admission process by encouraging residents to develop an increased sense of patient ownership, and that it would not significantly impact patient care.
METHODS
Setting
This cohort study was conducted at the Golisano Children's Hospital (GCH) at the University of Rochester in Rochester, New York. The pediatric residency program at this tertiary care center includes 48 pediatric residents and 21 medicine–pediatrics residents. The PHM division, composed of 8 pediatric hospitalists, provides care to approximately one‐third of the children with medical illnesses admitted to GCH. During the daytime, PHM attendings provide in‐house supervision for 2 resident teams, each consisting of a senior resident and 2 interns. At night, PHM attendings take calls from home. Residents are encouraged to contact attendings, available by cell phone and pager, with questions or concerns regarding patient care. The institutional review board of the University of Rochester Medical Center approved this study and waived the requirement for informed consent.
Process Change
Prior to the change, a pediatric emergency department (ED) provider at GCH directly contacted the PHM attending for all admissions to the PHM service (Figure 1). If the PHM attending accepted the admission, the ED provider then notified the pediatric admitting officer (PAO), a third‐year pediatric or fourth‐year medicine–pediatrics resident, who either performed or delegated the admission duties (eg, history and physical exam, admission orders).
On June 18, 2012, a new process for pediatric admissions was implemented (Figure 1). The ED provider now called the PAO, and not the attending, to discuss an admission to the PHM service. The PAO was empowered to accept the patient on behalf of the PHM attending, and perform or delegate the admission duties. During daytime hours (7:00 am–5:00 pm), the PAO was expected to alert the PHM attending of the admission to allow the attending to see the patient on the day of admission. The PHM attending discussed the case with the admitting resident after the resident had an opportunity to assess the patient and formulate a management plan. During evening hours (5:00 pm–10:00 pm), the admitting resident was expected to contact the PHM attending on call after evaluating the patient and developing a plan. Overnight (10:00 pm–7:00 am), the PAO was given discretion as to whether she/he needed to contact the PHM attending on call; the PHM service attending then saw the patient in the morning. Residents were strongly encouraged to call the PHM attending with any questions or concerns or if they did not feel an admission was appropriate to the PHM service.
Study Population
The study population included all patients <19 years of age admitted to the PHM service from the ED. The pre‐ and post‐intervention cohorts included patients admitted from July 1, 2011 to September 30, 2011 and July 1, 2012 to September 30, 2012, respectively. These dates were chosen because residents are least experienced in the summer months, and hence we predicted the greatest disparity during this time. Patients who were admitted directly (via transport from an outside facility, from an office, or from home) or who were transferred from another service within GCH were excluded. Patients were identified from administrative databases.
Data Collection
Date and time of admission, severity of illness (SOI) scores, and risk of mortality (ROM) scores were obtained from the administrative dataset. The EMR was then used to extract the following variables: gender; date and time of the ED provider's admission request and first inpatient resident order; date and time of patient discharge, defined as the time the after‐visit summary was finalized by an inpatient provider; and the number of rapid response team (RRT) activations within 24 hours of the first inpatient resident order. The order time difference was calculated by subtracting the date and time of the ED provider admission request from the first inpatient order. Cases in which the order time difference was negative were excluded from the order time analysis due to the possibility that some extenuating circumstance for these patients, not related to the admission process, caused the early inpatient order. Length of stay (LOS) was calculated as the difference between the date and time of ED admission request and date and time of patient discharge.
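The timestamp arithmetic described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual code; the column names and sample timestamps are hypothetical.

```python
# Sketch of the order-time-difference and LOS calculations described above.
# Column names and timestamps are hypothetical examples, not study data.
import pandas as pd

records = pd.DataFrame(
    {
        "ed_request": pd.to_datetime(
            ["2012-07-01 22:10", "2012-07-02 01:30", "2012-07-02 03:00"]
        ),
        "first_inpatient_order": pd.to_datetime(
            ["2012-07-01 22:55", "2012-07-02 02:05", "2012-07-02 02:40"]
        ),
        "discharge": pd.to_datetime(
            ["2012-07-03 14:00", "2012-07-03 09:00", "2012-07-04 11:00"]
        ),
    }
)

# Order time difference in minutes; negative values (inpatient order placed
# before the ED request) are excluded, as in the study.
records["order_diff_min"] = (
    records["first_inpatient_order"] - records["ed_request"]
).dt.total_seconds() / 60
order_times = records.loc[records["order_diff_min"] >= 0, "order_diff_min"]

# Length of stay in hours, from ED admission request to discharge.
records["los_hours"] = (
    records["discharge"] - records["ed_request"]
).dt.total_seconds() / 3600

print(order_times.median())  # third row (-20 min) is excluded
```

Note that the third example row is dropped from the order-time analysis because its difference is negative, mirroring the study's exclusion rule.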
The first 24 hours of each admission were reviewed independently by 3 PHM attending investigators. No reviewer evaluated a chart for which he had cosigned the admission note. Charts were assessed to determine whether a reasonable standard of care (SOC) was provided by the inpatient resident during admission. For instances in which SOC was not felt to have been provided by the resident, the chart was reviewed by a second investigator. If the 2 investigators disagreed, a third PHM attending reviewed the chart to determine the majority opinion. Due to the nature of the data collected, it was not possible to blind reviewers.
PHM attending investigators also assessed how often the inpatient resident's antibiotic choice was changed by the admitting PHM attending. This evaluation excluded topical antibiotics and antibiotics not related to the admitting diagnosis (eg, continuation of outpatient antibiotics for otitis media). A change in antibiotics was defined as a change in class or a change within classes, initiation, or discontinuation of an antibiotic by the attending. Switching the route of administration was considered a change if it was not done as part of the transition to discharge. Antibiotic choice was considered in agreement if a change was made by the PHM attending based on new patient information that was not available to the admitting inpatient resident, provided it could be reasonably concluded that the attending would otherwise have agreed with the original choice. If this determination could not be made, the antibiotic agreement was classified as unknown. Data regarding antibiotic agreement were analyzed in 2 ways. The first included all patients for whom agreement could be determined. For this analysis, if a patient was not prescribed an antibiotic by the resident or attending, there was considered to have been antibiotic agreement. The second analysis included only the patients for whom an antibiotic was started by the inpatient resident or admitting attending.
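The agreement rules above can be summarized as a small decision function. This is only a sketch of the classification logic; in the study these criteria were applied by manual chart review, and the parameter names here are invented for illustration.

```python
# Hypothetical encoding of the antibiotic-agreement criteria described above.
# The study applied these rules by manual chart review, not programmatically.
def antibiotic_agreement(change_made, change_due_to_new_info=False,
                         would_have_agreed_otherwise=None):
    """Classify attending-resident antibiotic agreement.

    Returns "agree", "disagree", or "unknown".
    """
    if not change_made:
        # No change by the attending (including no antibiotic at all)
        # counts as agreement.
        return "agree"
    if change_due_to_new_info:
        # A change driven by information unavailable to the resident is
        # agreement only if the attending would otherwise have concurred;
        # if that cannot be determined, the case is classified as unknown.
        if would_have_agreed_otherwise is None:
            return "unknown"
        return "agree" if would_have_agreed_otherwise else "disagree"
    return "disagree"

print(antibiotic_agreement(change_made=False))  # agree
print(antibiotic_agreement(True, change_due_to_new_info=True))  # unknown
print(antibiotic_agreement(True, change_due_to_new_info=False))  # disagree
```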
Finally, RRT activations within the first 24 hours of admission in the 2012 cohort were evaluated to determine whether the RRT could have been prevented by the original admission process. This determination was made via majority opinion of 3 PHM attendings who each independently reviewed the cases.
Statistical Analysis
The distributions of continuous variables (eg, order time difference, LOS) and ordinal variables (ROM and SOI) were compared using Wilcoxon rank sum tests. Chi‐square tests or Fisher exact tests were used to assess differences in categorical variables (eg, SOC, gender). All tests were 2‐sided, and the significance level was set at 0.05. Analyses were conducted using the SAS statistical package version 9.3 (SAS Institute Inc., Cary, NC) and SPSS version 21 (IBM/SPSS, Armonk, NY).
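The tests named above can be illustrated with scipy equivalents. The 2×2 table below uses the study's reported RRT counts (2/182 in 2011 vs 8/210 in 2012); the continuous samples are fabricated for demonstration only and are not study data.

```python
# Illustrative versions of the study's statistical tests using scipy.
# The continuous samples are fabricated; the 2x2 table uses reported counts.
from scipy import stats

# Wilcoxon rank-sum test for a continuous variable such as the order time
# difference between cohorts (values below are invented examples, minutes).
cohort_2011 = [123, 70, 188, 150, 95]
cohort_2012 = [62, 30, 105, 45, 80]
stat, p = stats.ranksums(cohort_2011, cohort_2012)

# Fisher exact test for a rare categorical outcome: RRT activations within
# 24 hours, 2/182 (2011) vs 8/210 (2012). Rows are [RRT, no RRT] per cohort.
table = [[2, 180], [8, 202]]
odds_ratio, p_rrt = stats.fisher_exact(table)
```

The Fisher exact test is the appropriate choice here because the expected cell counts for RRT activations are small; a chi‐square test would be unreliable.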
RESULTS
The initial search identified 532 admissions. Of these, 140 were excluded (72 were via route other than the ED, 44 were not admitted to PHM, 14 were outside the study period, and 10 did not meet age criteria). Therefore, 182 admissions in the 2011 cohort and 210 admissions in the 2012 cohort were included. For all patients in the 2012 cohort, the correct admission process was followed.
Demographic characteristics between cohorts were similar (Table 1). Data for ROM and SOI were available for 141 (78%) 2011 patients and for 169 (81%) 2012 patients. The distribution of patients over the study months differed between cohorts. Age, gender, ROM, and SOI were not significantly different.
Variable | 2011 | 2012 | P Value |
---|---|---|---|
Male gender, n (%) | 107 (59) | 105 (50) | 0.082 |
Median age, y (IQR) | 2 (0–10) | 2 (0–7) | 0.689 |
Month admitted, n (%) | | | 0.002 |
July | 60 (33) | 87 (41) | |
August | 57 (31) | 81 (39) | |
September | 65 (36) | 42 (20) | |
Nighttime admission, n (%)* | 71 (39) | 90 (43) | 0.440 |
Risk of mortality, n (%) | | | 0.910 |
1, lowest risk | 114 (81) | 138 (82) | |
2 | 22 (16) | 23 (14) | |
3 | 5 (4) | 6 (4) | |
4, highest risk | 0 (0) | 2 (1) | |
Severity of illness, n (%) | | | 0.095 |
1, lowest severity | 60 (43) | 86 (51) | |
2 | 54 (38) | 62 (37) | |
3 | 25 (18) | 15 (9) | |
4, highest severity | 2 (1) | 6 (4) |
The median time from the ED provider admission request to the first inpatient resident order was roughly half as long in 2012 as in 2011 (62 vs 123 minutes, P<0.001) (Table 2). There were 12 cases in 2012 and 2 cases in 2011 in which the inpatient order came prior to the ED admission request; these were excluded from the order time difference analysis. LOS was not significantly different between groups (P=0.348). There were no differences in the frequency of antibiotic changes, either when all patients were considered or in the subgroup in whom antibiotics were prescribed by the resident or attending. The number of cases in which the admitting resident's plan was deemed not to have met standard of care was small and not significantly different between cohorts (P=1). None of these patients experienced harm as a result, and in all cases, SOC was determined to have been provided by the admitting PHM attending. The frequency of RRT calls within the first 24 hours of admission on PHM patients was not significantly different (P=0.114).
Variable | 2011 | 2012 | P Value |
---|---|---|---|
Time from admission decision to first inpatient order, min, median (IQR)a | 123 (70–188) | 62 (30–105) | <0.001 |
Length of stay, h, median (IQR)b | 44 (31–67) | 41 (22–71) | 0.348 |
Change by attending to resident's antibiotic choice in all patients, n (%) | 13/182 (7) | 18/210 (9) | 0.617 |
Change by attending to resident's antibiotic choice in patients who received antibiotics, n (%) | 13/97 (13) | 18/96 (19) | 0.312 |
Resident met standard of care, n (%) | 180/182 (99) | 207/210 (99) | 1 |
RRT called within first 24 hours, n (%) | 2/182 (1) | 8/210 (4) | 0.114 |
When only patients admitted during the night in 2011 and 2012 were compared, results were consistent with the overall finding that there was a shorter time to inpatient admission order without a difference in other studied variables (Table 3).
Variable | 2011 | 2012 | P Value |
---|---|---|---|
Time from admission decision to first inpatient order, min, median (IQR)ab | 90 (40–151) | 42 (17–67) | 0.002 |
Length of stay, h, median (IQR)b | 53 (34–61) | 36 (17–69) | 0.307 |
Change by attending to resident's antibiotic choice in all patients, n (%) | 7/70 (10) | 7/88 (8) | 1 |
Resident met standard of care, n (%) | 70/71 (99) | 88/90 (98) | 1 |
RRT called within first 24 hours, n (%) | 2/71 (3) | 6/90 (7) | 0.468 |
DISCUSSION
The purpose of this study was to evaluate an admission process that removed an ineffective method of attending oversight and allowed residents an opportunity to develop patient care plans prior to attending input. The key change from the original process was removing the step in which the ED provider contacted the PHM attending for new admissions, thus eliminating mandatory inpatient attending input, removing an impediment to workflow, and empowering inpatient pediatric residents to assess new patients and develop management plans. Our data show a reduction in the time difference between the ED admission request and the inpatient resident's first order by more than an hour, indicating a more efficient admission process. Although one might expect that eliminating the act of a phone call would shorten this time by a few minutes, it cannot account for the extent of the difference we found. We postulate that an increased sense of accountability motivated inpatient residents to evaluate and begin management sooner, a topic that requires further exploration.
A more efficient admission process benefits emergency medicine residents and other ED providers as well. It is well documented that ED crowding is associated with decreased quality of care,[11, 12] and ED efficiency is receiving increased attention with newly reportable quality metrics such as Admit Decision Time to Emergency Department Departure Time for Admitted Patients.[13]
Our data do not attenuate the importance of hospitalists in patient care, as evidenced by the fact that PHM attendings continued to frequently amend the residents' antibiotic choice (the only variable we evaluated in terms of change in plan) and recognized several cases in which the residents' plan did not meet standard of care. Furthermore, attendings continued to be available by phone and pager for guidance and education when needed or requested by the residents. Instead, our data show that removing mandated attending input at the time of admission did not significantly impact major patient outcomes, which may partly be attributable to the general safety of the inpatient pediatric wards.[14, 15] In our study, a comprehensive analysis of patient harm was not possible given the variable list and the infrequency with which SOC was not met or RRTs were called. Furthermore, our residency program continues to comply with national pediatric residency requirements for nighttime supervision.[7]
Our PHM division, which had previously allocated 2 hours of attending clinical time per call night, now averages <15 minutes. These data conflict with the current trend in PHM toward more, rather than less, direct attending oversight. Many PHM divisions have moved toward 24/7 in‐house coverage,[5] a situation that often results in shiftwork and multiple handoffs. Removing the in‐house attending overnight would allow the rapidly growing PHM subspecialty to allocate hospitalists elsewhere depending on their scholarly needs, particularly as divisions seek to become increasingly involved in medical education, research, and hospital leadership.[16, 17] Although one might posit a financial benefit to having in‐house attendings determine the appropriateness of an admission overnight, we identified no case in which an insurer denied an admission.
Safety equivalence of an in‐house to an on‐call attending is poorly studied in PHM. However, even in intensive care units, where the majority of morbidity and mortality occur, it is unclear that the presence of an attending, let alone mandating phone calls, positively impacts survival. One prospective trial failed to demonstrate a difference in patient outcomes in the critical care setting when comparing mandated attending in‐house involvement to optional attending availability by phone.[18] Furthermore, several studies have found no association between time of admission and mortality, implying there is no criticality specifically requiring nighttime coverage.[19, 20]
One adult study of nocturnists showed that residents felt they had more contact with attendings who were in‐house than attendings taking home calls.[21] However, when the residents were asked why they did not contact the attending, the only difference between at‐home and in‐house attendings was that for attendings available by phone, residents were less likely to know who to call and were hesitant to wake the attending.
This study had several limitations. First, we could not effectively blind reviewers, a salient point given that the reviewers benefited from the new system with a reduced nighttime workload. We attempted to minimize this bias by employing multiple independent evaluations followed by group consensus whenever possible. Second, even though we had 3 hospitalists independently review each 2012 RRT to determine whether it was preventable by the prior system, this task was prone to retrospective bias. Third, there was a significant difference in the month of admission between cohorts. Rather than biasing toward our observed time difference, the fact that more patients were admitted in July 2012, the beginning of the academic year, may have decreased our observed difference given that residents were less experienced. Fourth, this study used certain measurable outcomes as proxies for quality of care and patient harm and was likely underpowered to truly detect a difference in some of the more infrequent variables. Furthermore, we did not evaluate other potential harms, such as cost. Fifth, we did not evaluate whether the new process changed ED provider behavior (ie, an ED provider may wait longer to request admission overnight given that the PHM attending is not mandated to provide input until the morning). Finally, although LOS was used as a balancing measure, it would likely have taken major events or omissions during the admission process to cause it to change significantly, and therefore the lack of statistical difference in this metric does not necessarily imply that more subtle aspects of care were the same between groups. We also chose not to include readmission rate for this reason, as any change could not conclusively be attributed to the new admission process.
CONCLUSION
Increasing resident autonomy by removing mandated input during PHM admissions makes the process more efficient and results in no significant changes to major patient outcomes. These data may be used by rapidly growing PHM divisions to redefine faculty clinical responsibilities, particularly at night.
ACKNOWLEDGMENTS
Disclosures: This project was supported by the University of Rochester CTSA award number UL1 TR000042 from the National Center for Advancing Translational Sciences of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no conflicts of interest.
- Accreditation Council for Graduate Medical Education Task Force on Quality Care and Professionalism. The ACGME 2011 duty hour standards: enhancing quality of care, supervision, and resident professional development. Chicago, IL: Accreditation Council for Graduate Medical Education; 2011. Available at: http://www.acgme.org/acgmeweb/Portals/0/PDFs/jgme‐monograph[1].pdf. Accessed December 18, 2013.
- Impact of reduction in working hours for doctors in training on postgraduate medical education and patients' outcomes: systematic review. BMJ. 2011;342:d1580.
- ACGME 2011 duty‐hour guidelines: consequences expected by radiology residency directors and chief residents. J Am Coll Radiol. 2012;9(11):820–827.
- Justifying patient risks associated with medical education. JAMA. 2007;298(9):1046–1048.
- Survey of academic pediatric hospitalist programs in the U.S.: organizational, administrative and financial factors. J Hosp Med. 2013;8(6):285–291.
- Inpatient staffing within pediatric residency programs: work hour restrictions and the evolving role of the pediatric hospitalist. J Hosp Med. 2012;7(4):299–303.
- ACGME Program Requirements for Graduate Medical Education in Pediatrics. ACGME approved September 30, 2012; effective July 1, 2013. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/2013‐PR‐FAQ‐PIF/320_pediatrics_07012013.pdf. Accessed September 17, 2013.
- A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87(4):428–442.
- Twenty‐four‐hour intensivist staffing in teaching hospitals: tension between safety today and safety tomorrow. Chest. 2012;141(5):1315–1320.
- Medical education on the brink: 62 years of front‐line observations and opinions. Tex Heart Inst J. 2012;39(3):322–329.
- Emergency department crowding is associated with poor care for patients with severe pain. Ann Emerg Med. 2008;51:6–7.
- The effect of emergency department crowding on clinically oriented outcomes. Acad Emerg Med. 2009;16(1):1–10.
- The Specifications Manual for National Hospital Inpatient Quality Measures. A Collaboration of the Centers for Medicare 128(1):72–78.
- Effect of a rapid response team on hospital‐wide mortality and code rates outside the ICU in a Children's Hospital. JAMA. 2007;298(19):2267–2274.
- Section on Hospital Medicine. Guiding principles for Pediatric Hospital Medicine programs. Pediatrics. 2013;132(4):782–786. SHM fact sheet: about hospital medicine. http://www.hospitalmedicine.org/AM/Template.cfm?Section=Media_Kit42(5):120–126.
- A randomized trial of nighttime physician staffing in an intensive care unit. N Engl J Med. 2013;368(23):2201–2209.
- Association between time of admission to the ICU and mortality: a systematic review and meta‐analysis. Chest. 2010;138(1):68–75.
- After‐hours admissions are not associated with increased risk‐adjusted mortality in pediatric intensive care. Intensive Care Med. 2008;34(1):148–151.
- Effects of increased overnight supervision on resident education, decision‐making, and autonomy. J Hosp Med. 2012;7(8):606–610.
© 2013 Society of Hospital Medicine