Developing a comportment and communication tool for use in hospital medicine

Regina Landis, BA
Division of Hospital Medicine, Johns Hopkins Bayview Medical Center, Johns Hopkins University School of Medicine

In 2014, there were more than 40,000 hospitalists in the United States, and approximately 20% were employed by academic medical centers.[1] Hospitalist physician groups are committed to delivering excellent patient care. However, the published literature is limited with respect to defining optimal care in hospital medicine.

Patient satisfaction surveys, such as Press Ganey (PG)[2] and Hospital Consumer Assessment of Healthcare Providers and Systems,[3] are being used to assess patients' contentment with the quality of care they receive while hospitalized. The Society of Hospital Medicine, the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] There are, however, several problems with the current methods. First, the attribution to specific providers is questionable. Second, patients' recall about the provider may be poor because surveys are sent to patients days after they return home. Third, the patients' recovery and health outcomes are likely to influence their assessment of the doctor. Finally, feedback is known to be most valuable and transformative when it is specific and given in real time. Thus, a tool that provides feedback at the encounter level should be more helpful than a tool that offers assessment at the level of the admission, particularly when it can also be delivered immediately after the data are collected.

Comportment has been used to describe both the way a person behaves and also the way she carries herself (ie, her general manner).[5] Excellent comportment and communication can serve as the foundation for delivering patient‐centered care.[6, 7, 8] Patient centeredness has been shown to improve the patient experience and clinical outcomes, including compliance with therapeutic plans.[9, 10, 11] Respectful behavior, etiquette‐based medicine, and effective communication also lay the foundation upon which the therapeutic alliance between a doctor and patient can be built.

The goal of this study was to establish a metric that could comprehensively assess a hospitalist provider's comportment and communication skills during an encounter with a hospitalized patient.

METHODS

Study Design and Setting

An observational study of hospitalist physicians was conducted between June 2013 and December 2013 at 5 hospitals in Maryland and Washington DC. Two are academic medical centers (Johns Hopkins Hospital and Johns Hopkins Bayview Medical Center [JHBMC]), and the others are community hospitals (Howard County General Hospital [HCGH], Sibley Memorial Hospital [SMC], and Suburban Hospital). These 5 hospitals, across 2 large cities, have distinct culture and leadership, each serving different populations.

Subjects

In developing a tool to measure communication and comportment, we needed to observe physician-patient encounters wherein there would be a good deal of variability in performance. During pilot testing, when following a few of the most senior and respected hospitalists, we noted encounters during which they excelled and others where they performed less optimally. Further, in following some less-experienced providers, we found that their skills were less developed and that they uniformly omitted most of the behaviors on the tool believed to be associated with optimal communication and comportment. Because of this, we decided to purposively sample the strongest clinicians at each of the 5 hospitals in hopes of seeing a range of scores on the tool.

The chiefs of hospital medicine at the 5 hospitals were contacted and asked to identify their most clinically excellent hospitalists, namely those whom they considered most clinically skilled within their groups. Because our goal was to observe the top tier (approximately 20%) of the hospitalists within each group, we asked each chief to name a specific number of physicians (eg, 3 names for 1 group with 15 hospitalists, and 8 from another group with 40 physicians). No precise definition of "most clinically excellent hospitalists" was provided to the chiefs. We believed they were well positioned to select their best clinicians because of both the subjective feedback and the objective data that flow to them. Consistent with this assumption, each chief promptly sent a list of top choices without asking any clarifying questions.

The 29 hospitalists (named by their chiefs) were in turn emailed and invited to participate in the study. All but 3 hospitalists consented to participate in the study; this resulted in a cohort of 26 who would be observed.

Tool Development

A team was assembled to develop the hospital medicine comportment and communication observation tool (HMCCOT). All team members had extensive clinical experience; several had published articles on clinical excellence or won clinical awards, and all had taught clinical skills for many years. The team's development of the HMCCOT was extensively informed by a review of the literature. The two articles that most heavily influenced the HMCCOT's development were Christmas et al.'s paper describing 7 core domains of excellence, 2 of which are intimately linked to communication and comportment,[12] and Kahn's text delineating behaviors to be performed upon entering the patient's room, termed etiquette-based medicine.[6] The team also considered prior time-motion studies in hospital medicine,[7, 13] which led to the inclusion of temporal measurements during the observations. The tool was also presented at academic conferences in the Division of General Internal Medicine at Johns Hopkins and iteratively revised based on the feedback. Feedback was also sought from members of the American Academy on Communication in Healthcare who have spent their careers studying physician-patient relationships. These methods established content validity evidence for the tool under development. The goal of the HMCCOT was to assess behaviors believed to be associated with optimal comportment and communication in hospital medicine.

The HMCCOT was pilot tested by observing JHBMC hospitalists' patient encounters, and it was iteratively revised. On multiple occasions, 2 investigators observed JHBMC hospitalists together and compared data capture and levels of agreement across all elements. Then, for formal assessment of inter-rater reliability, 2 authors observed 5 different hospitalists across 25 patient encounters; the κ coefficient was 0.91 (standard error = 0.04). This step helped to establish internal structure validity evidence for the tool.
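The inter-rater reliability assessment described above can be illustrated with Cohen's kappa, a standard agreement statistic for two raters independently marking whether each behavior occurred. This is a hedged sketch, not the study's actual code, and the ratings below are hypothetical:

```python
# Illustrative Cohen's kappa for two raters scoring the same encounter.
# Each list holds 0/1 marks for whether each observed behavior occurred.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters marked identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under chance, from each rater's marginal rates
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum((pa[k] / n) * (pb[k] / n) for k in set(rater_a) | set(rater_b))
    if expected == 1:
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical marks for ten behaviors in one encounter
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.74
```

Kappa corrects raw percent agreement for agreement expected by chance, which is why a 0.91 coefficient across 25 encounters indicates strong reliability.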

The initial version of the HMCCOT contained 36 elements, organized sequentially so that the observer could document behaviors in the order in which they were likely to occur, thereby facilitating the process and minimizing oversight. Examples of the elements include whether the encounter began with an open-ended or a closed-ended statement, whether the hospitalist introduced himself/herself, and whether the provider smiled at any point during the patient encounter.

Data Collection

One author scheduled a time to observe each hospitalist physician during their routine clinical care of patients when they were not working with medical learners. Hospitalists were naturally aware that they were being observed but were not aware of the specific data elements or behaviors that were being recorded.

The study was approved by the institutional review board at the Johns Hopkins University School of Medicine, and by each of the research review committees at HCGH, SMC, and Suburban hospitals.

Data Analysis

After data collection, all data were deidentified so that the researchers were blinded to the identities of the physicians. Respondent characteristics are presented as proportions and means. Unpaired t tests and χ2 tests were used to compare demographic information stratified by mean HMCCOT score. The survey data were analyzed using Stata statistical software version 12.1 (StataCorp LP, College Station, TX).
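The comparisons described above can be sketched in Python with SciPy rather than Stata; the group values below are hypothetical stand-ins, not study data:

```python
# Hedged sketch of the analytic approach: an unpaired t test for a
# continuous characteristic and a chi-squared test for a categorical one,
# each compared across HMCCOT score strata. Numbers are invented.
import numpy as np
from scipy import stats

# Continuous characteristic (eg, age) in the lower- vs higher-scoring groups
low_group = np.array([35, 41, 38, 33, 40])
high_group = np.array([39, 36, 42, 37, 41])
t_stat, t_p = stats.ttest_ind(low_group, high_group)  # unpaired t test

# Categorical characteristic (eg, sex) as a 2x2 contingency table;
# scipy applies Yates' continuity correction to 2x2 tables by default
table = np.array([[6, 8],   # female: lower / higher scorers
                  [7, 5]])  # male:   lower / higher scorers
chi2, chi_p, dof, expected = stats.chi2_contingency(table)
print(t_p, chi_p)
```

Yates' correction matters here because several table cells have small expected frequencies, mirroring the footnoted convention used in the article's tables.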

Further Validation of the HMCCOT

Upon reviewing the distribution of data after observing the 26 physicians with their patients, we excluded 13 variables from the initial version of the tool that lacked discriminatory value (eg, 100% or 0% of physicians performed the observed behavior during the encounters); this left 23 variables that were judged to be most clinically relevant in the final version of the HMCCOT. Two examples of excluded variables were "uses technology/literature to educate patients" (not witnessed in any encounter) and "obeys posted contact precautions" (done uniformly by all). The HMCCOT score represents the proportion of observed behaviors (out of the 23) and was computed for each hospitalist for every patient encounter. Finally, "relations to other variables" validity evidence was established by comparing the physicians' mean HMCCOT scores to their PG scores from the same time period to evaluate the correlation between the 2 scores. This association was assessed using Pearson correlations.

RESULTS

The average clinical experience of the 26 hospitalist physicians studied was 6 years (Table 1). Their mean age was 38 years, 13 (50%) were female, and 16 (62%) were of nonwhite race. Fourteen hospitalists (54%) worked at 1 of the nonacademic hospitals. In terms of clinical workload, most physicians (n = 17, 65%) devoted more than 70% of their time to direct patient care. Mean time spent observing each physician was 280 minutes. During this time, the 26 physicians were observed across 181 separate clinical encounters; 54% of these were new encounters, in which the patient was not previously known to the physician. The average time each physician spent in a patient room was 10.8 minutes, and the mean number of observed patient encounters per hospitalist was 7.

Table 1. Characteristics of the Hospitalist Physicians Based on Their Hospital Medicine Comportment and Communication Observation Tool Score

| Characteristic | Total Study Population, n = 26 | HMCCOT Score ≤60, n = 14 | HMCCOT Score >60, n = 12 | P Value* |
| --- | --- | --- | --- | --- |
| Age, mean (SD) | 38 (5.6) | 37.9 (5.6) | 38.1 (5.7) | 0.95 |
| Female, n (%) | 13 (50) | 6 (43) | 7 (58) | 0.43 |
| Race, n (%) | | | | 0.31 |
| Caucasian | 10 (38) | 5 (36) | 5 (41) | |
| Asian | 13 (50) | 8 (57) | 5 (41) | |
| African/African American | 2 (8) | 0 (0) | 2 (17) | |
| Other | 1 (4) | 1 (7) | 0 (0) | |
| Clinical experience >6 years, n (%) | 12 (46) | 6 (43) | 6 (50) | 0.72 |
| Clinical workload >70%, n (%) | 17 (65) | 10 (71) | 7 (58) | 0.48 |
| Academic hospitalist, n (%) | 12 (46) | 5 (36) | 7 (58) | 0.25 |
| Hospital, n (%) | | | | 0.47 |
| JHBMC | 8 (31) | 3 (21.4) | 5 (41) | |
| JHH | 4 (15) | 2 (14.3) | 2 (17) | |
| HCGH | 5 (19) | 3 (21.4) | 2 (17) | |
| Suburban | 6 (23) | 3 (21.4) | 3 (25) | |
| SMC | 3 (12) | 3 (21.4) | 0 (0) | |
| Minutes spent observing hospitalist per shift, mean (SD) | 280 (104.5) | 280.4 (115.5) | 281.4 (95.3) | 0.98 |
| Average time spent per patient encounter in minutes, mean (SD) | 10.8 (8.9) | 8.7 (9.1) | 13 (8.1) | 0.001 |
| Observed patients who were new to provider, n (%) | 97 (53.5) | 37 (39.7) | 60 (68.1) | 0.001 |

NOTE: Abbreviations: HCGH, Howard County General Hospital; HMCCOT, Hospital Medicine Comportment and Communication Observation Tool; JHBMC, Johns Hopkins Bayview Medical Center; JHH, Johns Hopkins Hospital; SD, standard deviation; SMC, Sibley Memorial Hospital. *χ2 test with Yates-corrected P value where at least 20% of frequencies were <5; unpaired t test statistic for continuous variables.

The distribution of HMCCOT scores was not statistically significantly different when analyzed by age, gender, race, amount of clinical experience, clinical workload of the hospitalist, hospital, or time spent observing the hospitalist (all P > 0.05). The proportion of new patient encounters was higher among physicians scoring above the mean than among those scoring below it (68.1% vs 39.7%, P < 0.001). Encounters that generated HMCCOT scores above versus below the mean were also longer (13 minutes vs 8.7 minutes, P < 0.001).

The mean HMCCOT score was 61 (standard deviation [SD] = 10.6), and it was normally distributed (Figure 1). Table 2 shows the data for the 23 behaviors that were objectively assessed as part of the HMCCOT for the 181 patient encounters. The most frequently observed behaviors were physicians washing hands after leaving the patient's room in 170 (94%) of the encounters and smiling (83%). The behaviors that were observed with the least regularity were using an empathic statement (26% of encounters), and employing teach‐back (13% of encounters). A common method of demonstrating interest in the patient as a person, seen in 41% of encounters, involved physicians asking about patients' personal histories and their interests.

Table 2. Objective and Subjective Data Making Up the Hospital Medicine Comportment and Communication Observation Tool Score Assessed While Observing 26 Hospitalist Physicians

| Variables | All Visits Combined, n = 181 | HMCCOT Score <60, n = 93 | HMCCOT Score >60, n = 88 | P Value* |
| --- | --- | --- | --- | --- |
| Objective observations, n (%) | | | | |
| Washes hands after leaving room | 170 (94) | 83 (89) | 87 (99) | 0.007 |
| Discusses plan for the day | 163 (91) | 78 (84) | 85 (99) | <0.001 |
| Does not interrupt the patient | 159 (88) | 79 (85) | 80 (91) | 0.21 |
| Smiles | 149 (83) | 71 (77) | 78 (89) | 0.04 |
| Washes hands before entering | 139 (77) | 64 (69) | 75 (85) | 0.009 |
| Begins with open-ended question | 134 (77) | 68 (76) | 66 (78) | 0.74 |
| Knocks before entering the room | 127 (76) | 57 (65) | 70 (89) | <0.001 |
| Introduces him/herself to the patient | 122 (67) | 45 (48) | 77 (88) | <0.001 |
| Explains his/her role | 120 (66) | 44 (47) | 76 (86) | <0.001 |
| Asks about pain | 110 (61) | 45 (49) | 65 (74) | 0.001 |
| Asks permission prior to examining | 106 (61) | 43 (50) | 63 (72) | 0.002 |
| Uncovers body area for the physical exam | 100 (57) | 34 (38) | 66 (77) | <0.001 |
| Discusses discharge plan | 99 (55) | 38 (41) | 61 (71) | <0.001 |
| Sits down in the patient room | 74 (41) | 24 (26) | 50 (57) | <0.001 |
| Asks about patient's feelings | 58 (33) | 17 (19) | 41 (47) | <0.001 |
| Shakes hands with the patient | 57 (32) | 17 (18) | 40 (46) | <0.001 |
| Uses teach-back | 24 (13) | 4 (4.3) | 20 (24) | <0.001 |
| Subjective observations, n (%) | | | | |
| Avoids medical jargon | 160 (89) | 85 (91) | 83 (95) | 0.28 |
| Demonstrates interest in patient as a person | 72 (41) | 16 (18) | 56 (66) | <0.001 |
| Touches appropriately | 62 (34) | 21 (23) | 41 (47) | 0.001 |
| Shows sensitivity to patient modesty | 57 (93) | 15 (79) | 42 (100) | 0.002 |
| Engages in nonmedical conversation | 54 (30) | 10 (11) | 44 (51) | <0.001 |
| Uses empathic statement | 47 (26) | 9 (10) | 38 (43) | <0.001 |

NOTE: Abbreviations: HMCCOT, Hospital Medicine Comportment and Communication Observation Tool. *χ2 test with Yates-corrected P value where at least 20% of frequencies were <5.
Figure 1. Distribution of mean hospital medicine comportment and communication observation tool (HMCCOT) scores for the 26 hospitalist providers who were observed.

The average composite PG scores for the physician sample was 38.95 (SD=39.64). A moderate correlation was found between the HMCCOT score and PG score (adjusted Pearson correlation: 0.45, P = 0.047).

DISCUSSION

In this study, we followed 26 hospitalist physicians during routine clinical care, and we focused intently on their communication and their comportment with patients at the bedside. Even among clinically respected hospitalists, the results reveal that there is wide variability in comportment and communication practices and behaviors at the bedside. The physicians' HMCCOT scores were associated with their PG scores. These findings suggest that improved bedside communication and comportment with patients might translate into enhanced patient satisfaction.

This is the first study to home in on hospitalist communication and comportment. With validity evidence established for the HMCCOT, some hospitalists may elect to more explicitly perform these behaviors themselves, and others may wish to observe colleagues and give them feedback tied to specific behaviors. Beginning with the basics, the hospitalists we studied introduced themselves to their patients at the initial encounter 78% of the time, less frequently than primary care clinicians (89%) but more consistently than emergency department providers (64%).[7] Another variable that stood out was that teach-back was employed in only 13% of the encounters. Previous studies have shown that teach-back confirms patient comprehension and can be used to engage patients (and caregivers) in realistic goal setting and optimal health service utilization.[14] Further, patients who clearly understand their postdischarge plan are 30% less likely to be readmitted or to visit the emergency department.[14] The data have helped our group to see areas of strength, such as hand washing, where we exceed compliance rates across hospitals in the United States,[15] as well as opportunities for improvement, such as connecting more deeply with our patients.

Tackett et al. examined encounter length and its association with the performance of etiquette-based medicine behaviors.[7] Similar to their study, we found a positive correlation between spending more time with patients and higher HMCCOT scores. We also found that HMCCOT scores were higher when providers were caring for new patients. Patients' complaints about doctors often relate to feeling rushed, to their physicians not listening to them, or to information not being conveyed in a clear manner.[16] Such challenges in physician-patient communication are ubiquitous across clinical settings.[16] When successfully achieved, patient-centered communication has been associated with improved clinical outcomes, including adherence to recommended treatment and better self-management of chronic disease.[17, 18, 19, 20, 21, 22, 23, 24, 25, 26] Many of the components of the HMCCOT described in this article are at the heart of patient-centered care.

Several limitations of the study should be considered. First, physicians may have behaved differently while being observed (the Hawthorne effect). However, we observed them for many hours and across multiple patient encounters, and the physicians were not aware of the specific types of data being collected; these factors may have limited this bias. Second, there may be elements of optimal comportment and communication that were not captured by the HMCCOT. We hope any such gaps are small, as we used multiple methods and an iterative process in refining the metric. Third, one investigator did all of the observing, and he might have missed certain behaviors; through extensive pilot testing and comparisons with other raters, however, the observer became highly skilled with the tool and this mode of data collection. Fourth, we did not survey the patients cared for during the observed encounters to compare their perspectives to the HMCCOT scores; for patient perspectives, we relied only on PG scores. Fifth, quality of care is a broad and multidimensional construct. The HMCCOT focuses exclusively on hospitalists' comportment and communication at the bedside; it therefore does not comprehensively assess care quality. Sixth, to optimally validate the HMCCOT, we tested it on the top tier of hospitalists within each group. We may have observed different results had we randomly selected hospitalists from each hospital or conducted the study at hospitals in other geographic regions. Finally, all of the doctors observed worked at hospitals in the Mid-Atlantic region; however, these 5 distinct hospitals each have their own cultures and are led by different administrators, and we purposively sampled both academic and community settings.

In conclusion, this study reports on the development of a comportment and communication tool that was established and validated by following clinically excellent hospitalists at the bedside. Future studies are necessary to determine whether hospitalists of all levels of experience and clinical skill can improve when given data and feedback using the HMCCOT. Larger studies will then be needed to assess whether enhancing comportment and communication can truly improve patient satisfaction and clinical outcomes in the hospital.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Susrutha Kotwal, MD, and Waseem Khaliq, MD, contributed equally to this work. The authors report no conflicts of interest.

References
  1. 2014 state of hospital medicine report. Society of Hospital Medicine website. Available at: http://www.hospitalmedicine.org/Web/Practice_Management/State_of_HM_Surveys/2014.aspx. Accessed January 10, 2015.
  2. Press Ganey website. Available at: http://www.pressganey.com/home. Accessed December 15, 2015.
  3. Hospital Consumer Assessment of Healthcare Providers and Systems website. Available at: http://www.hcahpsonline.org/home.aspx. Accessed February 2, 2016.
  4. Membership committee guidelines for hospitalists patient satisfaction surveys. Society of Hospital Medicine website. Available at: http://www.hospitalmedicine.org. Accessed February 2, 2016.
  5. Definition of comportment. Available at: http://www.vocabulary.com/dictionary/comportment. Accessed December 15, 2015.
  6. Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358(19):1988-1989.
  7. Tackett S, Tad-y D, Rios R, Kisuule F, Wright S. Appraising the practice of etiquette-based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908-913.
  8. Levinson W, Lesser CS, Epstein RM. Developing physician communication skills for patient-centered care. Health Aff (Millwood). 2010;29(7):1310-1318.
  9. Auerbach SM. The impact on patient health outcomes of interventions targeting the patient-physician relationship. Patient. 2009;2(2):77-84.
  10. Griffin SJ, Kinmonth AL, Veltman MW, Gillard S, Grant J, Stewart M. Effect on health-related outcomes of interventions to alter the interaction between patients and practitioners: a systematic review of trials. Ann Fam Med. 2004;2(6):595-608.
  11. Street RL, Makoul G, Arora NK, Epstein RM. How does communication heal? Pathways linking clinician-patient communication to health outcomes. Patient Educ Couns. 2009;74(3):295-301.
  12. Christmas C, Kravet SJ, Durso SC, Wright SM. Clinical excellence in academia: perspectives from masterful academic clinicians. Mayo Clin Proc. 2008;83(9):989-994.
  13. Tipping MD, Forth VE, O'Leary KJ, et al. Where did the day go? A time-motion study of hospitalists. J Hosp Med. 2010;5(6):323-328.
  14. Peter D, Robinson P, Jordan M, et al. Reducing readmissions using teach-back: enhancing patient and family education. J Nurs Adm. 2015;45(1):35-42.
  15. McGuckin M, Waterman R, Govednik J. Hand hygiene compliance rates in the United States: a one-year multicenter collaboration using product/volume usage measurement and feedback. Am J Med Qual. 2009;24(3):205-213.
  16. Hickson GB, Clayton EW, Entman SS, et al. Obstetricians' prior malpractice experience and patients' satisfaction with care. JAMA. 1994;272(20):1583-1587.
  17. Epstein RM, Street RL. Patient-Centered Communication in Cancer Care: Promoting Healing and Reducing Suffering. NIH publication no. 07-6225. Bethesda, MD: National Cancer Institute; 2007.
  18. Arora NK. Interacting with cancer patients: the significance of physicians' communication behavior. Soc Sci Med. 2003;57(5):791-806.
  19. Greenfield S, Kaplan S, Ware JE. Expanding patient involvement in care: effects on patient outcomes. Ann Intern Med. 1985;102(4):520-528.
  20. Mead N, Bower P. Measuring patient-centeredness: a comparison of three observation-based instruments. Patient Educ Couns. 2000;39(1):71-80.
  21. Ong LM, Haes JC, Hoos AM, Lammes FB. Doctor-patient communication: a review of the literature. Soc Sci Med. 1995;40(7):903-918.
  22. Safran DG, Taira DA, Rogers WH, Kosinski M, Ware JE, Tarlov AR. Linking primary care performance to outcomes of care. J Fam Pract. 1998;47(3):213-220.
  23. Stewart M, Brown JB, Donner A, et al. The impact of patient-centered care on outcomes. J Fam Pract. 2000;49(9):796-804.
  24. Epstein RM, Franks P, Fiscella K, et al. Measuring patient-centered communication in patient-physician consultations: theoretical and practical issues. Soc Sci Med. 2005;61(7):1516-1528.
  25. Mead N, Bower P. Patient-centered consultations and outcomes in primary care: a review of the literature. Patient Educ Couns. 2002;48(1):51-61.
  26. Bredart A, Bouleuc C, Dolbeault S. Doctor-patient communication and satisfaction with care in oncology. Curr Opin Oncol. 2005;17(4):351-354.
Journal of Hospital Medicine. 11(12):853-858.


RESULTS

The average clinical experience of the 26 hospitalist physicians studied was 6 years (Table 1). Their mean age was 38 years, 13 (50%) were female, and 16 (62%) were of nonwhite race. Fourteen hospitalists (54%) worked at 1 of the nonacademic hospitals. In terms of clinical workload, most physicians (n = 17, 65%) devoted more than 70% of their time working in direct patient care. Mean time spent observing each physician was 280 minutes. During this time, the 26 physicians were observed for 181 separate clinical encounters; 54% of these patients were new encounters, patients who were not previously known to the physician. The average time each physician spent in a patient room was 10.8 minutes. Mean number of observed patient encounters per hospitalist was 7.

Characteristics of the Hospitalist Physicians Based on Their Hospital Medicine Comportment and Communication Observation Tool Score
Total Study Population, n = 26 HMCCOT Score 60, n = 14 HMCCOT Score >60, n = 12 P Value*
  • NOTE: Abbreviations: HCGH, Howard County General Hospital; HMCCOT, Hospital Medicine Comportment and Communication Observation Tool; JHBMC, Johns Hopkins Bayview Medical Center; JHH, Johns Hopkins Hospital; SD, standard deviation; SMC, Sibley Memorial Hospital. *2 with Yates‐corrected P value where at least 20% of frequencies were <5. Unpaired t test statistic

Age, mean (SD) 38 (5.6) 37.9 (5.6) 38.1 (5.7) 0.95
Female, n (%) 13 (50) 6 (43) 7 (58) 0.43
Race, n (%)
Caucasian 10 (38) 5 (36) 5 (41) 0.31
Asian 13 (50) 8 (57) 5 (41)
African/African American 2 (8) 0 (0) 2 (17)
Other 1 (4) 1 (7) 0 (0)
Clinical experience >6 years, n (%) 12 (46) 6 (43) 6 (50) 0.72
Clinical workload >70% 17 (65) 10 (71) 7 (58) 0.48
Academic hospitalist, n (%) 12 (46) 5 (36) 7 (58) 0.25
Hospital 0.47
JHBMC 8 (31) 3 (21.4) 5 (41)
JHH 4 (15) 2 (14.3) 2 (17)
HCGH 5 (19) 3 (21.4) 2 (17)
Suburban 6 (23) 3 (21.4) 3 (25)
SMC 3 (12) 3 (21.4) 0 (0)
Minutes spent observing hospitalist per shift, mean (SD) 280 (104.5) 280.4 (115.5) 281.4 (95.3) 0.98
Average time spent per patient encounter in minutes, mean (SD) 10.8 (8.9) 8.7 (9.1) 13 (8.1) 0.001
Proportion of observed patients who were new to provider, % 97 (53.5) 37 (39.7) 60 (68.1) 0.001

The distribution of HMCCOT scores was not statistically significantly different when analyzed by age, gender, race, amount of clinical experience, clinical workload of the hospitalist, hospital, time spent observing the hospitalist (all P > 0.05). The distribution of HMCCOT scores was statistically different in new patient encounters compared to follow‐ups (68.1% vs 39.7%, P 0.001). Encounters with patients that generated HMCCOT scores above versus below the mean were longer (13 minutes vs 8.7 minutes, P 0.001).

The mean HMCCOT score was 61 (standard deviation [SD] = 10.6), and it was normally distributed (Figure 1). Table 2 shows the data for the 23 behaviors that were objectively assessed as part of the HMCCOT for the 181 patient encounters. The most frequently observed behaviors were physicians washing hands after leaving the patient's room in 170 (94%) of the encounters and smiling (83%). The behaviors that were observed with the least regularity were using an empathic statement (26% of encounters), and employing teach‐back (13% of encounters). A common method of demonstrating interest in the patient as a person, seen in 41% of encounters, involved physicians asking about patients' personal histories and their interests.

Objective and Subjective Data Making Up the Hospital Medicine Comportment and Communication Observation Tool Score Assessed While Observing 26 Hospitalist Physicians
Variables All Visits Combined, n = 181 HMCCOT Score <60, n = 93 HMCCOT Score >60, n = 88 P Value*
  • NOTE: Abbreviations: HMCCOT, Hospital Medicine Comportment and Communication Observation Tool. *2 with Yates‐corrected P value where at least 20% of frequencies were <5.

Objective observations, n (%)
Washes hands after leaving room 170 (94) 83 (89) 87 (99) 0.007
Discusses plan for the day 163 (91) 78 (84) 85 (99) <0.001
Does not interrupt the patient 159 (88) 79 (85) 80 (91) 0.21
Smiles 149 (83) 71 (77) 78 (89) 0.04
Washes hands before entering 139 (77) 64 (69) 75 (85) 0.009
Begins with open‐ended question 134 (77) 68 (76) 66 (78) 0.74
Knocks before entering the room 127 (76) 57 (65) 70 (89) <0.001
Introduces him/herself to the patient 122 (67) 45 (48) 77 (88) <0.001
Explains his/her role 120 (66) 44 (47) 76 (86) <0.001
Asks about pain 110 (61) 45 (49) 65 (74) 0.001
Asks permission prior to examining 106 (61) 43 (50) 63 (72) 0.002
Uncovers body area for the physical exam 100 (57) 34 (38) 66 (77) <0.001
Discusses discharge plan 99 (55) 38 (41) 61 (71) <0.001
Sits down in the patient room 74 (41) 24 (26) 50 (57) <0.001
Asks about patient's feelings 58 (33) 17 (19) 41 (47) <0.001
Shakes hands with the patient 57 (32) 17 (18) 40 (46) <0.001
Uses teach‐back 24 (13) 4 (4.3) 20 (24) <0.001
Subjective observations, n (%)
Avoids medical jargon 160 (89) 85 (91) 83 (95) 0.28
Demonstrates interest in patient as a person 72 (41) 16 (18) 56 (66) <0.001
Touches appropriately 62 (34) 21 (23) 41 (47) 0.001
Shows sensitivity to patient modesty 57 (93) 15 (79) 42 (100) 0.002
Engages in nonmedical conversation 54 (30) 10 (11) 44 (51) <0.001
Uses empathic statement 47 (26) 9 (10) 38 (43) <0.001
jhm2647-fig-0001-m.png
Distribution of mean hospital medicine comportment and communication tool (HMCCOT) scores for the 26 hospitalist providers who were observed.

The average composite PG scores for the physician sample was 38.95 (SD=39.64). A moderate correlation was found between the HMCCOT score and PG score (adjusted Pearson correlation: 0.45, P = 0.047).

DISCUSSION

In this study, we followed 26 hospitalist physicians during routine clinical care, and we focused intently on their communication and their comportment with patients at the bedside. Even among clinically respected hospitalists, the results reveal that there is wide variability in comportment and communication practices and behaviors at the bedside. The physicians' HMCCOT scores were associated with their PG scores. These findings suggest that improved bedside communication and comportment with patients might translate into enhanced patient satisfaction.

This is the first study that honed in on hospitalist communication and comportment. With validity evidence established for the HMCCOT, some may elect to more explicitly perform these behaviors themselves, and others may wish to watch other hospitalists to give them feedback that is tied to specific behaviors. Beginning with the basics, the hospitalists we studied introduced themselves to their patients at the initial encounter 78% of the time, less frequently than is done by primary care clinicians (89%) but more consistently than do emergency department providers (64%).[7] Other variables that stood out in the HMCCOT was that teach‐back was employed in only 13% of the encounters. Previous studies have shown that teach‐back corroborates patient comprehension and can be used to engage patients (and caregivers) in realistic goal setting and optimal health service utilization.[14] Further, patients who clearly understand their postdischarge plan are 30% less likely to be readmitted or visit the emergency department.[14] The data for our group have helped us to see areas of strengths, such as hand washing, where we are above compliance rates across hospitals in the United States,[15] as well as those matters that represent opportunities for improvement such as connecting more deeply with our patients.

Tackett et al. have looked at encounter length and its association with performance of etiquette‐based medicine behaviors.[7] Similar to their study, we found a positive correlation between spending more time with patients and higher HMCCOT scores. We also found that HMCCOT scores were higher when providers were caring for new patients. Patients' complaints about doctors often relate to feeling rushed, that their physicians did not listen to them, or that information was not conveyed in a clear manner.[16] Such challenges in physicianpatient communication are ubiquitous across clinical settings.[16] When successfully achieved, patient‐centered communication has been associated with improved clinical outcomes, including adherence to recommended treatment and better self‐management of chronic disease.[17, 18, 19, 20, 21, 22, 23, 24, 25, 26] Many of the components of the HMCCOT described in this article are at the heart of patient‐centered care.

Several limitations of the study should be considered. First, physicians may have behaved differently while they were being observed, which is known as the Hawthorne effect. We observed them for many hours and across multiple patient encounters, and the physicians were not aware of the specific types of data that we were collecting. These factors may have limited the biases along such lines. Second, there may be elements of optimal comportment and communication that were not captured by the HMCCOT. Hopefully, there are not big gaps, as we used multiple methods and an iterative process in the refinement of the HMCCOT metric. Third, one investigator did all of the observing, and it is possible that he might have missed certain behaviors. Through extensive pilot testing and comparisons with other raters, the observer became very skilled and facile with such data collection and the tool. Fourth, we did not survey the same patients that were cared for to compare their perspectives to the HMCCOT scores following the clinical encounters. For patient perspectives, we relied only on PG scores. Fifth, quality of care is a broad and multidimensional construct. The HMCCOT focuses exclusively on hospitalists' comportment and communication at the bedside; therefore, it does not comprehensively assess care quality. Sixth, with our goal to optimally validate the HMCCOT, we tested it on the top tier of hospitalists within each group. We may have observed different results had we randomly selected hospitalists from each hospital or had we conducted the study at hospitals in other geographic regions. Finally, all of the doctors observed worked at hospitals in the Mid‐Atlantic region. However, these five distinct hospitals each have their own cultures, and they are led by different administrators. We purposively chose to sample both academic as well as community settings.

In conclusion, this study reports on the development of a comportment and communication tool that was established and validated by following clinically excellent hospitalists at the bedside. Future studies are necessary to determine whether hospitalists of all levels of experience and clinical skill can improve when given data and feedback using the HMCCOT. Larger studies will then be needed to assess whether enhancing comportment and communication can truly improve patient satisfaction and clinical outcomes in the hospital.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Susrutha Kotwal, MD, and Waseem Khaliq, MD, contributed equally to this work. The authors report no conflicts of interest.

In 2014, there were more than 40,000 hospitalists in the United States, and approximately 20% were employed by academic medical centers.[1] Hospitalist physician groups are committed to delivering excellent patient care. However, the published literature is limited with respect to defining optimal care in hospital medicine.

Patient satisfaction surveys, such as Press Ganey (PG)[2] and Hospital Consumer Assessment of Healthcare Providers and Systems,[3] are being used to assess patients' contentment with the quality of care they receive while hospitalized. The Society of Hospital Medicine, the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] There are, however, several problems with the current methods. First, the attribution to specific providers is questionable. Second, patients' recall about the provider may be poor because surveys are sent to patients days after they return home. Third, the patients' recovery and health outcomes are likely to influence their assessment of the doctor. Finally, feedback is known to be most valuable and transformative when it is specific and given in real time. Thus, a tool that is able to provide feedback at the encounter level should be more helpful than a tool that offers assessment at the level of the admission, particularly when it can also be delivered immediately after the data are collected.

Comportment has been used to describe both the way a person behaves and also the way she carries herself (ie, her general manner).[5] Excellent comportment and communication can serve as the foundation for delivering patient‐centered care.[6, 7, 8] Patient centeredness has been shown to improve the patient experience and clinical outcomes, including compliance with therapeutic plans.[9, 10, 11] Respectful behavior, etiquette‐based medicine, and effective communication also lay the foundation upon which the therapeutic alliance between a doctor and patient can be built.

The goal of this study was to establish a metric that could comprehensively assess a hospitalist provider's comportment and communication skills during an encounter with a hospitalized patient.

METHODS

Study Design and Setting

An observational study of hospitalist physicians was conducted between June 2013 and December 2013 at 5 hospitals in Maryland and Washington, DC. Two are academic medical centers (Johns Hopkins Hospital and Johns Hopkins Bayview Medical Center [JHBMC]), and the others are community hospitals (Howard County General Hospital [HCGH], Sibley Memorial Hospital [SMC], and Suburban Hospital). These 5 hospitals, across 2 large cities, have distinct cultures and leadership, and each serves a different patient population.

Subjects

In developing a tool to measure communication and comportment, we needed to observe physician-patient encounters wherein there would be a good deal of variability in performance. During pilot testing, when following a few of the most senior and respected hospitalists, we noted encounters during which they excelled and others where they performed less optimally. Further, when we followed some less experienced providers, their skills were less developed, and they uniformly missed most of the behaviors on the tool that were believed to be associated with optimal communication and comportment. Because of this, we decided to purposively sample the strongest clinicians at each of the 5 hospitals in hopes of seeing a range of scores on the tool.

The chiefs of hospital medicine at the 5 hospitals were contacted and asked to identify their most clinically excellent hospitalists, namely those who they thought were most clinically skilled within their groups. Because our goal was to observe the top tier (approximately 20%) of the hospitalists within each group, we asked each chief to name a specific number of physicians (eg, 3 names for 1 group with 15 hospitalists, and 8 from another group with 40 physicians). No precise definition of most clinically excellent hospitalists was provided to the chiefs. It was believed that they were well positioned to select their best clinicians because of both subjective feedback and objective data that flow to them. This postulate may have been corroborated by the fact that each of them efficiently sent a list of their top choices without any questions being asked.

The 29 hospitalists (named by their chiefs) were in turn emailed and invited to participate in the study. All but 3 hospitalists consented to participate in the study; this resulted in a cohort of 26 who would be observed.

Tool Development

A team was assembled to develop the hospital medicine comportment and communication observation tool (HMCCOT). All team members had extensive clinical experience; several had published articles on clinical excellence or had won clinical awards, and all had been teaching clinical skills for many years. The team's development of the HMCCOT was extensively informed by a review of the literature. Two articles that most heavily influenced the HMCCOT's development were Christmas et al.'s paper describing 7 core domains of excellence, 2 of which are intimately linked to communication and comportment,[12] and Kahn's text that delineates behaviors to be performed upon entering the patient's room, termed etiquette-based medicine.[6] The team also considered the work from prior time-motion studies in hospital medicine,[7, 13] which led to the inclusion of temporal measurements during the observations. The tool was also presented at academic conferences in the Division of General Internal Medicine at Johns Hopkins and iteratively revised based on the feedback. Feedback was also sought from members of the American Academy on Communication in Healthcare who have spent their entire careers studying physician-patient relationships. These methods established content validity evidence for the tool under development. The goal of the HMCCOT was to assess behaviors believed to be associated with optimal comportment and communication in hospital medicine.

The HMCCOT was pilot tested by observing different JHBMC hospitalists' patient encounters, and it was iteratively revised. On multiple occasions, 2 investigators spent time observing JHBMC hospitalists together and compared data capture and levels of agreement across all elements. Then, for formal assessment of inter-rater reliability, 2 authors observed 5 different hospitalists across 25 patient encounters; the κ coefficient was 0.91 (standard error = 0.04). This step helped to establish internal structure validity evidence for the tool.
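For 2 raters scoring the same binary checklist items, the agreement coefficient reported above is conventionally Cohen's kappa: observed agreement corrected for the agreement expected by chance. A minimal sketch of that calculation, assuming kappa is the statistic intended (the function and the sample ratings below are illustrative, not the study's data):

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    rater_a, rater_b: equal-length lists of categorical labels
    (e.g., 1 = behavior observed, 0 = not observed).
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    categories = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: 2 raters scoring 8 checklist items, agreeing on 7.
kappa = cohen_kappa([1, 1, 0, 1, 0, 1, 1, 0],
                    [1, 1, 0, 1, 1, 1, 1, 0])
```

By the usual Landis and Koch convention, values above 0.80 indicate almost perfect agreement, consistent with the 0.91 reported here.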

The initial version of the HMCCOT contained 36 elements, and it was organized sequentially so that the observer could document behaviors in the order in which they were likely to occur, facilitating the process and minimizing oversight. A few examples of the elements were as follows: whether the encounter began with an open-ended or a closed-ended statement, whether the hospitalist introduced himself/herself, and whether the provider smiled at any point during the patient encounter.

Data Collection

One author scheduled a time to observe each hospitalist physician during their routine clinical care of patients when they were not working with medical learners. Hospitalists were naturally aware that they were being observed but were not aware of the specific data elements or behaviors that were being recorded.

The study was approved by the institutional review board at the Johns Hopkins University School of Medicine, and by each of the research review committees at HCGH, SMC, and Suburban hospitals.

Data Analysis

After data collection, all data were deidentified so that the researchers were blinded to the identities of the physicians. Respondent characteristics are presented as proportions and means. Unpaired t tests and χ2 tests were used to compare demographic information, stratified by mean HMCCOT score. The survey data were analyzed using Stata statistical software version 12.1 (StataCorp LP, College Station, TX).
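The unpaired t test used for these group comparisons rests on a standard formula; a pooled-variance (equal-variance) version, which is Stata's default, can be sketched in pure Python. The function name and sample data are illustrative assumptions, not the study's analysis code:

```python
def unpaired_t(xs, ys):
    """Two-sample t statistic with pooled variance (equal-variance assumption)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    # Unbiased sample variances for each group.
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    # Pooled variance and standard error of the difference in means.
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    se = (sp2 * (1 / nx + 1 / ny)) ** 0.5
    return (mx - my) / se  # refer to a t distribution with nx + ny - 2 df

# Hypothetical example: ages of physicians in two HMCCOT score strata.
t_stat = unpaired_t([35, 38, 41, 37], [36, 40, 39, 42])
```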

Further Validation of the HMCCOT

Upon reviewing the distribution of data after observing the 26 physicians with their patients, we excluded 13 variables from the initial version of the tool that lacked discriminatory value (eg, 100% or 0% of physicians performed the observed behavior during the encounters); this left 23 variables that were judged to be most clinically relevant in the final version of the HMCCOT. Two examples of the excluded variables were: uses technology/literature to educate patients (not witnessed in any encounter), and obeys posted contact precautions (done uniformly by all). The HMCCOT score represents the proportion of observed behaviors (out of the 23 behaviors); it was computed for each hospitalist for every patient encounter. Finally, relations to other variables validity evidence was established by comparing the mean HMCCOT scores of the physicians with their PG scores from the same time period to evaluate the correlation between the 2 scores. This association was assessed using Pearson correlations.
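The scoring rule and correlation check described above are simple to state precisely: each encounter's HMCCOT score is the percentage of the 23 behaviors observed, and per-physician mean scores are then correlated with PG scores. A sketch under those assumptions (function names and sample values are illustrative, not study data):

```python
def hmccot_score(behaviors_observed):
    """HMCCOT score for one encounter: percent of the 23 behaviors observed."""
    assert len(behaviors_observed) == 23  # one 0/1 flag per behavior
    return 100.0 * sum(behaviors_observed) / len(behaviors_observed)

def pearson_r(xs, ys):
    """Plain (unadjusted) Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical encounter in which 14 of the 23 behaviors were seen.
score = hmccot_score([1] * 14 + [0] * 9)  # 1400/23 ≈ 60.87
# Hypothetical per-physician mean HMCCOT scores vs composite PG scores.
r = pearson_r([55, 61, 68, 72], [30, 35, 42, 50])
```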

RESULTS

The average clinical experience of the 26 hospitalist physicians studied was 6 years (Table 1). Their mean age was 38 years, 13 (50%) were female, and 16 (62%) were of nonwhite race. Fourteen hospitalists (54%) worked at 1 of the nonacademic hospitals. In terms of clinical workload, most physicians (n = 17, 65%) devoted more than 70% of their time to direct patient care. Mean time spent observing each physician was 280 minutes. During this time, the 26 physicians were observed for 181 separate clinical encounters; 54% of these were new encounters, with patients not previously known to the physician. The average time each physician spent in a patient room was 10.8 minutes, and the mean number of observed patient encounters per hospitalist was 7.

Characteristics of the Hospitalist Physicians Based on Their Hospital Medicine Comportment and Communication Observation Tool Score
Total Study Population, n = 26 HMCCOT Score ≤60, n = 14 HMCCOT Score >60, n = 12 P Value*
  • NOTE: Abbreviations: HCGH, Howard County General Hospital; HMCCOT, Hospital Medicine Comportment and Communication Observation Tool; JHBMC, Johns Hopkins Bayview Medical Center; JHH, Johns Hopkins Hospital; SD, standard deviation; SMC, Sibley Memorial Hospital. *χ2 with Yates‐corrected P value where at least 20% of frequencies were <5. †Unpaired t test statistic.

Age, mean (SD) 38 (5.6) 37.9 (5.6) 38.1 (5.7) 0.95
Female, n (%) 13 (50) 6 (43) 7 (58) 0.43
Race, n (%)
Caucasian 10 (38) 5 (36) 5 (41) 0.31
Asian 13 (50) 8 (57) 5 (41)
African/African American 2 (8) 0 (0) 2 (17)
Other 1 (4) 1 (7) 0 (0)
Clinical experience >6 years, n (%) 12 (46) 6 (43) 6 (50) 0.72
Clinical workload >70%, n (%) 17 (65) 10 (71) 7 (58) 0.48
Academic hospitalist, n (%) 12 (46) 5 (36) 7 (58) 0.25
Hospital 0.47
JHBMC 8 (31) 3 (21.4) 5 (41)
JHH 4 (15) 2 (14.3) 2 (17)
HCGH 5 (19) 3 (21.4) 2 (17)
Suburban 6 (23) 3 (21.4) 3 (25)
SMC 3 (12) 3 (21.4) 0 (0)
Minutes spent observing hospitalist per shift, mean (SD) 280 (104.5) 280.4 (115.5) 281.4 (95.3) 0.98
Average time spent per patient encounter in minutes, mean (SD) 10.8 (8.9) 8.7 (9.1) 13 (8.1) 0.001
Proportion of observed patients who were new to provider, n (%) 97 (53.5) 37 (39.7) 60 (68.1) 0.001

The distribution of HMCCOT scores was not statistically significantly different when analyzed by age, gender, race, amount of clinical experience, clinical workload of the hospitalist, hospital, or time spent observing the hospitalist (all P > 0.05). The distribution of HMCCOT scores was statistically different in new patient encounters compared to follow-ups (68.1% vs 39.7%, P = 0.001). Encounters that generated HMCCOT scores above the mean were longer than those below it (13 minutes vs 8.7 minutes, P = 0.001).

The mean HMCCOT score was 61 (standard deviation [SD] = 10.6), and scores were normally distributed (Figure 1). Table 2 shows the data for the 23 behaviors assessed as part of the HMCCOT across the 181 patient encounters. The most frequently observed behaviors were washing hands after leaving the patient's room (170 encounters [94%]) and smiling (83%). The behaviors observed least regularly were using an empathic statement (26% of encounters) and employing teach-back (13% of encounters). A common way of demonstrating interest in the patient as a person, seen in 41% of encounters, was asking about patients' personal histories and interests.

Objective and Subjective Data Making Up the Hospital Medicine Comportment and Communication Observation Tool Score Assessed While Observing 26 Hospitalist Physicians
Variables All Visits Combined, n = 181 HMCCOT Score <60, n = 93 HMCCOT Score >60, n = 88 P Value*
  • NOTE: Abbreviations: HMCCOT, Hospital Medicine Comportment and Communication Observation Tool. *χ2 with Yates‐corrected P value where at least 20% of frequencies were <5.

Objective observations, n (%)
Washes hands after leaving room 170 (94) 83 (89) 87 (99) 0.007
Discusses plan for the day 163 (91) 78 (84) 85 (99) <0.001
Does not interrupt the patient 159 (88) 79 (85) 80 (91) 0.21
Smiles 149 (83) 71 (77) 78 (89) 0.04
Washes hands before entering 139 (77) 64 (69) 75 (85) 0.009
Begins with open‐ended question 134 (77) 68 (76) 66 (78) 0.74
Knocks before entering the room 127 (76) 57 (65) 70 (89) <0.001
Introduces him/herself to the patient 122 (67) 45 (48) 77 (88) <0.001
Explains his/her role 120 (66) 44 (47) 76 (86) <0.001
Asks about pain 110 (61) 45 (49) 65 (74) 0.001
Asks permission prior to examining 106 (61) 43 (50) 63 (72) 0.002
Uncovers body area for the physical exam 100 (57) 34 (38) 66 (77) <0.001
Discusses discharge plan 99 (55) 38 (41) 61 (71) <0.001
Sits down in the patient room 74 (41) 24 (26) 50 (57) <0.001
Asks about patient's feelings 58 (33) 17 (19) 41 (47) <0.001
Shakes hands with the patient 57 (32) 17 (18) 40 (46) <0.001
Uses teach‐back 24 (13) 4 (4.3) 20 (24) <0.001
Subjective observations, n (%)
Avoids medical jargon 160 (89) 85 (91) 83 (95) 0.28
Demonstrates interest in patient as a person 72 (41) 16 (18) 56 (66) <0.001
Touches appropriately 62 (34) 21 (23) 41 (47) 0.001
Shows sensitivity to patient modesty 57 (93) 15 (79) 42 (100) 0.002
Engages in nonmedical conversation 54 (30) 10 (11) 44 (51) <0.001
Uses empathic statement 47 (26) 9 (10) 38 (43) <0.001
Figure 1. Distribution of mean hospital medicine comportment and communication observation tool (HMCCOT) scores for the 26 hospitalist providers who were observed.

The average composite PG score for the physician sample was 38.95 (SD = 39.64). A moderate correlation was found between the HMCCOT scores and PG scores (adjusted Pearson correlation: 0.45, P = 0.047).

DISCUSSION

In this study, we followed 26 hospitalist physicians during routine clinical care, and we focused intently on their communication and their comportment with patients at the bedside. Even among clinically respected hospitalists, the results reveal that there is wide variability in comportment and communication practices and behaviors at the bedside. The physicians' HMCCOT scores were associated with their PG scores. These findings suggest that improved bedside communication and comportment with patients might translate into enhanced patient satisfaction.

This is the first study to focus closely on hospitalist communication and comportment. With validity evidence established for the HMCCOT, some hospitalists may elect to more explicitly perform these behaviors themselves, and others may wish to observe colleagues to give them feedback tied to specific behaviors. Beginning with the basics, the hospitalists we studied introduced themselves to their patients at the initial encounter 78% of the time, less frequently than primary care clinicians (89%) but more consistently than emergency department providers (64%).[7] Another variable that stood out was that teach-back was employed in only 13% of the encounters. Previous studies have shown that teach-back corroborates patient comprehension and can be used to engage patients (and caregivers) in realistic goal setting and optimal health service utilization.[14] Further, patients who clearly understand their postdischarge plan are 30% less likely to be readmitted or to visit the emergency department.[14] The data have helped our group to see areas of strength, such as hand washing, where we are above compliance rates across hospitals in the United States,[15] as well as opportunities for improvement, such as connecting more deeply with our patients.

Tackett et al. have looked at encounter length and its association with the performance of etiquette-based medicine behaviors.[7] Similar to their study, we found a positive correlation between spending more time with patients and higher HMCCOT scores. We also found that HMCCOT scores were higher when providers were caring for new patients. Patients' complaints about doctors often relate to feeling rushed, to feeling that their physicians did not listen to them, or to information not being conveyed in a clear manner.[16] Such challenges in physician-patient communication are ubiquitous across clinical settings.[16] When successfully achieved, patient-centered communication has been associated with improved clinical outcomes, including adherence to recommended treatment and better self-management of chronic disease.[17, 18, 19, 20, 21, 22, 23, 24, 25, 26] Many of the components of the HMCCOT described in this article are at the heart of patient-centered care.

Several limitations of the study should be considered. First, physicians may have behaved differently while they were being observed (the Hawthorne effect). However, we observed them for many hours and across multiple patient encounters, and the physicians were not aware of the specific types of data we were collecting; these factors may have limited such bias. Second, there may be elements of optimal comportment and communication that are not captured by the HMCCOT. Large gaps are unlikely, however, as we used multiple methods and an iterative process in refining the HMCCOT metric. Third, one investigator did all of the observing, and he might have missed certain behaviors; through extensive pilot testing and comparisons with other raters, though, the observer became highly skilled with the data collection and the tool. Fourth, we did not survey the patients who were cared for to compare their perspectives with the HMCCOT scores following the clinical encounters; for patient perspectives, we relied only on PG scores. Fifth, quality of care is a broad and multidimensional construct; because the HMCCOT focuses exclusively on hospitalists' comportment and communication at the bedside, it does not comprehensively assess care quality. Sixth, with our goal to optimally validate the HMCCOT, we tested it on the top tier of hospitalists within each group; we may have observed different results had we randomly selected hospitalists from each hospital or conducted the study at hospitals in other geographic regions. Finally, all of the doctors observed worked at hospitals in the Mid-Atlantic region, although these 5 distinct hospitals each have their own cultures and administrators, and we purposively sampled both academic and community settings.

In conclusion, this study reports on the development of a comportment and communication tool that was established and validated by following clinically excellent hospitalists at the bedside. Future studies are necessary to determine whether hospitalists of all levels of experience and clinical skill can improve when given data and feedback using the HMCCOT. Larger studies will then be needed to assess whether enhancing comportment and communication can truly improve patient satisfaction and clinical outcomes in the hospital.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Susrutha Kotwal, MD, and Waseem Khaliq, MD, contributed equally to this work. The authors report no conflicts of interest.

References
  1. 2014 state of hospital medicine report. Society of Hospital Medicine website. Available at: http://www.hospitalmedicine.org/Web/Practice_Management/State_of_HM_Surveys/2014.aspx. Accessed January 10, 2015.
  2. Press Ganey website. Available at: http://www.pressganey.com/home. Accessed December 15, 2015.
  3. Hospital Consumer Assessment of Healthcare Providers and Systems website. Available at: http://www.hcahpsonline.org/home.aspx. Accessed February 2, 2016.
  4. Membership committee guidelines for hospitalists patient satisfaction surveys. Society of Hospital Medicine website. Available at: http://www.hospitalmedicine.org. Accessed February 2, 2016.
  5. Definition of comportment. Available at: http://www.vocabulary.com/dictionary/comportment. Accessed December 15, 2015.
  6. Kahn MW. Etiquette‐based medicine. N Engl J Med. 2008;358(19):1988–1989.
  7. Tackett S, Tad‐y D, Rios R, Kisuule F, Wright S. Appraising the practice of etiquette‐based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908–913.
  8. Levinson W, Lesser CS, Epstein RM. Developing physician communication skills for patient‐centered care. Health Aff (Millwood). 2010;29(7):1310–1318.
  9. Auerbach SM. The impact on patient health outcomes of interventions targeting the patient–physician relationship. Patient. 2009;2(2):77–84.
  10. Griffin SJ, Kinmonth AL, Veltman MW, Gillard S, Grant J, Stewart M. Effect on health‐related outcomes of interventions to alter the interaction between patients and practitioners: a systematic review of trials. Ann Fam Med. 2004;2(6):595–608.
  11. Street RL, Makoul G, Arora NK, Epstein RM. How does communication heal? Pathways linking clinician–patient communication to health outcomes. Patient Educ Couns. 2009;74(3):295–301.
  12. Christmas C, Kravet SJ, Durso SC, Wright SM. Clinical excellence in academia: perspectives from masterful academic clinicians. Mayo Clin Proc. 2008;83(9):989–994.
  13. Tipping MD, Forth VE, O'Leary KJ, et al. Where did the day go?—a time‐motion study of hospitalists. J Hosp Med. 2010;5(6):323–328.
  14. Peter D, Robinson P, Jordan M, et al. Reducing readmissions using teach‐back: enhancing patient and family education. J Nurs Adm. 2015;45(1):35–42.
  15. McGuckin M, Waterman R, Govednik J. Hand hygiene compliance rates in the United States—a one‐year multicenter collaboration using product/volume usage measurement and feedback. Am J Med Qual. 2009;24(3):205–213.
  16. Hickson GB, Clayton EW, Entman SS, et al. Obstetricians' prior malpractice experience and patients' satisfaction with care. JAMA. 1994;272(20):1583–1587.
  17. Epstein RM, Street RL. Patient‐Centered Communication in Cancer Care: Promoting Healing and Reducing Suffering. NIH publication no. 07‐6225. Bethesda, MD: National Cancer Institute; 2007.
  18. Arora NK. Interacting with cancer patients: the significance of physicians' communication behavior. Soc Sci Med. 2003;57(5):791–806.
  19. Greenfield S, Kaplan S, Ware JE. Expanding patient involvement in care: effects on patient outcomes. Ann Intern Med. 1985;102(4):520–528.
  20. Mead N, Bower P. Measuring patient‐centeredness: a comparison of three observation‐based instruments. Patient Educ Couns. 2000;39(1):71–80.
  21. Ong LM, Haes JC, Hoos AM, Lammes FB. Doctor‐patient communication: a review of the literature. Soc Sci Med. 1995;40(7):903–918.
  22. Safran DG, Taira DA, Rogers WH, Kosinski M, Ware JE, Tarlov AR. Linking primary care performance to outcomes of care. J Fam Pract. 1998;47(3):213–220.
  23. Stewart M, Brown JB, Donner A, et al. The impact of patient‐centered care on outcomes. J Fam Pract. 2000;49(9):796–804.
  24. Epstein RM, Franks P, Fiscella K, et al. Measuring patient‐centered communication in patient‐physician consultations: theoretical and practical issues. Soc Sci Med. 2005;61(7):1516–1528.
  25. Mead N, Bower P. Patient‐centered consultations and outcomes in primary care: a review of the literature. Patient Educ Couns. 2002;48(1):51–61.
  26. Bredart A, Bouleuc C, Dolbeault S. Doctor‐patient communication and satisfaction with care in oncology. Curr Opin Oncol. 2005;17(4):351–354.
Issue
Journal of Hospital Medicine - 11(12)
Page Number
853-858
Display Headline
Developing a comportment and communication tool for use in hospital medicine
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Susrutha Kotwal, MD, Johns Hopkins University School of Medicine, Johns Hopkins Bayview Medical Center, 200 Eastern Avenue, MFL Building West Tower, 6th Floor CIMS Suite, Baltimore, MD 21224; Telephone: 410‐550‐5018; Fax: 410‐550‐2972; E‐mail: skotwal1@jhmi.edu

Patients' Sleep Quality and Duration

Display Headline
Pilot study aiming to support sleep quality and duration during hospitalizations

Approximately 70 million adults within the United States have sleep disorders,[1] and up to 30% of adults report sleeping less than 6 hours per night.[2] Poor sleep has been associated with undesirable health outcomes.[1] Suboptimal sleep duration and sleep quality have been associated with a higher prevalence of chronic health conditions, including hypertension, type 2 diabetes, coronary artery disease, stroke, and obesity, as well as increased overall mortality.[3, 4, 5, 6, 7]

Sleep plays an important role in the restoration of wellness, and poor sleep is associated with physiological disturbances that may result in poor healing.[8, 9, 10] Reported prevalence of insomnia is 36.7% among elderly hospitalized patients[11] and 50% among younger hospitalized patients.[12] Hospitalized patients frequently cite their acute illness, hospital‐related environmental factors, and disruptions that are part of routine care as causes of poor sleep during hospitalization.[13, 14, 15] Although the pervasiveness of poor sleep among hospitalized patients is high, interventions that prioritize sleep optimization as routine care are uncommon. Few studies have examined the effect of sleep‐promoting measures on both sleep quality and sleep duration among patients hospitalized on general medicine units.

In this study, we aimed to assess the feasibility of incorporating sleep‐promoting interventions on a general medicine unit and sought to identify differences in sleep measures between intervention and control groups. The primary outcome, which we hoped to lengthen in the intervention group, was sleep duration, measured both by sleep diary and by actigraphy. Secondary outcomes that we hypothesized would improve in the intervention group included feeling more refreshed in the mornings, sleep efficiency, and fewer sleep disruptions. As a feasibility pilot, we also wanted to explore the ease or difficulty with which sleep‐promoting interventions could be incorporated into the team's workflow.

METHODS

Study Design

A quasi‐experimental prospective pilot study was conducted at a single academic center, the Johns Hopkins Bayview Medical Center. Participants included adult patients admitted to the general medicine ward from July 2013 through January 2014. Patients were excluded from the study if they had dementia; inability to complete survey questionnaires due to delirium, disability, or a language barrier; active withdrawal from alcohol or controlled substances; or acute psychiatric illness.

The medicine ward at our medical center comprises 2 structurally identical units that admit patients with similar diagnoses, disease severity, and case‐mix disease groups. Nursing and support staff are unit specific. With respect to the sleep environment, both units have semiprivate and private rooms, and visitors are encouraged to leave by 10 pm. Patients admitted from the emergency room to the medicine ward are assigned haphazardly to either unit based on bed availability. For the purpose of this study, we selected 1 unit as the control unit and designated the other as the sleep‐promoting intervention unit.

Study Procedure

Upon arrival to the medicine unit, the research team approached all patients who met study eligibility criteria for study participation. Patients were provided full disclosure of the study using institutional research guidelines, and those interested in participating were consented. Participants were not explicitly told about their group assignment. This study was approved by the Johns Hopkins Institutional Review Board for human subject research.

In this study, the control group participants received standard of care as it pertains to sleep promotion. No additional sleep‐promoting measures were added to routine medical care, medication administration, nursing care, and overnight monitoring. Patients who used sleep medications at home, prior to admission, had those medicines continued only if they requested them and the medicines were not contraindicated given their acute illness. Participants on the intervention unit were exposed to a nurse‐delivered sleep‐promoting protocol aimed at transforming the culture of care such that helping patients to sleep soundly was made a top priority. Environmental changes included unit‐wide efforts to minimize light and noise disturbances by dimming hallway lights, turning off room lights, and encouraging care teams to be as quiet as possible. Other strategies focused largely on minimizing care‐related disruptions; these included, when appropriate, administering nighttime medications in the early evening, minimizing fluids overnight, and closing patient room doors. Further, patients were offered the following sleep‐promoting items to choose from: ear plugs, eye masks, warm blankets, and relaxation music. The final component of our intervention was a 30‐minute sleep hygiene education session taught by a physician, which highlighted basic sleep physiology and healthy sleep behaviors adapted from Buysse.[16] Patients learned the role of behaviors such as reducing time lying awake in bed, setting a standard wake‐up time and sleep time, and going to bed only when sleepy. This behavioral education was supplemented by a handout with sleep‐promoting suggestions.

The care team on the intervention unit received comprehensive study‐focused training in which night nursing teams were familiarized with the sleep‐promoting protocol through in‐service sessions facilitated by 1 of the authors (E.W.G.). To further promote study implementation, sleep‐promoting procedures were supported and encouraged by supervising nurses, who reminded the intervention unit's night care team daily of the goals of the sleep‐promoting study during evening huddles at the beginning of each shift. To assess adherence to the sleep protocol, the nursing staff completed a daily checklist of the elements of the protocol that were employed.

Data Collection and Measures

Baseline Measures

At the time of enrollment, study patients' demographic information, including use of chronic sleep medication prior to admission, was collected. Participants were assessed for baseline sleep disturbance prior to admission using standardized, validated sleep assessment tools: the Pittsburgh Sleep Quality Index (PSQI), the Insomnia Severity Index (ISI), and the Epworth Sleepiness Scale (ESS). The PSQI, a 19‐item tool, assessed self‐rated sleep quality over the prior month; a score of 5 or greater indicated poor sleep.[17] The ISI, a 7‐item tool, identified the presence, rated the severity, and described the impact of insomnia; a score of 10 or greater indicated insomnia.[18] The ESS, an 8‐item self‐rated tool, evaluated the impact of perceived sleepiness on daily functioning in 8 different situations; a score of 9 or greater indicated a significant burden of daytime sleepiness. Participants were also screened for both obstructive sleep apnea (using the Berlin Sleep Apnea Index) and clinical depression (using the Center for Epidemiologic Studies‐Depression 10‐point scale), as these conditions affect sleep patterns. These data are shown in Table 1.
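Expressed as code, the screening cutoffs described above reduce to simple threshold checks. The sketch below is our own illustration (the function name and structure are not from the study):

```python
def baseline_sleep_flags(psqi, isi, ess):
    """Flag baseline sleep problems using the cutoffs described above:
    PSQI >= 5 (poor sleep), ISI >= 10 (insomnia), ESS >= 9 (significant
    daytime sleepiness). Illustrative only; not the study's actual code."""
    return {
        "poor_sleep": psqi >= 5,
        "insomnia": isi >= 10,
        "daytime_sleepiness": ess >= 9,
    }

# Example using the intervention arm's mean baseline scores from Table 1:
# PSQI 9.9 and ISI 11.9 exceed their cutoffs; mean ESS 7.4 does not.
flags = baseline_sleep_flags(psqi=9.9, isi=11.9, ess=7.4)
```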

Characteristics of Study Participants (n = 112)
Intervention, n = 48 Control, n = 64 P Value
  • NOTE: The entry for number of sleep diaries per participant in intervention and control groups is presented after capping at 4 diaries. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; ESS, Epworth Sleepiness Scale; ISI, Insomnia Severity Index; PSQI, Pittsburgh Sleep Quality Index; SD, standard deviation.

Age, y, mean (SD) 58.2 (16) 56.9 (17) 0.69
Female, n (%) 26 (54.2) 36 (56.3) 0.83
Race, n (%)
Caucasian 33 (68.8) 46 (71.9) 0.92
African American 13 (27.1) 16 (25.0)
Other 2 (4.2) 2 (3.1)
BMI, mean (SD) 32.1 (9.2) 31.8 (9.3) 0.85
Admitting service, n (%)
Teaching 21 (43.8) 18 (28.1) 0.09
Nonteaching 27 (56.3) 46 (71.9)
Sleep medication prior to admission, n (%) 7 (14.9) 21 (32.8) 0.03
Length of stay, d, mean (SD) 4.9 (3) 5.8 (3.9) 0.19
Number of sleep diaries per participant, mean (SD) 2.2 (0.8) 2.6 (0.9) 0.02
Proportion of hospital days with sleep diaries per participant, (SD) 0.6 (0.2) 0.5 (0.2) 0.71
Number of nights with actigraphy per participant, mean (SD) 1.2 (0.7) 1.4 (0.8) 0.16
Proportion of hospital nights with actigraphy per participant (SD) 0.3 (0.2) 0.3 (0.1) 0.91
Baseline sleep measures
PSQI, mean (SD) 9.9 (4.6) 9.1 (4.5) 0.39
ESS, mean (SD) 7.4 (4.2) 7.7 (4.8) 0.79
ISI, mean (SD) 11.9 (7.6) 10.8 (7.4) 0.44
CESD‐10, mean (SD) 12.2 (7.2) 12.8 (7.6) 0.69
Berlin Sleep Apnea, mean (SD) 0.63 (0.5) 0.61 (0.5) 0.87

Sleep Diary Measures

A sleep diary completed each morning assessed the outcome measures: perceived sleep quality, how refreshing sleep was, and sleep duration. The diary employed a 5‐point Likert rating scale ranging from poor (1) to excellent (5). Perceived sleep duration was calculated from patients' reported time in bed, time to fall asleep, wake time, and the number and duration of awakenings after sleep onset. These data were used to compute total sleep time (TST) and sleep efficiency (SE). The sleep diary also captured other pertinent sleep‐related measures, including use of sleep medication the night prior and specific sleep disruptions from the prior night. To measure the impact of disruptions the prior night, we created a summed scale score of 4 items that negatively interfered with sleep (light, temperature, noise, and interruptions; 5‐point scales from 1 = not at all to 5 = significant). Principal axis factor analysis with varimax rotation yielded 1 disruption factor accounting for 55% of the variance, and Cronbach's α was 0.73.
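The diary‐based computation of TST and SE follows directly from the reported fields. A minimal sketch, using the standard diary formulas (our own illustration, not the study's code):

```python
from datetime import datetime

def diary_sleep_measures(bed_time, wake_time, min_to_fall_asleep, min_awake_after_onset):
    """Compute total sleep time (TST, in minutes) and sleep efficiency
    (SE, as a percent of time in bed) from sleep diary fields:
    TST = time in bed - sleep onset latency - wake after sleep onset;
    SE  = 100 * TST / time in bed."""
    time_in_bed = (wake_time - bed_time).total_seconds() / 60
    tst = time_in_bed - min_to_fall_asleep - min_awake_after_onset
    se = 100 * tst / time_in_bed
    return tst, se

# Example: in bed 10:30 pm to 6:30 am (480 min), 20 minutes to fall asleep,
# 40 minutes awake after sleep onset -> TST = 420 min, SE = 87.5%
tst, se = diary_sleep_measures(datetime(2014, 1, 6, 22, 30),
                               datetime(2014, 1, 7, 6, 30), 20, 40)
```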

Actigraphy Measures

Actigraphy outcomes were recorded using a wrist‐worn actigraph (ActiSleep Plus [GT3X+]; ActiGraph, Pensacola, FL). Participants wore the monitor from the day of enrollment throughout the hospital stay or until transfer out of the unit. Objective data were analyzed and scored using ActiLife 6 data analysis software (version 6.10.1; ActiGraph). Time in bed, given the unique inpatient setting, was calculated from sleep diary responses as the interval between reported sleep time and wake‐up time. These intervals were entered into the ActiLife 6 software, which calculated actigraphy TST and SE using a validated sleep‐scoring algorithm (Cole‐Kripke).
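Our analysis relied on ActiLife's validated Cole‐Kripke implementation. Schematically, algorithms of this family classify each epoch as sleep or wake from a weighted sum of activity counts in the surrounding epochs; the sketch below uses placeholder weights and a placeholder threshold (deliberately not the published Cole‐Kripke coefficients) purely to illustrate the structure:

```python
def score_epochs(counts, weights=(0.04, 0.04, 0.04, 0.2, 0.04, 0.04, 0.04),
                 threshold=1.0):
    """Classify each 1-minute epoch as sleep ('S') or wake ('W') from a
    weighted sum of activity counts in a 7-epoch window centered on the
    epoch. Weights and threshold here are illustrative placeholders, NOT
    the published Cole-Kripke coefficients (ActiLife implements those)."""
    half = len(weights) // 2
    scored = []
    for i in range(len(counts)):
        # Out-of-range epochs at the edges contribute zero activity.
        window = [counts[i + k - half] if 0 <= i + k - half < len(counts) else 0
                  for k in range(len(weights))]
        d = sum(w * a for w, a in zip(weights, window))
        scored.append("S" if d < threshold else "W")
    return scored

def tst_and_se(scored, epoch_minutes=1):
    """TST = sleep epochs x epoch length; SE = TST as a percent of time in bed."""
    tst = scored.count("S") * epoch_minutes
    se = 100 * tst / (len(scored) * epoch_minutes)
    return tst, se

# Example: 10 quiet minutes, a 5-minute burst of activity, 10 quiet minutes.
counts = [0] * 10 + [100] * 5 + [0] * 10
epochs = score_epochs(counts)
tst, se = tst_and_se(epochs)
```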

Statistical Analysis

Descriptive and inferential statistics were computed using Statistical Package for the Social Sciences version 22 (IBM, Armonk, NY). We computed means, proportions, and measures of dispersion for all study variables. To test differences in sleep diary and actigraphy outcomes between the intervention and control arms, we used linear mixed models with full maximum likelihood estimation to model each of the 7 continuous sleep outcomes. These statistical methods are appropriate to account for the nonindependence of continuous repeated observations within hospital patients.[19] For all outcomes, the unit of analysis was nightly observations nested within patient‐level characteristics. The use of full maximum likelihood estimation is a robust and preferred method for handling values missing at random in longitudinal datasets.[20]

To model repeated observations, mixed models included a term representing time in days. For each outcome, we specified unconditional growth models to examine the variability between and within patients by computing intraclass correlations and inspecting variance components. We used model fit indices (−2LL deviance, Akaike's information criterion, and Schwarz's Bayesian criterion) as appropriate to determine the best‐fitting model specifications in terms of random effects and covariance structure.[21, 22]

We tested the main effect of the intervention on sleep outcomes and the interactive effect of group (intervention vs control) by hospital day, to test whether there were group differences in slopes representing average change in sleep outcomes over hospital days. All models adjusted for age, body mass index, depression, and baseline sleep quality (PSQI) as time‐invariant covariates, and whether participants had taken a sleep medication the day before, as a time‐varying covariate. Adjustment for prehospitalization sleep quality was a matter of particular importance. We used the PSQI to control for sleep quality because it is both a well‐validated, multidimensional measure, and it includes prehospital use of sleep medications. In a series of sensitivity analyses, we also explored whether the dichotomous self‐reported measure of whether or not participants regularly took sleep medications prior to hospitalization, rather than the PSQI, would change our substantive findings. All covariates were centered at the grand‐mean following guidelines for appropriate interpretation of regression coefficients.[23]
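In our notation (the models themselves were fit in SPSS), the adjusted random‐intercept model for a given sleep outcome $Y_{ij}$ for patient $i$ on hospital day $j$ can be written as:

```latex
Y_{ij} = \beta_0 + \beta_1\,\mathrm{Day}_{ij} + \beta_2\,\mathrm{Group}_i
       + \beta_3\,(\mathrm{Day}_{ij} \times \mathrm{Group}_i)
       + \boldsymbol{\gamma}^{\top}\mathbf{X}_i + \beta_4\,\mathrm{Med}_{ij}
       + u_i + \varepsilon_{ij},
\qquad u_i \sim N(0,\tau^2),\; \varepsilon_{ij} \sim N(0,\sigma^2)
```

where $\mathbf{X}_i$ holds the grand‐mean‐centered time‐invariant covariates (age, BMI, depression, baseline PSQI), $\mathrm{Med}_{ij}$ indicates sleep medication use the prior night, $u_i$ is the patient‐level random intercept, $\beta_3$ is the group‐by‐day interaction of interest, and the intraclass correlation is $\tau^2/(\tau^2+\sigma^2)$.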

RESULTS

Of the 112 study patients, 48 were in the intervention unit and 64 in the control unit. Eighty‐five percent of study participants endorsed poor sleep prior to hospital admission on the PSQI sleep quality measure, which was similar in both groups (Table 1).

Participants completed 1 to 8 sleep diary entries (mean = 2.5, standard deviation = 1.1). Because only 6 participants completed 5 or more diaries, we capped the number of diaries included in the inferential analysis at 4 to avoid influential outliers identified by scatterplots. Fifty‐seven percent of participants had 1 night of valid actigraphy data (n = 64); 29% had 2 nights (n = 32); 8% had 3 or 4 nights; and 9 participants did not have any usable actigraphy data. The extent to which the intervention was accepted by patients in the intervention group was highly variable. Adherence to the 10 pm lights‐off, telephone‐off, and TV‐off policy was 87%, 67%, and 64% among intervention patients, respectively. Uptake of sleep menu items was also highly variable, and no single item was used by more than half of patients (acceptance rates ranged from 11% to 44%). Eye masks (44%) and ear plugs (32%) were the most commonly utilized items.

A greater proportion of patients in the control arm (33%) had been taking sleep medications prior to hospitalization compared to the intervention arm (15%; χ2 = 4.6, P < 0.05). However, hypnotic medication use in the hospital was similar across both groups (intervention unit patients: 25%; controls: 21%; P = 0.49).

Intraclass correlations for the 7 sleep outcomes ranged from 0.59 to 0.76 on sleep diary outcomes, and from 0.61 to 0.85 on actigraphy. Dependency of sleep measures within patients accounted for 59% to 85% of variance in sleep outcomes. The best‐fit mixed models included random intercepts only. The results of mixed models testing the main effect of intervention versus comparison arm on sleep outcome measures, adjusted for covariates, are presented in Table 2. Total sleep time was the only outcome that was significantly different between groups; the average total sleep time, calculated from sleep diary data, was longer in the intervention group by 49 minutes.

Differences in Subjective and Objective Sleep Outcome Measures From Linear Mixed Models
Intervention, n = 48 Control, n = 64 P Value
  • NOTE: All differences in sleep outcomes adjusted for age, BMI, baseline sleep quality (PSQI), depression (CES‐D), and whether a sleep medication was taken the previous night. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; PSQI, Pittsburgh Sleep Quality Index; SE, standard error.

Sleep diary outcomes
Sleep quality, mean (SE) 3.14 (0.16) 3.08 (0.13) 0.79
Refreshed sleep, mean (SE) 2.94 (0.17) 2.74 (0.14) 0.38
Negative impact of sleep disruptions, mean (SE) 4.39 (0.58) 4.81 (0.48) 0.58
Total sleep time, min, mean (SE) 422 (16.2) 373 (13.2) 0.02
Sleep efficiency, %, mean (SE) 83.5 (2.3) 82.1 (1.9) 0.65
Actigraphy outcomes
Total sleep time, min, mean (SE) 377 (16.8) 356 (13.2) 0.32
Sleep efficiency, %, mean (SE) 72.7 (2.2) 74.8 (1.8) 0.45

Table 3 lists slopes representing average change in sleep measures over hospital days in both groups. The P values represent z tests of interaction terms in mixed models, after adjustment for covariates, testing whether slopes significantly differed between groups. Of the 7 outcomes, 3 sleep diary measures had significant interaction terms. For ratings of sleep quality, refreshing sleep, and sleep disruptions, slopes in the control group were flat, whereas slopes in the intervention group demonstrated improvements in ratings of sleep quality and refreshed sleep, and a decrease in the impact of sleep disruptions over the course of subsequent nights in the hospital. Figure 1 illustrates a plot of the adjusted average slopes for the refreshed sleep score across hospital days in intervention and control groups.

Average Change in Sleep Outcomes Across Hospital Days for Patients in Intervention and Comparison Groups
Intervention, Slope (SE), n = 48 Control, Slope (SE), n = 64 P Value
  • NOTE: Mixed models were adjusted for age, BMI, baseline sleep quality (PSQI), baseline depression (CES‐D), and whether or not a sleep medication was taken the previous night.

  • Each slope represents the average change in sleep diary outcome from night to night in each condition. P values represent the Wald test of the interaction term. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; PSQI, Pittsburgh Sleep Quality Index; SE, standard error.

Refreshed sleep rating 0.55 (0.18) 0.03 (0.13) 0.006
Sleep quality rating 0.52 (0.16) 0.02 (0.11) 0.012
Negative impact of sleep interruptions −1.65 (0.48) 0.05 (0.32) 0.006
Total sleep time, diary 11.2 (18.1) 6.3 (13.0) 0.44
Total sleep time, actigraphy 7.3 (25.5) 1.0 (15.3) 0.83
Sleep efficiency, diary 1.1 (2.3) 1.5 (1.6) 0.89
Sleep efficiency, actigraphy 0.9 (4.0) 0.7 (2.4) 0.74
Figure 1. Plot of average changes in refreshed sleep over hospital days for intervention and control participants. *Slopes from linear mixed models are adjusted for age, BMI, depression score, prehospital sleep quality, and sleep medication taken the night before during hospitalization.

DISCUSSION

Poor sleep is common among hospitalized adults, both at home prior to the admission and especially when in the hospital. This pilot study demonstrated the feasibility of rolling out a sleep‐promoting intervention on a hospital's general medicine unit. Although participants on the intervention unit reported improved sleep quality and feeling more refreshed, this was not supported by actigraphy data (such as sleep time or sleep efficiency). Although care team engagement and implementation of unit‐wide interventions were high, patient use of individual components was imperfect. Of particular interest, however, the intervention group actually began to have improved sleep quality and fewer disruptions with subsequent nights sleeping in the hospital.

Our finding of a high prevalence of poor sleep among hospitalized patients is congruent with prior studies and supports the great need to screen for and address poor sleep within the hospital setting.[24, 25, 26] Attempts to promote sleep among hospitalized patients may be effective: in prior intervention studies, relaxation techniques improved sleep quality by almost 38%,[27] and ear plugs and eye masks showed some benefit in promoting sleep within the hospital.[28] Our multicomponent intervention, which attempted to minimize disruptions, led to improvement in sleep quality, more restorative sleep, and decreased reports of sleep disruptions, especially among patients with a longer length of stay. As suggested by Thomas et al.[29] and seen in our data, this temporal relationship, with improvement across subsequent nights, suggests there may be an adaptation to the new environment and that it may take time for a sleep intervention to work.

Hospitalized patients often fail to obtain much‐needed restorative sleep at the time when they are most vulnerable. Patients cite routine care as the primary cause of sleep disruption, and often recognize the ways in which the hospital environment interferes with their ability to sleep.[30, 31, 32] The sleep‐promoting interventions used in our study would be characterized by most as low effort[33] with potential for high yield, even though our patients realized only modest improvements in sleep outcomes.

Several limitations of this study should be considered. First, although we had hoped to collect substantial amounts of objective data, the average duration of actigraphy observation was less than 48 hours. This may have constrained the group‐by‐time interaction analysis with actigraphy data, as studies have shown increased accuracy in actigraphy measures with longer wear.[34] By contrast, the sleep diary surveys collected throughout hospitalization revealed significant improvements across consecutive daily measurements. Second, the proximity of the study units raised concern for study contamination, which could have reduced the differences in outcome measures that might otherwise have been observed; although the physicians work on both units, the nursing and support care teams are distinct and unit dependent. Finally, this was not a randomized trial. Patient assignment to the treatment arms was haphazard and occurred within the hospital's admitting strategy; allocation to either the intervention or the control group was based on bed availability at the time of admission. Although both groups were similar in most characteristics, more control participants than intervention participants reported taking sleep medications prior to admission. Fortunately, hypnotic use did not differ between groups during the admission, the period when sleep data were being captured.

Overall, this pilot study suggests that patients admitted to general medical ward fail to realize sufficient restorative sleep when they are in the hospital. Sleep disruption is rather frequent. This study demonstrates the opportunity for and feasibility of sleep‐promoting interventions where facilitating sleep is considered to be a top priority and vital component of the healthcare delivery. When trying to improve patients' sleep in the hospital, it may take several consecutive nights to realize a return on investment.

Acknowledgements

The authors acknowledge the Department of Nursing, Johns Hopkins Bayview Medical Center, and care teams of the Zieve Medicine Units, and the Center for Child and Community Health Research Biostatistics, Epidemiology and Data Management (BEAD) Core group.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Dr. Howell is the chief of the Division of Hospital Medicine at Johns Hopkins Bayview Medical Center and associate professor at Johns Hopkins School of Medicine. He served as the president of the Society of Hospital Medicine (SHM) in 2013 and currently serves as a board member. He is also a senior physician advisor for SHM. He is a coinvestigator grant recipient on an Agency for Healthcare Research and Quality grant on medication reconciliation funded through Baylor University. He was previously a coinvestigator grant recipient of Center for Medicare and Medicaid Innovations grant that ended in June 2015.

Files
References
  1. Institute of Medicine (US) Committee on Sleep Medicine and Research. Sleep disorders and sleep deprivation: an unmet public health problem. Washington, DC: National Academies Press; 2006. Available at: http://www.ncbi.nlm.nih.gov/books/NBK19960. Accessed September 16, 2014.
  2. Schoenborn CA, Adams PE. Health behaviors of adults: United States, 2005–2007. Vital Health Stat 10. 2010;245:1–132.
  3. Mallon L, Broman JE, Hetta J. High incidence of diabetes in men with sleep complaints or short sleep duration: a 12‐year follow‐up study of a middle‐aged population. Diabetes Care. 2005;28:2762–2767.
  4. Donat M, Brown C, Williams N, et al. Linking sleep duration and obesity among black and white US adults. Clin Pract (Lond). 2013;10(5):661–667.
  5. Cappuccio FP, Stranges S, Kandala NB, et al. Gender‐specific associations of short sleep duration with prevalent and incident hypertension: the Whitehall II Study. Hypertension. 2007;50:693–700.
  6. Rod NH, Kumari M, Lange T, Kivimäki M, Shipley M, Ferrie J. The joint effect of sleep duration and disturbed sleep on cause‐specific mortality: results from the Whitehall II cohort study. PLoS One. 2014;9(4):e91965.
  7. Martin JL, Fiorentino L, Jouldjian S, Mitchell M, Josephson KR, Alessi CA. Poor self‐reported sleep quality predicts mortality within one year of inpatient post‐acute rehabilitation among older adults. Sleep. 2011;34(12):1715–1721.
  8. Kahn‐Greene ET, Killgore DB, Kamimori GH, Balkin TJ, Killgore WD. The effects of sleep deprivation on symptoms of psychopathology in healthy adults. Sleep Med. 2007;8(3):215–221.
  9. Irwin MR, Wang M, Campomayor CO, Collado‐Hidalgo A, Cole S. Sleep deprivation and activation of morning levels of cellular and genomic markers of inflammation. Arch Intern Med. 2006;166:1756–1762.
  10. Knutson KL, Spiegel K, Penev P, Cauter E. The metabolic consequences of sleep deprivation. Sleep Med Rev. 2007;11(3):163–178.
  11. Isaia G, Corsinovi L, Bo M, et al. Insomnia among hospitalized elderly patients: prevalence, clinical characteristics and risk factors. Arch Gerontol Geriatr. 2011;52:133–137.
  12. Rocha FL, Hara C, Rodrigues CV, et al. Is insomnia a marker for psychiatric disorders in general hospitals? Sleep Med. 2005;6:549–553.
  13. Adachi M, Staisiunas PG, Knutson KL, Beveridge C, Meltzer DO, Arora VM. Perceived control and sleep in hospitalized older adults: a sound hypothesis? J Hosp Med. 2013;8:184–190.
  14. Buxton OM, Ellenbogen JM, Wang W, et al. Sleep disruption due to hospital noises: a prospective evaluation. Ann Intern Med. 2012;157:170–179.
  15. Redeker NS. Sleep in acute care settings: an integrative review. J Nurs Scholarsh. 2000;32(1):31–38.
  16. Buysse D. Physical health as it relates to insomnia. Talk presented at: Center for Behavior and Health, Lecture Series in Johns Hopkins Bayview Medical Center; July 17, 2012; Baltimore, MD.
  17. Buysse DJ, Reynolds CF, Monk TH, Berman SR, Kupfer DJ. The Pittsburgh Sleep Quality Index: a new instrument for psychiatric practice and research. Psychiatry Res. 1989;28:193–213.
  18. Smith MT, Wegener ST. Measures of sleep: The Insomnia Severity Index, Medical Outcomes Study (MOS) Sleep Scale, Pittsburgh Sleep Diary (PSD), and Pittsburgh Sleep Quality Index (PSQI). Arthritis Rheumatol. 2003;49:S184–S196.
  19. Brown H, Prescott R. Applied Mixed Models in Medicine. 3rd ed. Somerset, NJ: Wiley; 2014:539.
  20. Blackwell E, Leon CF, Miller GE. Applying mixed regression models to the analysis of repeated‐measures data in psychosomatic medicine. Psychosom Med. 2006;68(6):870–878.
  21. Peugh JL, Enders CK. Using the SPSS mixed procedure to fit cross‐sectional and longitudinal multilevel models. Educ Psychol Meas. 2005;65(5):717–741.
  22. McCoach DB, Black AC. Introduction to estimation issues in multilevel modeling. New Dir Inst Res. 2012;2012(154):23–39.
  23. Enders CK, Tofighi D. Centering predictor variables in cross‐sectional multilevel models: a new look at an old issue. Psychol Methods. 2007;12(2):121–138.
  24. Manian F, Manian C. Sleep quality in adult hospitalized patients with infection: an observational study. Am J Med Sci. 2015;349(1):56–60.
  25. Shear TC, Balachandran JS, Mokhlesi B, et al. Risk of sleep apnea in hospitalized older patients. J Clin Sleep Med. 2014;10:1061–1066.
  26. Edinger JD, Lipper S, Wheeler B. Hospital ward policy and patients' sleep patterns: a multiple baseline study. Rehabil Psychol. 1989;34(1):43–50.
  27. Tamrat R, Huynh‐Le MP, Goyal M. Non‐pharmacologic interventions to improve the sleep of hospitalized patients: a systematic review. J Gen Intern Med. 2014;29:788–795.
  28. Le Guen M, Nicolas‐Robin A, Lebard C, Arnulf I, Langeron O. Earplugs and eye masks vs routine care prevent sleep impairment in post‐anaesthesia care unit: a randomized study. Br J Anaesth. 2014;112(1):89–95.
  29. Thomas KP, Salas RE, Gamaldo C, et al. Sleep rounds: a multidisciplinary approach to optimize sleep quality and satisfaction in hospitalized patients. J Hosp Med. 2012;7:508–512.
  30. Bihari S, McEvoy RD, Kim S, Woodman RJ, Bersten AD. Factors affecting sleep quality of patients in intensive care unit. J Clin Sleep Med. 2012;8(3):301–307.
  31. Flaherty JH. Insomnia among hospitalized older persons. Clin Geriatr Med. 2008;24(1):51–67.
  32. McDowell JA, Mion LC, Lydon TJ, Inouye SK. A nonpharmacological sleep protocol for hospitalized older patients. J Am Geriatr Soc. 1998;46(6):700–705.
  33. The Action Priority Matrix: making the most of your opportunities. TimeAnalyzer website. Available at: http://www.timeanalyzer.com/lib/priority.htm. Published 2006. Accessed July 10, 2015.
  34. Marino M, Li Y, Rueschman MN, et al. Measuring sleep: accuracy, sensitivity, and specificity of wrist actigraphy compared to polysomnography. Sleep. 2013;36(11):1747–1755.
Journal of Hospital Medicine - 11(7):467-472

Approximately 70 million adults in the United States have sleep disorders,[1] and up to 30% of adults report sleeping less than 6 hours per night.[2] Poor sleep has been associated with undesirable health outcomes.[1] Suboptimal sleep duration and quality have been associated with a higher prevalence of chronic health conditions, including hypertension, type 2 diabetes, coronary artery disease, stroke, and obesity, as well as with increased overall mortality.[3, 4, 5, 6, 7]

Sleep plays an important role in the restoration of wellness. Poor sleep is associated with physiological disturbances that may impair healing.[8, 9, 10] In prior reports, the prevalence of insomnia was 36.7% among elderly hospitalized patients[11] and 50% among younger hospitalized patients.[12] Hospitalized patients frequently cite their acute illness, hospital‐related environmental factors, and disruptions that are part of routine care as causes of poor sleep during hospitalization.[13, 14, 15] Although poor sleep is pervasive among hospitalized patients, interventions that prioritize sleep optimization as part of routine care are uncommon. Few studies have examined the effect of sleep‐promoting measures on both sleep quality and sleep duration among patients hospitalized on general medicine units.

In this study, we aimed to assess the feasibility of incorporating sleep‐promoting interventions on a general medicine unit and to identify differences in sleep measures between intervention and control groups. The primary outcome that we hoped to influence, and specifically to lengthen, in the intervention group was sleep duration, measured both by sleep diary and by actigraphy. Secondary outcomes that we hypothesized would improve in the intervention group included feeling more refreshed in the morning, greater sleep efficiency, and fewer sleep disruptions. As this was a feasibility pilot, we also sought to explore how easily sleep‐promoting interventions could be incorporated into the care team's workflow.

METHODS

Study Design

A quasi‐experimental prospective pilot study was conducted at a single academic center, Johns Hopkins Bayview Medical Center. Participants were adult patients admitted to the general medicine ward from July 2013 through January 2014. Patients were excluded if they had dementia; were unable to complete survey questionnaires because of delirium, disability, or a language barrier; were actively withdrawing from alcohol or controlled substances; or had an acute psychiatric illness.

The medicine ward at our medical center comprises 2 structurally identical units that admit patients with similar diagnoses, disease severity, and case‐mix groups. Nursing and support staff are unit specific. With respect to the sleep environment, both units have semiprivate and private rooms, and visitors are encouraged to leave by 10 pm. Patients admitted from the emergency room to the medicine ward are assigned haphazardly to either unit based on bed availability. For the purposes of this study, we designated 1 unit as the control unit and the other as the sleep‐promoting intervention unit.

Study Procedure

Upon patients' arrival to the medicine unit, the research team approached all who met eligibility criteria. Patients were given full disclosure of the study in accordance with institutional research guidelines, and those interested in participating provided informed consent. Participants were not explicitly told of their group assignment. This study was approved by the Johns Hopkins Institutional Review Board for human subjects research.

In this study, control group participants received the standard of care with respect to sleep promotion; no sleep‐promoting measures were added to routine medical care, medication administration, nursing care, or overnight monitoring. Patients who used sleep medications at home prior to admission had those medicines continued only if they requested them and the medicines were not contraindicated by their acute illness. Participants on the intervention unit were exposed to a nurse‐delivered sleep‐promoting protocol aimed at transforming the culture of care so that helping patients sleep soundly became a top priority. Environmental changes included unit‐wide efforts to minimize light and noise disturbances by dimming hallway lights, turning off room lights, and encouraging care teams to be as quiet as possible. Other strategies focused largely on minimizing care‐related disruptions; these included, when appropriate, administering nighttime medications in the early evening, minimizing fluids overnight, and closing patient room doors. Further, patients were offered a choice of the following sleep‐promoting items: ear plugs, eye masks, warm blankets, and relaxation music. The final component of the intervention was a 30‐minute sleep hygiene education session taught by a physician, which highlighted basic sleep physiology and healthy sleep behaviors adapted from Buysse.[16] Patients learned the value of behaviors such as reducing time lying awake in bed, setting a standard wake‐up time and bedtime, and going to bed only when sleepy. This behavioral education was supplemented by a handout with sleep‐promoting suggestions.

The care team on the intervention unit received comprehensive study‐focused training in which the night nursing teams were familiarized with the sleep‐promoting protocol through in‐service sessions facilitated by 1 of the authors (E.W.G.). To further promote implementation, sleep‐promoting procedures were supported and encouraged by supervising nurses, who reminded the intervention unit's night care team of the study's goals daily during the evening huddles held at the beginning of each shift. To assess adherence to the sleep protocol, the nursing staff completed a daily checklist of the protocol elements that were employed.

Data Collection and Measures

Baseline Measures

At the time of enrollment, study patients' demographic information, including use of chronic sleep medication prior to admission, was collected. Participants were assessed for baseline sleep disturbance prior to admission using standardized, validated sleep assessment tools: the Pittsburgh Sleep Quality Index (PSQI), the Insomnia Severity Index (ISI), and the Epworth Sleepiness Scale (ESS). The PSQI, a 19‐item tool, assessed self‐rated sleep quality over the prior month; a score of 5 or greater indicated poor sleep.[17] The ISI, a 7‐item tool, identified the presence, severity, and impact of insomnia; a score of 10 or greater indicated insomnia.[18] The ESS, an 8‐item self‐rated tool, evaluated the impact of perceived sleepiness on daily functioning in 8 different situations; a score of 9 or greater indicated a burden of sleepiness. Participants were also screened for obstructive sleep apnea (using the Berlin Sleep Apnea Index) and clinical depression (using the Center for Epidemiologic Studies‐Depression 10‐point scale), as these conditions affect sleep patterns. These data are shown in Table 1.
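The baseline screens above reduce to simple threshold rules. As a minimal illustration (the function name and example inputs are ours, not part of the study), the published cutoffs can be expressed as:

```python
def flag_baseline_sleep_problems(psqi: int, isi: int, ess: int) -> dict:
    """Apply the cutoffs described above: PSQI >= 5 indicates poor sleep,
    ISI >= 10 indicates insomnia, ESS >= 9 indicates a sleepiness burden."""
    return {
        "poor_sleep_quality": psqi >= 5,
        "insomnia": isi >= 10,
        "sleepiness_burden": ess >= 9,
    }

# Rounded baseline means from Table 1 (intervention arm) as sample inputs:
flags = flag_baseline_sleep_problems(psqi=10, isi=12, ess=7)
# flags poor sleep quality and insomnia, but not a sleepiness burden
```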

Characteristics of Study Participants (n = 112)
Intervention, n = 48 Control, n = 64 P Value
  • NOTE: The entry for number of sleep diaries per participant in intervention and control groups is presented after capping at 4 diaries. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; ESS, Epworth Sleepiness Scale; ISI, Insomnia Severity Index; PSQI, Pittsburgh Sleep Quality Index; SD, standard deviation.

Age, y, mean (SD) 58.2 (16) 56.9 (17) 0.69
Female, n (%) 26 (54.2) 36 (56.3) 0.83
Race, n (%)
Caucasian 33 (68.8) 46 (71.9) 0.92
African American 13 (27.1) 16 (25.0)
Other 2 (4.2) 2 (3.1)
BMI, mean (SD) 32.1 (9.2) 31.8 (9.3) 0.85
Admitting service, n (%)
Teaching 21 (43.8) 18 (28.1) 0.09
Nonteaching 27 (56.3) 46 (71.9)
Sleep medication prior to admission, n (%) 7 (14.9) 21 (32.8) 0.03
Length of stay, d, mean (SD) 4.9 (3) 5.8 (3.9) 0.19
Number of sleep diaries per participant, mean (SD) 2.2 (0.8) 2.6 (0.9) 0.02
Proportion of hospital days with sleep diaries per participant, (SD) 0.6 (0.2) 0.5 (0.2) 0.71
Number of nights with actigraphy per participant, mean (SD) 1.2 (0.7) 1.4 (0.8) 0.16
Proportion of hospital nights with actigraphy per participant (SD) 0.3 (0.2) 0.3 (0.1) 0.91
Baseline sleep measures
PSQI, mean (SD) 9.9 (4.6) 9.1 (4.5) 0.39
ESS, mean (SD) 7.4 (4.2) 7.7 (4.8) 0.79
ISI, mean (SD) 11.9 (7.6) 10.8 (7.4) 0.44
CESD‐10, mean (SD) 12.2 (7.2) 12.8 (7.6) 0.69
Berlin Sleep Apnea, mean (SD) 0.63 (0.5) 0.61 (0.5) 0.87

Sleep Diary Measures

A sleep diary completed each morning assessed the outcome measures: perceived sleep quality, how refreshing sleep was, and sleep duration. The diary employed a 5‐point Likert scale ranging from poor (1) to excellent (5). Perceived sleep duration was calculated from patients' reported time in bed, time to fall asleep, wake time, and the number and duration of awakenings after sleep onset. These data were used to compute total sleep time (TST) and sleep efficiency (SE). The sleep diary also captured other pertinent sleep‐related measures, including use of sleep medication the night prior and specific sleep disruptions from the prior night. To measure the impact of disruptions from the prior night, we created a summed scale score of 4 items that negatively interfered with sleep (light, temperature, noise, and interruptions; 5‐point scales from 1 = not at all to 5 = significant). Principal axis factor analysis with varimax rotation yielded 1 disruption factor accounting for 55% of the variance, and Cronbach's α was 0.73.
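The diary-derived TST and SE calculations follow the standard definitions (TST = time in bed minus sleep latency minus awakenings; SE = TST as a percentage of time in bed). A minimal sketch, with a hypothetical helper name and example times of our own choosing:

```python
from datetime import datetime, timedelta

def diary_sleep_measures(bed_time, sleep_latency_min, wake_time,
                         awakenings_min):
    """Compute total sleep time (TST, minutes) and sleep efficiency
    (SE, %) from the diary fields described above."""
    fmt = "%H:%M"
    bed = datetime.strptime(bed_time, fmt)
    wake = datetime.strptime(wake_time, fmt)
    if wake <= bed:  # sleep period crosses midnight
        wake += timedelta(days=1)
    time_in_bed_min = (wake - bed).total_seconds() / 60
    tst = time_in_bed_min - sleep_latency_min - awakenings_min
    se = 100 * tst / time_in_bed_min
    return tst, se

# Example: in bed at 22:30, 20 min to fall asleep, awake at 06:30,
# with 40 min of awakenings after sleep onset
tst, se = diary_sleep_measures("22:30", 20, "06:30", 40)
# 480 min in bed, TST = 420 min, SE = 87.5%
```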

Actigraphy Measures

Actigraphy outcomes of sleep were recorded using a wrist‐worn actigraphy monitor (ActiSleep Plus GT3X+; ActiGraph, Pensacola, FL). Participants wore the monitor from the day of enrollment throughout the hospital stay or until transfer out of the unit. Objective data were analyzed and scored using ActiLife 6 data analysis software (version 6.10.1; ActiGraph). Given the unique inpatient setting, time in bed was calculated from sleep diary responses as the interval between reported sleep time and reported wake‐up time. These intervals were entered into the ActiLife 6 software, which calculated actigraphy TST and SE using a validated scoring algorithm (Cole‐Kripke).

Statistical Analysis

Descriptive and inferential statistics were computed using the Statistical Package for the Social Sciences version 22 (IBM, Armonk, NY). We computed means, proportions, and measures of dispersion for all study variables. To test differences in sleep diary and actigraphy outcomes between the intervention and control arms, we used linear mixed models with full maximum likelihood estimation to model each of the 7 continuous sleep outcomes. These methods are appropriate for the nonindependence of continuous repeated observations within hospital patients.[19] For all outcomes, the unit of analysis was nightly observations nested within patient‐level characteristics. Full maximum likelihood estimation is a robust and preferred method for handling values missing at random in longitudinal datasets.[20]

To model repeated observations, the mixed models included a term representing time in days. For each outcome, we specified unconditional growth models to examine the variability between and within patients by computing intraclass correlations and inspecting variance components. We used model fit indices (−2LL deviance, Akaike's information criterion, and Schwarz's Bayesian criterion) as appropriate to determine the best‐fitting model specifications in terms of random effects and covariance structure.[21, 22]

We tested the main effect of the intervention on sleep outcomes and the interactive effect of group (intervention vs control) by hospital day, to determine whether the slopes representing average change in sleep outcomes over hospital days differed between groups. All models adjusted for age, body mass index, depression, and baseline sleep quality (PSQI) as time‐invariant covariates, and for whether participants had taken a sleep medication the previous night as a time‐varying covariate. Adjustment for prehospitalization sleep quality was particularly important. We used the PSQI to control for baseline sleep quality because it is a well‐validated, multidimensional measure that includes prehospital use of sleep medications. In a series of sensitivity analyses, we also explored whether substituting the dichotomous self‐reported measure of whether participants regularly took sleep medications prior to hospitalization for the PSQI would change our substantive findings. All covariates were centered at the grand mean, following guidelines for appropriate interpretation of regression coefficients.[23]
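To make the modeling approach concrete: the study fit its models in SPSS, but a random‐intercept linear mixed model with a group‐by‐day interaction and grand‐mean‐centered covariates can be sketched in Python's statsmodels. Everything here, including the simulated data and variable names, is illustrative, not the study's data or exact specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated nightly diary data: 40 patients x 4 nights (illustrative only)
rng = np.random.default_rng(0)
n_pts, n_nights = 40, 4
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_pts), n_nights),
    "day": np.tile(np.arange(n_nights), n_pts),
    "group": np.repeat(rng.integers(0, 2, n_pts), n_nights),  # 0=control, 1=intervention
    "age": np.repeat(rng.normal(58, 16, n_pts), n_pts // n_pts * n_nights),
})
pt_effect = np.repeat(rng.normal(0, 1, n_pts), n_nights)  # random intercepts
df["sleep_quality"] = (3 + 0.4 * df["group"] * df["day"]
                       + pt_effect + rng.normal(0, 0.5, len(df)))
df["age_c"] = df["age"] - df["age"].mean()  # grand-mean centering

# Random-intercept model with a group x day interaction,
# fit by full maximum likelihood (reml=False)
model = smf.mixedlm("sleep_quality ~ group * day + age_c",
                    df, groups=df["patient"])
fit = model.fit(reml=False)
print(fit.params["group:day"])  # difference in slopes between arms
```

The `group:day` coefficient is the quantity reported in Table 3: how much faster the intervention arm's nightly ratings change than the control arm's.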

RESULTS

Of the 112 study patients, 48 were on the intervention unit and 64 on the control unit. Eighty‐five percent of study participants endorsed poor sleep prior to hospital admission on the PSQI, with similar proportions in both groups (Table 1).

Participants completed 1 to 8 sleep diary entries (mean = 2.5, standard deviation = 1.1). Because only 6 participants completed 5 or more diaries, we capped the number of diaries included in the inferential analysis at 4 to avoid influential outliers identified on scatterplots. Fifty‐seven percent of participants had 1 night of valid actigraphy data (n = 64); 29% had 2 nights (n = 32); 8% had 3 or 4 nights; and 9 participants had no usable actigraphy data. The extent to which the intervention was accepted by patients in the intervention group varied widely. Adherence to the unit‐wide 10 pm lights‐off, telephone‐off, and TV‐off policies was 87%, 67%, and 64% of intervention patients, respectively. Uptake of sleep menu items was also highly variable, and no single item was used by more than half of patients (acceptance rates ranged from 11% to 44%). Eye masks (44%) and ear plugs (32%) were the most commonly used items.

A greater proportion of patients in the control arm (33%) had been taking sleep medications prior to hospitalization compared to the intervention arm (15%; χ2 = 4.6, P < 0.05). However, hypnotic medication use in the hospital was similar across both groups (intervention unit patients: 25%, controls: 21%; P = 0.49).

Intraclass correlations for the 7 sleep outcomes ranged from 0.59 to 0.76 for sleep diary outcomes and from 0.61 to 0.85 for actigraphy. Dependency of sleep measures within patients accounted for 59% to 85% of the variance in sleep outcomes. The best‐fitting mixed models included random intercepts only. The results of the mixed models testing the main effect of intervention versus control arm on sleep outcome measures, adjusted for covariates, are presented in Table 2. Total sleep time was the only outcome that differed significantly between groups; average total sleep time, calculated from sleep diary data, was longer in the intervention group by 49 minutes.
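For a random‐intercept model, the intraclass correlations reported here come directly from the model's variance components: ICC = σ²(between patients) / (σ²(between) + σ²(within)). A one‐line sketch, with made‐up variance values for illustration:

```python
def intraclass_correlation(var_between: float, var_within: float) -> float:
    """ICC for a random-intercept model: the share of total variance
    attributable to stable differences between patients."""
    return var_between / (var_between + var_within)

# e.g., between-patient variance 1.5 and night-to-night (residual)
# variance 0.5 give an ICC of 0.75, within the 0.59-0.85 range above
icc = intraclass_correlation(1.5, 0.5)
```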

Differences in Subjective and Objective Sleep Outcome Measures From Linear Mixed Models
Intervention, n = 48 Control, n = 64 P Value
  • NOTE: All differences in sleep outcomes adjusted for age, BMI, baseline sleep quality (PSQI), depression (CES‐D), and whether a sleep medication was taken the previous night. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; PSQI, Pittsburgh Sleep Quality Index; SE, standard error.

Sleep diary outcomes
Sleep quality, mean (SE) 3.14 (0.16) 3.08 (0.13) 0.79
Refreshed sleep, mean (SE) 2.94 (0.17) 2.74 (0.14) 0.38
Negative impact of sleep disruptions, mean (SE) 4.39 (0.58) 4.81 (0.48) 0.58
Total sleep time, min, mean (SE) 422 (16.2) 373 (13.2) 0.02
Sleep efficiency, %, mean (SE) 83.5 (2.3) 82.1 (1.9) 0.65
Actigraphy outcomes
Total sleep time, min, mean (SE) 377 (16.8) 356 (13.2) 0.32
Sleep efficiency, %, mean (SE) 72.7 (2.2) 74.8 (1.8) 0.45

Table 3 lists the slopes representing average change in sleep measures over hospital days in both groups. The P values represent z tests of the interaction terms in the mixed models, after adjustment for covariates, testing whether the slopes differed significantly between groups. Of the 7 outcomes, 3 sleep diary measures had significant interaction terms. For ratings of sleep quality, refreshing sleep, and sleep disruptions, slopes in the control group were flat, whereas slopes in the intervention group demonstrated improvements in ratings of sleep quality and refreshed sleep and a decrease in the impact of sleep disruptions over successive nights in the hospital. Figure 1 plots the adjusted average slopes for the refreshed sleep score across hospital days in the intervention and control groups.

Average Change in Sleep Outcomes Across Hospital Days for Patients in Intervention and Comparison Groups
Intervention, Slope (SE), n = 48 Control, Slope (SE), n = 64 P Value
  • NOTE: Mixed models were adjusted for age, BMI, baseline sleep quality (PSQI), baseline depression (CES‐D), and whether or not a sleep medication was taken the previous night.

  • Each slope represents the average change in sleep diary outcome from night to night in each condition. P values represent the Wald test of the interaction term. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; PSQI, Pittsburgh Sleep Quality Index; SE, standard error.

Refreshed sleep rating 0.55 (0.18) 0.03 (0.13) 0.006
Sleep quality rating 0.52 (0.16) 0.02 (0.11) 0.012
Negative impact of sleep interruptions −1.65 (0.48) −0.05 (0.32) 0.006
Total sleep time, diary 11.2 (18.1) 6.3 (13.0) 0.44
Total sleep time, actigraphy 7.3 (25.5) 1.0 (15.3) 0.83
Sleep efficiency, diary 1.1 (2.3) 1.5 (1.6) 0.89
Sleep efficiency, actigraphy 0.9 (4.0) 0.7 (2.4) 0.74
jhm2578-fig-0001-m.png
Plot of average changes in refreshed sleep over hospital days for intervention and control participants. *Slopes from linear mixed models are adjusted for age, BMI, depression score, prehospital sleep quality, and whether a sleep medication was taken the night before during hospitalization.

DISCUSSION

Poor sleep is common among hospitalized adults, both at home before admission and especially in the hospital. This pilot study demonstrated the feasibility of rolling out a sleep‐promoting intervention on a hospital's general medicine unit. Although participants on the intervention unit reported improved sleep quality and feeling more refreshed, this was not corroborated by actigraphy data (such as sleep time or sleep efficiency). Although care team engagement and implementation of the unit‐wide interventions were high, patients' use of individual components was imperfect. Of particular interest, the intervention group began to experience improved sleep quality and fewer disruptions with successive nights of sleeping in the hospital.

Our finding of a high prevalence of poor sleep among hospitalized patients is congruent with prior studies and supports the great need to screen for and address poor sleep in the hospital setting.[24, 25, 26] Attempts to promote sleep among hospitalized patients may be effective. Prior sleep‐promoting intervention studies demonstrated that relaxation techniques improved sleep quality by almost 38%,[27] and that ear plugs and eye masks showed some benefit in promoting sleep in the hospital.[28] Our study's multicomponent intervention, which attempted to minimize disruptions, led to improved sleep quality, more restorative sleep, and fewer reported sleep disruptions, especially among patients with a longer length of stay. As suggested by Thomas et al.[29] and seen in our data, this temporal relationship, with improvement across successive nights, suggests that patients may adapt to the new environment and that it may take time for a sleep intervention to work.

Hospitalized patients often fail to obtain much‐needed restorative sleep at the time when they are most vulnerable. Patients cite routine care as the primary cause of sleep disruption and often recognize the ways in which the hospital environment interferes with their ability to sleep.[30, 31, 32] The sleep‐promoting interventions used in our study would be characterized by most as low effort[33] with potential for high yield, even though our patients realized only modest improvements in sleep outcomes.

Several limitations of this study should be considered. First, although we had hoped to collect substantial amounts of objective data, the average duration of actigraphy observation was less than 48 hours. This may have constrained the group‐by‐time interaction analysis of the actigraphy data, as studies have shown increased accuracy of actigraphy measures with longer wear.[34] By contrast, the sleep diary surveys collected throughout hospitalization revealed significant improvements across consecutive daily measurements. Second, the proximity of the study units raised concern about contamination, which could have attenuated the differences observed in the outcome measures. Although physicians work on both units, the nursing and support care teams are distinct and unit dependent. Finally, this was not a randomized trial: patient assignment to the treatment arms was haphazard, occurring within the hospital's admitting strategy, with allocation to the intervention or control group based on bed availability at the time of admission. Although the groups were similar in most characteristics, more control participants than intervention participants reported taking sleep medications prior to admission. Fortunately, hypnotic use did not differ between groups during the admission, the period when sleep data were captured.

Overall, this pilot study suggests that patients admitted to a general medicine ward fail to obtain sufficient restorative sleep while in the hospital, and sleep disruption is frequent. The study demonstrates the opportunity for and feasibility of sleep‐promoting interventions in which facilitating sleep is treated as a top priority and a vital component of healthcare delivery. When trying to improve patients' sleep in the hospital, it may take several consecutive nights to realize a return on investment.

Acknowledgements

The authors acknowledge the Department of Nursing, Johns Hopkins Bayview Medical Center, and care teams of the Zieve Medicine Units, and the Center for Child and Community Health Research Biostatistics, Epidemiology and Data Management (BEAD) Core group.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Dr. Howell is the chief of the Division of Hospital Medicine at Johns Hopkins Bayview Medical Center and associate professor at Johns Hopkins School of Medicine. He served as the president of the Society of Hospital Medicine (SHM) in 2013 and currently serves as a board member. He is also a senior physician advisor for SHM. He is a coinvestigator grant recipient on an Agency for Healthcare Research and Quality grant on medication reconciliation funded through Baylor University. He was previously a coinvestigator grant recipient of Center for Medicare and Medicaid Innovations grant that ended in June 2015.

Approximately 70 million adults within the United States have sleep disorders,[1] and up to 30% of adults report sleeping less than 6 hours per night.[2] Poor sleep has been associated with undesirable health outcomes.[1] Suboptimal sleep duration and sleep quality has been associated with a higher prevalence of chronic health conditions including hypertension, type 2 diabetes, coronary artery disease, stroke, and obesity, as well as increased overall mortality.[3, 4, 5, 6, 7]

Sleep plays an important role in restoration of wellness. Poor sleep is associated with physiological disturbances that may result in poor healing.[8, 9, 10] In the literature, prevalence of insomnia among elderly hospitalized patients was 36.7%,[11] whereas in younger hospitalized patients it was 50%.[12] Hospitalized patients frequently cite their acute illness, hospital‐related environmental factors, and disruptions that are part of routine care as causes for poor sleep during hospitalization.[13, 14, 15] Although the pervasiveness of poor sleep among hospitalized patients is high, interventions that prioritize sleep optimization as routine care, are uncommon. Few studies have reviewed the effect of sleep‐promoting measures on both sleep quality and sleep duration among patients hospitalized on general medicine units.

In this study, we aimed to assess the feasibility of incorporating sleep-promoting interventions on a general medicine unit. We sought to identify differences in sleep measures between intervention and control groups. The primary outcome, which we hoped to lengthen in the intervention group, was sleep duration; it was measured both by sleep diary and with actigraphy. Secondary outcomes that we hypothesized would improve in the intervention group included feeling more refreshed in the morning, greater sleep efficiency, and fewer sleep disruptions. As a feasibility pilot, we also wanted to explore the ease or difficulty with which sleep-promoting interventions could be incorporated into the team's workflow.

METHODS

Study Design

A quasi-experimental prospective pilot study was conducted at a single academic center, the Johns Hopkins Bayview Medical Center. Participants included adult patients admitted to the general medicine ward from July 2013 through January 2014. Patients with dementia; inability to complete survey questionnaires due to delirium, disability, or a language barrier; active withdrawal from alcohol or controlled substances; or acute psychiatric illness were excluded from the study.

The medicine ward at our medical center comprises 2 structurally identical units that admit patients with similar diagnoses, disease severity, and case-mix disease groups. Nursing and support staff are unit specific. With respect to the sleep environment, both units have semiprivate and private rooms, and visitors are encouraged to leave by 10 pm. Patients admitted from the emergency room to the medicine ward are assigned haphazardly to either unit based on bed availability. For the purposes of this study, we designated 1 unit as the control unit and the other as the sleep-promoting intervention unit.

Study Procedure

Upon patients' arrival to the medicine unit, the research team approached all who met study eligibility criteria. Patients were provided full disclosure of the study per institutional research guidelines, and those interested in participating gave consent. Participants were not explicitly told their group assignment. This study was approved by the Johns Hopkins Institutional Review Board for human subjects research.

Control group participants received the standard of care as it pertains to sleep promotion; no additional sleep-promoting measures were added to routine medical care, medication administration, nursing care, or overnight monitoring. Patients who used sleep medications at home prior to admission had those medicines continued only if they requested them and the medications were not contraindicated by their acute illness. Participants on the intervention unit were exposed to a nurse-delivered sleep-promoting protocol aimed at transforming the culture of care such that helping patients to sleep soundly was made a top priority. Environmental changes included unit-wide efforts to minimize light and noise disturbances by dimming hallway lights, turning off room lights, and encouraging care teams to be as quiet as possible. Other strategies focused largely on minimizing care-related disruptions; these included, when appropriate, administering nighttime medications in the early evening, minimizing fluids overnight, and closing patient room doors. Further, patients were offered their choice of the following sleep-promoting items: ear plugs, eye masks, warm blankets, and relaxation music. The final component of the intervention was a 30-minute sleep hygiene education session taught by a physician, which highlighted basic sleep physiology and healthy sleep behaviors adapted from Buysse.[16] Patients learned the role of behaviors such as reducing time spent lying awake in bed, setting standard wake-up and sleep times, and going to bed only when sleepy. This behavioral education was supplemented by a handout with sleep-promoting suggestions.

The care team on the intervention unit received comprehensive study-focused training in which night nursing teams were familiarized with the sleep-promoting protocol through in-service sessions facilitated by 1 of the authors (E.W.G.). To further promote implementation, the sleep-promoting procedures were supported and encouraged by supervising nurses, who reminded the intervention unit's night care team of the study's goals daily during evening huddles held at the beginning of each shift. To assess adherence to the sleep protocol, the nursing staff completed a daily checklist of the protocol elements that were employed.

Data Collection and Measures

Baseline Measures

At the time of enrollment, study patients' demographic information, including use of chronic sleep medication prior to admission, was collected. Participants were assessed for baseline sleep disturbance prior to admission using standardized, validated sleep assessment tools: the Pittsburgh Sleep Quality Index (PSQI), the Insomnia Severity Index (ISI), and the Epworth Sleepiness Scale (ESS). The PSQI, a 19-item tool, assessed self-rated sleep quality over the prior month; a score of 5 or greater indicated poor sleep.[17] The ISI, a 7-item tool, identified the presence, rated the severity, and described the impact of insomnia; a score of 10 or greater indicated insomnia.[18] The ESS, an 8-item self-rated tool, evaluated the impact of perceived sleepiness on daily functioning in 8 different settings; a score of 9 or greater indicated a significant burden of sleepiness. Participants were also screened for obstructive sleep apnea (using the Berlin Sleep Apnea Index) and clinical depression (using the Center for Epidemiologic Studies-Depression 10-point scale), as these conditions affect sleep patterns. These data are shown in Table 1.
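As a concrete illustration, the screening cutoffs described above can be expressed as a small helper function. This is an illustrative sketch, not part of the study's actual data pipeline; the function and field names are our own.

```python
def screen_baseline(psqi: int, isi: int, ess: int) -> dict:
    """Apply the baseline screening cutoffs reported in the study.

    PSQI >= 5  -> poor sleep quality over the prior month
    ISI  >= 10 -> clinically significant insomnia
    ESS  >= 9  -> significant burden of daytime sleepiness
    """
    return {
        "poor_sleep": psqi >= 5,
        "insomnia": isi >= 10,
        "sleepy": ess >= 9,
    }

# Example: a patient scoring PSQI 9, ISI 12, ESS 7 screens positive for
# poor sleep and insomnia but not for excessive daytime sleepiness.
flags = screen_baseline(psqi=9, isi=12, ess=7)
```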

Characteristics of Study Participants (n = 112)
Intervention, n = 48 Control, n = 64 P Value
  • NOTE: The entry for number of sleep diaries per participant in intervention and control groups is presented after capping at 4 diaries. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; ESS, Epworth Sleepiness Scale; ISI, Insomnia Severity Index; PSQI, Pittsburgh Sleep Quality Index; SD, standard deviation.

Age, y, mean (SD) 58.2 (16) 56.9 (17) 0.69
Female, n (%) 26 (54.2) 36 (56.3) 0.83
Race, n (%)
Caucasian 33 (68.8) 46 (71.9) 0.92
African American 13 (27.1) 16 (25.0)
Other 2 (4.2) 2 (3.1)
BMI, mean (SD) 32.1 (9.2) 31.8 (9.3) 0.85
Admitting service, n (%)
Teaching 21 (43.8) 18 (28.1) 0.09
Nonteaching 27 (56.3) 46 (71.9)
Sleep medication prior to admission, n (%) 7 (14.9) 21 (32.8) 0.03
Length of stay, d, mean (SD) 4.9 (3) 5.8 (3.9) 0.19
Number of sleep diaries per participant, mean (SD) 2.2 (0.8) 2.6 (0.9) 0.02
Proportion of hospital days with sleep diaries per participant (SD) 0.6 (0.2) 0.5 (0.2) 0.71
Number of nights with actigraphy per participant, mean (SD) 1.2 (0.7) 1.4 (0.8) 0.16
Proportion of hospital nights with actigraphy per participant (SD) 0.3 (0.2) 0.3 (0.1) 0.91
Baseline sleep measures
PSQI, mean (SD) 9.9 (4.6) 9.1 (4.5) 0.39
ESS, mean (SD) 7.4 (4.2) 7.7 (4.8) 0.79
ISI, mean (SD) 11.9 (7.6) 10.8 (7.4) 0.44
CESD‐10, mean (SD) 12.2 (7.2) 12.8 (7.6) 0.69
Berlin Sleep Apnea, mean (SD) 0.63 (0.5) 0.61 (0.5) 0.87

Sleep Diary Measures

A sleep diary completed each morning assessed the outcome measures: perceived sleep quality, how refreshing sleep was, and sleep duration. The diary employed a 5-point Likert rating scale ranging from poor (1) to excellent (5). Perceived sleep duration was calculated from patients' reported time in bed, time to fall asleep, wake time, and the number and duration of awakenings after sleep onset. These data were used to compute total sleep time (TST) and sleep efficiency (SE). The sleep diary also captured other pertinent sleep-related measures, including use of sleep medication the night prior and specific sleep disruptions from the prior night. To measure the impact of disruptions from the prior night, we created a summed scale score of 4 items that negatively interfered with sleep (light, temperature, noise, and interruptions; 5-point scales from 1 = not at all to 5 = significant). Principal axis factor analysis with varimax rotation yielded 1 disruption factor accounting for 55% of the variance, and Cronbach's α was 0.73.
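The diary-based calculations above can be sketched as follows. This is an illustrative reconstruction of the arithmetic only (TST as time in bed minus sleep-onset latency and awakenings, SE as the percentage of time in bed spent asleep, and Cronbach's alpha for a summed item scale); the function names and example values are our own, not the study's code.

```python
from datetime import datetime, timedelta

def diary_tst_se(bed_time, wake_time, minutes_to_fall_asleep, waso_minutes):
    """Compute total sleep time (TST, minutes) and sleep efficiency (SE, %)
    from one night's diary entry. Times are "HH:MM" strings; a wake time
    earlier than bed time is assumed to be the next morning."""
    bed = datetime.strptime(bed_time, "%H:%M")
    wake = datetime.strptime(wake_time, "%H:%M")
    if wake <= bed:
        wake += timedelta(days=1)
    time_in_bed = (wake - bed).total_seconds() / 60
    tst = time_in_bed - minutes_to_fall_asleep - waso_minutes
    se = 100 * tst / time_in_bed
    return tst, se

def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of scores per item."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# A patient in bed 22:30-06:30 who took 20 minutes to fall asleep and was
# awake 40 minutes overnight slept 420 of 480 minutes (SE = 87.5%).
tst, se = diary_tst_se("22:30", "06:30", 20, 40)
```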

Actigraphy Measures

Actigraphy outcomes of sleep were recorded using a wrist-worn actigraph (ActiSleep Plus [GT3X+]; ActiGraph, Pensacola, FL). Participants wore the monitor from the day of enrollment throughout the hospital stay or until transfer out of the unit. Objective data were analyzed and scored using ActiLife 6 data analysis software (version 6.10.1; ActiGraph). Given the unique inpatient setting, time in bed was calculated from sleep diary responses as the interval between reported sleep time and wake-up time. These intervals were entered into the ActiLife 6 software, which used a validated algorithm, Cole-Kripke, to calculate actigraphy TST and SE.
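Conceptually, Cole-Kripke-style scoring classifies each 1-minute epoch as sleep or wake from a weighted sum of activity counts in a window around it. The sketch below illustrates only that structure; the weights, scale, and threshold are placeholder values, not the published Cole-Kripke coefficients, which ActiLife applies internally.

```python
def score_epochs(activity, weights=(0.15, 0.15, 0.15, 0.08, 0.21, 0.12, 0.13),
                 scale=0.001, threshold=1.0):
    """Epoch-by-epoch sleep/wake scoring in the style of Cole-Kripke:
    each 1-minute epoch is scored from a weighted sum of activity counts
    in a 7-epoch window centered on it, and a score below `threshold` is
    classified as sleep ("S"), otherwise wake ("W"). The weights and
    scale here are illustrative placeholders, NOT the published
    Cole-Kripke coefficients."""
    half = len(weights) // 2
    padded = [0] * half + list(activity) + [0] * half  # zero-pad the edges
    scored = []
    for i in range(len(activity)):
        window = padded[i:i + len(weights)]
        d = scale * sum(w * a for w, a in zip(weights, window))
        scored.append("S" if d < threshold else "W")
    return scored

# Epochs with low activity counts score as sleep ("S"); a large burst of
# movement pushes the centered epoch's score over threshold ("W").
```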

Statistical Analysis

Descriptive and inferential statistics were computed using the Statistical Package for the Social Sciences, version 22 (IBM, Armonk, NY). We computed means, proportions, and measures of dispersion for all study variables. To test differences in sleep diary and actigraphy outcomes between the intervention and control arms, we used linear mixed models with full maximum likelihood estimation to model each of the 7 continuous sleep outcomes. These statistical methods are appropriate for accounting for the nonindependence of continuous repeated observations within hospitalized patients.[19] For all outcomes, the unit of analysis was nightly observations nested within patient-level characteristics. Full maximum likelihood estimation is a robust and preferred method for handling values missing at random in longitudinal datasets.[20]

To model repeated observations, the mixed models included a term representing time in days. For each outcome, we specified unconditional growth models to examine the variability between and within patients by computing intraclass correlations and inspecting variance components. We used model fit indices (−2LL deviance, Akaike's information criterion, and Schwarz's Bayesian criterion) as appropriate to determine the best-fitting model specifications in terms of random effects and covariance structure.[21, 22]
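For readers unfamiliar with these quantities, the intraclass correlation from an unconditional model is the share of total outcome variance attributable to between-patient differences. A minimal sketch using one-way ANOVA variance-component estimators on balanced data (the function and data are illustrative, not the study's SPSS procedure):

```python
def icc_oneway(groups):
    """One-way random-effects intraclass correlation from balanced
    repeated measures: the share of total variance attributable to
    between-patient differences. `groups` is a list of equal-length
    per-patient observation lists."""
    k = len(groups)        # number of patients
    n = len(groups[0])     # observations per patient
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)          # between
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (k * (n - 1))  # within
    var_between = max((msb - msw) / n, 0.0)
    return var_between / (var_between + msw)

# Patients whose nightly ratings cluster tightly around patient-specific
# means yield a high ICC, as in the study's range of 0.59 to 0.85.
```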

We tested the main effect of the intervention on sleep outcomes and the interactive effect of group (intervention vs control) by hospital day, to test whether group differences existed in the slopes representing average change in sleep outcomes over hospital days. All models adjusted for age, body mass index, depression, and baseline sleep quality (PSQI) as time-invariant covariates, and for whether participants had taken a sleep medication the day before as a time-varying covariate. Adjustment for prehospitalization sleep quality was particularly important. We used the PSQI to control for sleep quality because it is a well-validated, multidimensional measure that includes prehospital use of sleep medications. In a series of sensitivity analyses, we also explored whether using the dichotomous self-reported measure of whether participants regularly took sleep medications prior to hospitalization, rather than the PSQI, would change our substantive findings. All covariates were centered at the grand mean, following guidelines for appropriate interpretation of regression coefficients.[23]
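Grand-mean centering itself is a one-line transformation that makes model intercepts interpretable as the predicted outcome for a patient with average covariate values; a minimal illustrative sketch (names and values are our own):

```python
def grand_mean_center(values):
    """Center a covariate at its grand mean, so a value of 0 represents
    the sample-average patient in the fitted model."""
    m = sum(values) / len(values)
    return [v - m for v in values]

# Ages 40, 50, 60 center to -10, 0, and +10 around the grand mean of 50.
ages_centered = grand_mean_center([40, 50, 60])
```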

RESULTS

Of the 112 study patients, 48 were in the intervention unit and 64 in the control unit. Eighty‐five percent of study participants endorsed poor sleep prior to hospital admission on the PSQI sleep quality measure, which was similar in both groups (Table 1).

Participants completed 1 to 8 sleep diary entries (mean = 2.5, standard deviation = 1.1). Because only 6 participants completed 5 or more diaries, we capped the number of diaries included in the inferential analysis at 4 to avoid influential outliers identified on scatterplots. Fifty-seven percent of participants had 1 night of valid actigraphy data (n = 64); 29% had 2 nights (n = 32); 8% had 3 or 4 nights; and 9 participants had no usable actigraphy data. The extent to which the intervention was accepted by patients in the intervention group was highly variable. Adherence to the 10 pm lights-off, telephone-off, and TV-off policy was 87%, 67%, and 64% of intervention patients, respectively. Uptake of sleep menu items was also highly variable, and no single item was used by more than half of patients (acceptance rates ranged from 11% to 44%). Eye masks (44%) and ear plugs (32%) were the most commonly used items.

A greater proportion of patients in the control arm (33%) had been taking sleep medications prior to hospitalization compared to the intervention arm (15%; χ2 = 4.6, P < 0.05). However, hypnotic medication use in the hospital was similar across both groups (intervention unit patients: 25%; controls: 21%; P = 0.49).

Intraclass correlations for the 7 sleep outcomes ranged from 0.59 to 0.76 for sleep diary outcomes and from 0.61 to 0.85 for actigraphy; dependency of sleep measures within patients thus accounted for 59% to 85% of the variance in sleep outcomes. The best-fitting mixed models included random intercepts only. The results of mixed models testing the main effect of intervention versus comparison arm on sleep outcome measures, adjusted for covariates, are presented in Table 2. Total sleep time was the only outcome that differed significantly between groups; average total sleep time, calculated from sleep diary data, was longer in the intervention group by 49 minutes.

Differences in Subjective and Objective Sleep Outcome Measures From Linear Mixed Models
Intervention, n = 48 Control, n = 64 P Value
  • NOTE: All differences in sleep outcomes adjusted for age, BMI, baseline sleep quality (PSQI), depression (CES‐D), and whether a sleep medication was taken the previous night. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; PSQI, Pittsburgh Sleep Quality Index; SE, standard error.

Sleep diary outcomes
Sleep quality, mean (SE) 3.14 (0.16) 3.08 (0.13) 0.79
Refreshed sleep, mean (SE) 2.94 (0.17) 2.74 (0.14) 0.38
Negative impact of sleep disruptions, mean (SE) 4.39 (0.58) 4.81 (0.48) 0.58
Total sleep time, min, mean (SE) 422 (16.2) 373 (13.2) 0.02
Sleep efficiency, %, mean (SE) 83.5 (2.3) 82.1 (1.9) 0.65
Actigraphy outcomes
Total sleep time, min, mean (SE) 377 (16.8) 356 (13.2) 0.32
Sleep efficiency, %, mean (SE) 72.7 (2.2) 74.8 (1.8) 0.45

Table 3 lists slopes representing the average change in sleep measures over hospital days in both groups. The P values represent z tests of the interaction terms in mixed models, after adjustment for covariates, testing whether the slopes differed significantly between groups. Of the 7 outcomes, 3 sleep diary measures had significant interaction terms. For ratings of sleep quality, refreshing sleep, and sleep disruptions, slopes in the control group were flat, whereas slopes in the intervention group demonstrated improvements in ratings of sleep quality and refreshed sleep, and a decrease in the impact of sleep disruptions, over the course of subsequent nights in the hospital. Figure 1 plots the adjusted average slopes for the refreshed sleep score across hospital days in the intervention and control groups.

Average Change in Sleep Outcomes Across Hospital Days for Patients in Intervention and Comparison Groups
Intervention, Slope (SE), n = 48 Control, Slope (SE), n = 64 P Value
  • NOTE: Mixed models were adjusted for age, BMI, baseline sleep quality (PSQI), baseline depression (CES‐D), and whether or not a sleep medication was taken the previous night.

  • Each slope represents the average change in sleep diary outcome from night to night in each condition. P values represent the Wald test of the interaction term. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; PSQI, Pittsburgh Sleep Quality Index; SE, standard error.

Refreshed sleep rating 0.55 (0.18) 0.03 (0.13) 0.006
Sleep quality rating 0.52 (0.16) 0.02 (0.11) 0.012
Negative impact of sleep interruptions −1.65 (0.48) 0.05 (0.32) 0.006
Total sleep time, diary 11.2 (18.1) 6.3 (13.0) 0.44
Total sleep time, actigraphy 7.3 (25.5) 1.0 (15.3) 0.83
Sleep efficiency, diary 1.1 (2.3) 1.5 (1.6) 0.89
Sleep efficiency, actigraphy 0.9 (4.0) 0.7 (2.4) 0.74
Figure 1. Plot of average changes in refreshed sleep across hospital days for intervention and control participants. *Slopes from linear mixed models are adjusted for age, BMI, depression score, prehospital sleep quality, and whether a sleep medication was taken the night before during hospitalization.

DISCUSSION

Poor sleep is common among hospitalized adults, both at home prior to admission and especially in the hospital. This pilot study demonstrated the feasibility of rolling out a sleep-promoting intervention on a hospital's general medicine unit. Although participants on the intervention unit reported improved sleep quality and feeling more refreshed, this was not corroborated by actigraphy data (such as sleep time or sleep efficiency). Although care team engagement and implementation of unit-wide interventions were high, patient use of individual components was imperfect. Of particular interest, however, the intervention group began to experience improved sleep quality and fewer disruptions over subsequent nights sleeping in the hospital.

Our finding of a high prevalence of poor sleep among hospitalized patients is congruent with prior studies and supports the great need to screen for and address poor sleep in the hospital setting.[24, 25, 26] Attempts to promote sleep among hospitalized patients may be effective. Prior sleep-promoting intervention studies demonstrated that relaxation techniques improved sleep quality by almost 38%,[27] and that ear plugs and eye masks showed some benefit in promoting sleep in the hospital.[28] Our study's multicomponent intervention, which attempted to minimize disruptions, led to improved sleep quality, more restorative sleep, and fewer reported sleep disruptions, especially among patients with a longer length of stay. As suggested by Thomas et al.[29] and seen in our data, this improvement across subsequent nights suggests that patients may adapt to the new environment and that the sleep intervention may take time to work.

Hospitalized patients often fail to obtain much-needed restorative sleep at the time when they are most vulnerable. Patients cite routine care as the primary cause of sleep disruption and often recognize the ways in which the hospital environment interferes with their ability to sleep.[30, 31, 32] The sleep-promoting interventions used in our study would be characterized by most as low effort[33] with potential for high yield, even though our patients appreciated only modest improvements in sleep outcomes.

Several limitations of this study should be considered. First, although we had hoped to collect substantial amounts of objective data, the average duration of actigraphy observation was less than 48 hours. This may have constrained the group-by-time interaction analysis of actigraphy data, as studies have shown increased accuracy of actigraphy measures with longer wear.[34] By contrast, the sleep diary surveys collected throughout hospitalization demonstrated significant improvements across consecutive daily measurements. Second, the proximity of the study units raised concern for contamination, which could have reduced the differences observed in the outcome measures. Although physicians work on both units, the nursing and support care teams are distinct and unit dependent. Finally, this was not a randomized trial. Patient assignment to the treatment arms was haphazard and occurred within the hospital's admitting strategy: allocation to either the intervention or the control group was based on bed availability at the time of admission. Although the groups were similar in most characteristics, more control participants than intervention participants reported taking sleep medications prior to admission. Fortunately, hypnotic use did not differ between groups during the admission, when sleep data were being captured.

Overall, this pilot study suggests that patients admitted to a general medical ward fail to obtain sufficient restorative sleep while in the hospital, and sleep disruption is frequent. The study demonstrates the opportunity for, and feasibility of, sleep-promoting interventions in which facilitating sleep is considered a top priority and a vital component of healthcare delivery. When trying to improve patients' sleep in the hospital, it may take several consecutive nights to realize a return on investment.

Acknowledgements

The authors acknowledge the Department of Nursing, Johns Hopkins Bayview Medical Center, and care teams of the Zieve Medicine Units, and the Center for Child and Community Health Research Biostatistics, Epidemiology and Data Management (BEAD) Core group.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Dr. Howell is the chief of the Division of Hospital Medicine at Johns Hopkins Bayview Medical Center and associate professor at Johns Hopkins School of Medicine. He served as the president of the Society of Hospital Medicine (SHM) in 2013 and currently serves as a board member. He is also a senior physician advisor for SHM. He is a coinvestigator grant recipient on an Agency for Healthcare Research and Quality grant on medication reconciliation funded through Baylor University. He was previously a coinvestigator grant recipient of Center for Medicare and Medicaid Innovations grant that ended in June 2015.

References
  1. Institute of Medicine (US) Committee on Sleep Medicine and Research. Sleep disorders and sleep deprivation: an unmet public health problem. Washington, DC: National Academies Press; 2006. Available at: http://www.ncbi.nlm.nih.gov/books/NBK19960. Accessed September 16, 2014.
  2. Schoenborn CA, Adams PE. Health behaviors of adults: United States, 2005–2007. Vital Health Stat 10. 2010;245:1–132.
  3. Mallon L, Broman JE, Hetta J. High incidence of diabetes in men with sleep complaints or short sleep duration: a 12-year follow-up study of a middle-aged population. Diabetes Care. 2005;28:2762–2767.
  4. Donat M, Brown C, Williams N, et al. Linking sleep duration and obesity among black and white US adults. Clin Pract (Lond). 2013;10(5):661–667.
  5. Cappuccio FP, Stranges S, Kandala NB, et al. Gender-specific associations of short sleep duration with prevalent and incident hypertension: the Whitehall II Study. Hypertension. 2007;50:693–700.
  6. Rod NH, Kumari M, Lange T, Kivimäki M, Shipley M, Ferrie J. The joint effect of sleep duration and disturbed sleep on cause-specific mortality: results from the Whitehall II cohort study. PLoS One. 2014;9(4):e91965.
  7. Martin JL, Fiorentino L, Jouldjian S, Mitchell M, Josephson KR, Alessi CA. Poor self-reported sleep quality predicts mortality within one year of inpatient post-acute rehabilitation among older adults. Sleep. 2011;34(12):1715–1721.
  8. Kahn-Greene ET, Killgore DB, Kamimori GH, Balkin TJ, Killgore WD. The effects of sleep deprivation on symptoms of psychopathology in healthy adults. Sleep Med. 2007;8(3):215–221.
  9. Irwin MR, Wang M, Campomayor CO, Collado-Hidalgo A, Cole S. Sleep deprivation and activation of morning levels of cellular and genomic markers of inflammation. Arch Intern Med. 2006;166:1756–1762.
  10. Knutson KL, Spiegel K, Penev P, Cauter E. The metabolic consequences of sleep deprivation. Sleep Med Rev. 2007;11(3):163–178.
  11. Isaia G, Corsinovi L, Bo M, et al. Insomnia among hospitalized elderly patients: prevalence, clinical characteristics and risk factors. Arch Gerontol Geriatr. 2011;52:133–137.
  12. Rocha FL, Hara C, Rodrigues CV, et al. Is insomnia a marker for psychiatric disorders in general hospitals? Sleep Med. 2005;6:549–553.
  13. Adachi M, Staisiunas PG, Knutson KL, Beveridge C, Meltzer DO, Arora VM. Perceived control and sleep in hospitalized older adults: a sound hypothesis? J Hosp Med. 2013;8:184–190.
  14. Buxton OM, Ellenbogen JM, Wang W, et al. Sleep disruption due to hospital noises: a prospective evaluation. Ann Intern Med. 2012;157:170–179.
  15. Redeker NS. Sleep in acute care settings: an integrative review. J Nurs Scholarsh. 2000;32(1):31–38.
  16. Buysse D. Physical health as it relates to insomnia. Talk presented at: Center for Behavior and Health Lecture Series, Johns Hopkins Bayview Medical Center; July 17, 2012; Baltimore, MD.
  17. Buysse DJ, Reynolds CF, Monk TH, Berman SR, Kupfer DJ. The Pittsburgh Sleep Quality Index: a new instrument for psychiatric practice and research. Psychiatry Res. 1989;28:193–213.
  18. Smith MT, Wegener ST. Measures of sleep: the Insomnia Severity Index, Medical Outcomes Study (MOS) Sleep Scale, Pittsburgh Sleep Diary (PSD), and Pittsburgh Sleep Quality Index (PSQI). Arthritis Rheumatol. 2003;49:S184–S196.
  19. Brown H, Prescott R. Applied Mixed Models in Medicine. 3rd ed. Somerset, NJ: Wiley; 2014:539.
  20. Blackwell E, Leon CF, Miller GE. Applying mixed regression models to the analysis of repeated-measures data in psychosomatic medicine. Psychosom Med. 2006;68(6):870–878.
  21. Peugh JL, Enders CK. Using the SPSS mixed procedure to fit cross-sectional and longitudinal multilevel models. Educ Psychol Meas. 2005;65(5):717–741.
  22. McCoach DB, Black AC. Introduction to estimation issues in multilevel modeling. New Dir Inst Res. 2012;2012(154):23–39.
  23. Enders CK, Tofighi D. Centering predictor variables in cross-sectional multilevel models: a new look at an old issue. Psychol Methods. 2007;12(2):121–138.
  24. Manian F, Manian C. Sleep quality in adult hospitalized patients with infection: an observational study. Am J Med Sci. 2015;349(1):56–60.
  25. Shear TC, Balachandran JS, Mokhlesi B, et al. Risk of sleep apnea in hospitalized older patients. J Clin Sleep Med. 2014;10:1061–1066.
  26. Edinger JD, Lipper S, Wheeler B. Hospital ward policy and patients' sleep patterns: a multiple baseline study. Rehabil Psychol. 1989;34(1):43–50.
  27. Tamrat R, Huynh-Le MP, Goyal M. Non-pharmacologic interventions to improve the sleep of hospitalized patients: a systematic review. J Gen Intern Med. 2014;29:788–795.
  28. Le Guen M, Nicolas-Robin A, Lebard C, Arnulf I, Langeron O. Earplugs and eye masks vs routine care prevent sleep impairment in post-anaesthesia care unit: a randomized study. Br J Anaesth. 2014;112(1):89–95.
  29. Thomas KP, Salas RE, Gamaldo C, et al. Sleep rounds: a multidisciplinary approach to optimize sleep quality and satisfaction in hospitalized patients. J Hosp Med. 2012;7:508–512.
  30. Bihari S, McEvoy RD, Kim S, Woodman RJ, Bersten AD. Factors affecting sleep quality of patients in intensive care unit. J Clin Sleep Med. 2012;8(3):301–307.
  31. Flaherty JH. Insomnia among hospitalized older persons. Clin Geriatr Med. 2008;24(1):51–67.
  32. McDowell JA, Mion LC, Lydon TJ, Inouye SK. A nonpharmacological sleep protocol for hospitalized older patients. J Am Geriatr Soc. 1998;46(6):700–705.
  33. The Action Priority Matrix: making the most of your opportunities. TimeAnalyzer website. Available at: http://www.timeanalyzer.com/lib/priority.htm. Published 2006. Accessed July 10, 2015.
  34. Marino M, Li Y, Rueschman MN, et al. Measuring sleep: accuracy, sensitivity, and specificity of wrist actigraphy compared to polysomnography. Sleep. 2013;36(11):1747–1755.
Issue
Journal of Hospital Medicine - 11(7)
Page Number
467-472
Display Headline
Pilot study aiming to support sleep quality and duration during hospitalizations
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Evelyn Gathecha, MD, Johns Hopkins University School of Medicine, Johns Hopkins Bayview Medical Center, 5200 Eastern Avenue, MFL Building West Tower, 6th Floor CIMS Suite, Baltimore, MD 21224; Telephone: 410‐550‐5018; Fax: 410‐550‐2972; E‐mail: egathec1@jhmi.edu

Development and Validation of TAISCH

Article Type
Changed
Sun, 05/21/2017 - 14:00
Display Headline
Development and validation of the tool to assess inpatient satisfaction with care from hospitalists

Patient satisfaction scores are being reported publicly and will affect hospital reimbursement rates under Hospital Value Based Purchasing.[1] Patient satisfaction scores are currently obtained through metrics such as Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS)[2] and Press Ganey (PG)[3] surveys. Such surveys are mailed to a variable proportion of patients following their discharge from the hospital, and ask patients about the quality of care they received during their admission. Domains assessed regarding the patients' inpatient experiences range from room cleanliness to the amount of time the physician spent with them.

The Society of Hospital Medicine (SHM), the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] Ideally, accurate information would be delivered as feedback to individual providers in a timely manner in hopes of improving performance; however, the current methodology has shortcomings that limit its usefulness. First, several hospitalists and consultants may be involved in the care of 1 patient during the hospital stay, but the score can only be tied to a single physician. Current survey methods attribute all responses to that particular doctor, usually the attending of record, although patients may very well be thinking of other physicians when responding to questions. Second, only a few questions on the surveys ask about doctors' performance. The aforementioned surveys have 3 to 8 questions about doctors' care, which limits the ability to assess physician performance comprehensively. Finally, the surveys are mailed approximately 1 week after the patient's discharge, usually without a name or photograph of the physician to facilitate patient/caregiver recall. This time lag and lack of information to prompt patient recall likely lead to imprecision in assessment. In addition, the response rates to these surveys are typically low, around 25% (personal oral communication with our division's service excellence stakeholder Dr. L.P. in September 2013). These deficiencies limit the usefulness of such data in coaching individual providers about their performance because they cannot be delivered in a timely fashion, and the reliability of the attribution is suspect.

With these considerations in mind, we developed and validated a new survey metric, the Tool to Assess Inpatient Satisfaction with Care from Hospitalists (TAISCH). We hypothesized that the results would be different from those collected using conventional methodologies.

PATIENTS AND METHODS

Study Design and Subjects

Our cross‐sectional study surveyed inpatients under the care of hospitalist physicians working without the support of trainees or allied health professionals (such as nurse practitioners or physician assistants). The subjects were hospitalized at a 560‐bed academic medical center on a general medical floor between September 2012 and December 2012. All participating hospitalist physicians were members of a division of hospital medicine.

TAISCH Development

Several steps were taken to establish content validity evidence.[5] We developed TAISCH by building upon the theoretical underpinnings of the quality of care measures that are endorsed by the SHM Membership Committee Guidelines for Hospitalists Patient Satisfaction.[4] This directive recommends that patient satisfaction with hospitalist care should be assessed across 6 domains: physician availability, physician concern for patients, physician communication skills, physician courteousness, physician clinical skills, and physician involvement of patients' families. Other existing validated measures tied to the quality of patient care were reviewed, and items related to the physician's care were considered for inclusion to further substantiate content validity.[6, 7, 8, 9, 10, 11, 12] Input from colleagues with expertise in clinical excellence and service excellence was also solicited. This included the director of Hopkins' Miller Coulson Academy of Clinical Excellence and the grant review committee members of the Johns Hopkins Osler Center for Clinical Excellence (who funded this study).[13, 14]

The preliminary instrument contained 17 items, including 2 conditional questions, and was first pilot tested on 5 hospitalized patients. We assessed the time it took to administer the surveys as well as patients' comments and questions about each survey item. This resulted in minor wording changes for clarification and changes in the order of the questions. We then pursued a second phase of piloting using the revised survey, which was administered to >20 patients. There were no further adjustments as patients reported that TAISCH was clear and concise.

From interviews with patients after pilot testing, it became clear that respondents were carefully reflecting on the quality of care and performance of their treating physician, thereby generating response process validity evidence.[5]

Data Collection

To ensure that patients had perspective upon which to base their assessment, they were only asked to appraise physicians after being cared for by the same hospitalist provider for at least 2 consecutive days. Patients who were on isolation, those who were non‐English speaking, and those with impaired decision‐making capacity (such as mental status change or dementia) were excluded. Patients were enrolled only if they could correctly name their doctor or at least identify a photograph of their hospitalist provider on a page that included pictures of all division members. Those patients who were able to name the provider or correctly select the provider from the page of photographs were considered to have correctly identified their provider. In order to ensure the confidentiality of the patients and their responses, all data collections were performed by a trained research assistant who had no patient‐care responsibilities. The survey was confidential, did not include any patient identifiers, and patients were assured that providers would never see their individual responses. The patients were given options to complete TAISCH either by verbally responding to the research assistant's questions, filling out the paper survey, or completing the survey online using an iPad at the bedside. TAISCH specifically asked the patients to rate their hospitalist provider's performance along several domains: communication skills, clinical skills, availability, empathy, courteousness, and discharge planning; 5‐point Likert scales were used exclusively.

In addition to the TAISCH questions, we asked patients (1) an overall satisfaction question, "I would recommend Dr. X to my loved ones should he or she need hospitalization in the future" (response options: strongly disagree, disagree, neutral, agree, strongly agree), (2) their pain level using the Wong-Baker pain scale,[15] and (3) the Jefferson Scale of Patient's Perceptions of Physician Empathy (JSPPPE).[16, 17] Associations between TAISCH and these variables (as well as PG data) would be examined to ascertain "relations to other variables" validity evidence.[5] Specifically, we sought to ascertain discriminant and convergent validity, whereby TAISCH is associated positively with constructs where we expect positive associations (convergent) and negatively with those where we expect negative associations (discriminant).[18] The Wong-Baker pain scale is a pain-assessment tool recommended by the Joint Commission on Accreditation of Healthcare Organizations, and it is widely used in hospitals and various healthcare settings.[19] The scale has a range from 0 to 10 (0 for no pain and 10 indicating the worst pain). The hypothesis was that the patients' pain levels would adversely affect their perception of the physician's performance (discriminant validity). JSPPPE is a 5-item validated scale developed to measure patients' perceptions of their physicians' empathic engagement. It has significant correlations with the American Board of Internal Medicine's patient rating surveys, and it is used in standardized patient examinations for medical students.[20] The hypothesis was that patient perception about the quality of physician care would correlate positively with their assessment of the physician's empathy (convergent validity).

Although all of the hospitalist providers in the division consented to participate in this study, only hospitalist providers for whom at least 4 patient surveys were collected were included in the analysis. The study was approved by our institutional review board.

Data Analysis

All data were analyzed using Stata 11 (StataCorp, College Station, TX). Data were analyzed to determine the potential for a single comprehensive assessment of physician performance with confirmatory factor analysis (CFA) using maximum likelihood extraction. Additional factor analyses examined the potential for a multiple factor solution using exploratory factor analysis (EFA) with principal components factor analysis and varimax rotation. Examination of scree plots, factor loadings for individual items greater than 0.40, eigenvalues greater than 1.0, and the substantive meaning of the factors were all taken into consideration when determining the number of factors to retain from factor analytic models.[21] Cronbach's α was calculated for each factor to assess reliability. These data provided internal structure validity evidence (demonstrated by acceptable reliability and factor structure) for TAISCH.[5]
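The two psychometric criteria named above, Cronbach's α for reliability and the eigenvalue-greater-than-1.0 rule for factor retention, can be made concrete in a few lines. The sketch below is purely illustrative: it runs on synthetic single-factor data in Python rather than on the study's data (which were analyzed in Stata), and every variable name is hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def kaiser_retained_factors(items: np.ndarray) -> int:
    """Count factors with eigenvalue > 1.0 from the item correlation matrix."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
    return int((eigvals > 1.0).sum())

# Synthetic data: 200 respondents, 15 items driven by one latent factor,
# mimicking a single-factor instrument (not the actual TAISCH responses).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
responses = latent + rng.normal(scale=1.0, size=(200, 15))

print(cronbach_alpha(responses))        # high reliability expected
print(kaiser_retained_factors(responses))  # one dominant factor expected
```

With data generated from a single latent factor, the correlation matrix has one large eigenvalue and the scale reliability is high, which is the pattern the authors report for the 15-item solution.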

After arriving at the final TAISCH scale, composite TAISCH scores were computed. Associations between composite TAISCH scores and the Wong-Baker pain scale, the JSPPPE, and the overall satisfaction question were assessed using linear regression with the svy command in Stata to account for the nested design of having each patient report on a single hospitalist provider. Correlation between composite TAISCH scores and PG physician care scores (comprising 5 questions: time physician spent with you, physician concern with questions/worries, physician kept you informed, friendliness/courtesy of physician, and skill of physician) was assessed at the provider level when both data were available.
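The composite-score and provider-level aggregation steps described above can be sketched simply. This is not the nested svy regression the authors ran in Stata; it is a minimal Python illustration of averaging items into a per-patient composite and then correlating provider means with another provider-level measure, using toy data and hypothetical names throughout.

```python
import numpy as np

def composite_taisch(item_responses: np.ndarray) -> np.ndarray:
    """Per-patient composite score: mean of the Likert items (1-5 scale)."""
    return item_responses.mean(axis=1)

def provider_level_r(patient_scores, provider_ids, provider_measure):
    """Aggregate patient scores to provider means, then correlate those means
    with a provider-level measure (e.g., a Press Ganey physician-care score).
    `provider_measure` must be ordered to match np.unique(provider_ids)."""
    providers = np.unique(provider_ids)
    means = np.array([patient_scores[provider_ids == p].mean()
                      for p in providers])
    return float(np.corrcoef(means, provider_measure)[0, 1])

# Toy example: 6 patients, 3 items each, nested under 3 providers.
items = np.array([[3, 3, 3], [3, 3, 3],
                  [4, 4, 4], [4, 4, 4],
                  [5, 5, 5], [5, 5, 5]], dtype=float)
ids = np.array([0, 0, 1, 1, 2, 2])
pg = np.array([1.0, 2.0, 3.0])  # hypothetical provider-level comparator

scores = composite_taisch(items)
print(provider_level_r(scores, ids, pg))
```

A simple Pearson correlation on provider means, as here, ignores the within-provider clustering that the study's svy-based models accounted for; it is only the aggregation idea, not the full analysis.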

RESULTS

A total of 330 patients were considered to be eligible through medical record screening. Of those patients, 73 (22%) were already discharged by the time the research assistant attempted to enroll them after 2 days of care by a single physician. Of 257 inpatients approached, 30 patients (12%) refused to participate. Among the 227 consented patients, 24 (9%) were excluded as they were unable to correctly identify their hospitalist provider. A total of 203 patients were enrolled, and each patient rated a single hospitalist; a total of 29 unique hospitalists were assessed by these patients. The patients' mean age was 60 years, 114 (56%) were female, and 61 (30%) were of nonwhite race (Table 1). The hospitalist physicians' demographic information is also shown in Table 1. Two hospitalists with fewer than 4 surveys collected were excluded from the analysis. Thus, final analysis included 200 unique patients assessing 1 of the 27 hospitalists (mean=7.4 surveys per hospitalist).

Characteristics of the 203 Patients and 29 Hospitalist Physicians Studied
NOTE: Abbreviations: SD, standard deviation.

Patients, N=203
  Age, y, mean (SD): 60.0 (17.2)
  Female, n (%): 114 (56.1)
  Nonwhite race, n (%): 61 (30.5)
  Observation stay, n (%): 45 (22.1)
  "How are you feeling today?" n (%)
    Very poor: 11 (5.5)
    Poor: 14 (7.0)
    Fair: 67 (33.5)
    Good: 71 (35.5)
    Very good: 33 (16.5)
    Excellent: 4 (2.0)

Hospitalists, N=29
  Age, n (%)
    26-30 years: 7 (24.1)
    31-35 years: 8 (27.6)
    36-40 years: 12 (41.4)
    41-45 years: 2 (6.9)
  Female, n (%): 11 (37.9)
  International medical graduate, n (%): 18 (62.1)
  Years in current practice, n (%)
    <1: 9 (31.0)
    1-2: 7 (24.1)
    3-4: 6 (20.7)
    5-6: 5 (17.2)
    7 or more: 2 (6.9)
  Race, n (%)
    Caucasian: 4 (13.8)
    Asian: 19 (65.5)
    African/African American: 5 (17.2)
    Other: 1 (3.4)
  Academic rank, n (%)
    Assistant professor: 9 (31.0)
    Clinical instructor: 10 (34.5)
    Clinical associate/nonfaculty: 10 (34.5)
  Percentage of clinical effort, n (%)
    >70%: 6 (20.7)
    50%-70%: 19 (65.5)
    <50%: 4 (13.8)

Validation of TAISCH

On the 17-item TAISCH administered, the 2 conditional questions ("When I asked to see Dr. X, s/he came within a reasonable amount of time." and "If Dr. X interacted with your family, how well did s/he deal with them?") were applicable to fewer than 40% of patients. As such, they were not included in the analysis.

Internal Structure Validity Evidence

Results from factor analyses are shown in Table 2. The CFA modeling of a single factor solution with 15 items explained 42% of the total variance. The 27 hospitalists' average 15-item TAISCH scores ranged from 3.25 to 4.28 (mean [standard deviation]=3.82 [0.24]; possible score range: 1-5). Reliability of the 15-item TAISCH was appropriate (Cronbach's α=0.88).

Factor Loadings for 15-Item TAISCH Measure Based on Confirmatory Factor Analysis
NOTE: Abbreviations: TAISCH, Tool to Assess Inpatient Satisfaction with Care from Hospitalists. *Response category: below average, average, above average, top 10% of all doctors, the very best of any doctor I have come across. †Response category: none, a little, some, a lot, tremendously. ‡Response category: strongly disagree, disagree, neutral, agree, strongly agree. §Response category: poor, fair, good, very good, excellent. ¶Response category: never, rarely, sometimes, most of the time, every single time.

TAISCH item (Cronbach's α=0.88) and factor loading:
  Compared to all other physicians that you know, how do you rate Dr. X's compassion, empathy, and concern for you?*  0.91
  Compared to all other physicians that you know, how do you rate Dr. X's ability to communicate with you?*  0.88
  Compared to all other physicians that you know, how do you rate Dr. X's skill in diagnosing and treating your medical conditions?*  0.88
  Compared to all other physicians that you know, how do you rate Dr. X's fund of knowledge?*  0.80
  How much confidence do you have in Dr. X's plan for your care?  0.71
  Dr. X kept me informed of the plans for my care.  0.69
  Effectively preparing patients for discharge is an important part of what doctors in the hospital do. How well has Dr. X done in getting you ready to be discharged from the hospital?  0.67
  Dr. X let me talk without interrupting.  0.60
  Dr. X encouraged me to ask questions.  0.59
  Dr. X checks to be sure I understood everything.  0.55
  I sensed Dr. X was in a rush when s/he was with me. (reverse coded)  0.55
  Dr. X showed interest in my views and opinions about my health.  0.54
  Dr. X discusses options with me and involves me in decision making.  0.47
  Dr. X asked permission to enter the room and waited for an answer.  0.25
  Dr. X sat down when s/he visited my bedside.  0.14

As shown in Table 2, 2 variables had factor loadings below the minimum threshold of 0.40 in the CFA for the 15-item TAISCH when modeling a single factor solution. Both items were related to physician etiquette: "Dr. X asked permission to enter the room and waited for an answer." and "Dr. X sat down when s/he visited my bedside."

When CFA was executed again as a single factor solution omitting the 2 items that demonstrated lower factor loadings, the 13-item single factor solution explained 47% of the total variance, and the Cronbach's α was 0.92.

EFA models were also explored for potential alternate solutions. These analyses resulted in lower reliability (low Cronbach's α), weak construct operationalization, and poor face validity (as judged by the research team).

Both the 13‐ and 15‐item single factor solutions were examined further to determine whether associations with criterion variables (pain, empathy) differed substantively. Given that results were similar across both solutions, subsequent analyses were completed with the 15‐item single factor solution, which included the etiquette‐related variables.

Relationship to Other Variables Validity Evidence

The association between the 15-item TAISCH and JSPPPE was significantly positive (β=12.2, P<0.001). Additionally, there was a positive and significant association between TAISCH and the overall satisfaction question, "I would recommend Dr. X to my loved ones should they need hospitalization in the future." (β=11.2, P<0.001). This overall satisfaction question was also associated positively with JSPPPE (β=13.2, P<0.001). There was a statistically significant negative association between TAISCH and the Wong-Baker pain scale (β=-2.42, P<0.05).

The PG data from the same period were available for 24 out of 27 hospitalists. The number of PG surveys collected per provider ranged from 5 to 30 (mean=14). At the provider level, there was not a statistically significant correlation between PG and the 15‐item TAISCH (P=0.51). Of note, PG was also not significantly correlated with the overall satisfaction question, JSPPPE, or the Wong‐Baker pain scale (all P>0.10).

DISCUSSION

Our new metric, TAISCH, was found to be a reliable and valid measurement tool to assess patient satisfaction with the hospitalist physician's care. Because we only surveyed patients who could correctly identify their hospitalist physicians after interacting for at least 2 consecutive days, the attribution of the data to the individual hospitalist is almost certainly correct. The high participation rate indicates that the patients were not hesitant about rating their hospitalist provider's quality of care, even when asked while they were still in the hospital.

The majority of the patients approached were able to correctly identify their hospitalist provider. This rate (91%) was much higher than the rate previously reported in the literature, where a picture card was used to improve provider recognition.[22] It is also likely that having 1 physician, rather than a team of physicians, care for a patient makes it easier for the patient to recall the name and recognize the face of the inpatient provider.

The CFA of TAISCH showed good fit but suggests that 2 variables, both from Kahn's etiquette-based medicine (EtBM) checklist,[9] may not load in the same way as the other items. Tackett and colleagues reported that hospitalists who performed more EtBM behaviors scored higher on PG evaluations.[23] Such results, along with the comparable explanation of variance and reliability, convinced us to retain these 2 items in the final 15-item TAISCH. Although the literature supports the fact that physician etiquette is related to the perception of high-quality care, it is possible that these 2 questions were answered differently (and thereby failed to load the same way) because environmental limitations may prevent physicians from performing these behaviors consistently. We prefer the 15-item version of TAISCH, and future studies may provide additional information about its performance as compared to the 13-item adaptation.

The significantly negative association between the Wong-Baker pain scale and TAISCH stresses the importance of adequately addressing and treating the patient's pain. Hanna et al. showed that patients' perception of pain control was associated with their overall satisfaction score as measured by HCAHPS.[24] The association seen in our study was not unexpected, because TAISCH is administered while patients are acutely ill in the hospital, when pain is likely more prevalent and severe than it is in the postdischarge setting (when the HCAHPS or PG surveys are administered). Interestingly, Hanna et al. discovered that the team's attention to controlling pain was more strongly correlated with overall satisfaction than was the actual pain control.[24] These data, now confirmed by our study, should serve to remind us that a hospitalist's concern and effort to relieve pain may augment patient satisfaction with the quality of care, even when eliminating the pain may be difficult or impossible.

TAISCH was found not to be correlated with PG scores. Several explanations for this deserve consideration. First, the postdischarge PG survey used by our institution does not list the name of the specific hospitalist provider for the patient to evaluate. Because patients encounter multiple physicians during their hospital stay (eg, emergency department physicians, hospitalist providers, consultants), it is possible that patients are not reflecting on the attending of record when responding to the PG mailed questionnaire. Second, the patients who responded to TAISCH and to PG were different groups; almost all eligible patients completed TAISCH, as opposed to the small minority who decided to respond to the PG survey. Third, TAISCH measures the physicians' performance more comprehensively, with a larger number of variables. Last, it is possible that we were underpowered to detect a significant correlation, because there were only 24 providers who had data from both TAISCH and PG. However, our results endorse using caution in interpreting PG scores for an individual hospitalist's performance, particularly for high-stakes consequences (including the provision of incentives to high performers and the insistence on remediation for low performers).

Several limitations of this study should be considered. First, only hospitalist providers from a single division were assessed. This may limit the generalizability of our findings. Second, although patients were assured about the confidentiality of their responses, they might have provided more favorable answers because they felt uncomfortable rating their physician poorly. One review article on the measurement of healthcare satisfaction indicated that impersonal (mailed) methods result in more criticism and lower satisfaction than assessments made in person using interviews. As a trade-off, mailed surveys yield lower response rates that may introduce other forms of bias.[25] Even on the HCAHPS survey report for the same period from our institution, 78% of patients gave top box ratings for our doctors' communication skills, which is at the state average.[26] Similarly, a study that used postdischarge telephone interviews to collect patients' satisfaction with hospitalists' care quality reported an average score of 4.20 out of 5.[27] These findings confirm that highly skewed ratings are common for these types of surveys, irrespective of how or when the data are collected.

Despite the aforementioned limitations, TAISCH use need not be limited to hospitalist physicians. It may also be used to assess allied health professionals' or trainees' performance, which cannot be assessed by HCAHPS or PG. Applying TAISCH in different hospital settings (eg, emergency department or critical care units), assessing hospitalists' reactions to TAISCH, learning whether TAISCH leads to changes in hospitalists' behavior, and appraising whether performance can improve in response to coaching interventions for those performing poorly are all research questions that merit additional consideration.

CONCLUSION

TAISCH allows for obtaining patient satisfaction data that are highly attributable to specific hospitalist providers. The data collection method also permits high response rates so that input comes from almost all patients. The timeliness of the TAISCH assessments also makes it possible for real‐time service recovery, which is impossible with other commonly used metrics assessing patient satisfaction. Our next step will include testing the most effective way to provide feedback to providers and to coach these individuals so as to improve performance.

Acknowledgements

The authors would like to thank Po‐Han Chen at the BEAD Core for his statistical analysis support.

Disclosures: This study was supported by the Johns Hopkins Osler Center for Clinical Excellence. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. The authors report no conflicts of interest.

References
  1. Blumenthal D, Jena AB. Hospital value-based purchasing. J Hosp Med. 2013;8:271-277.
  2. HCAHPS survey. Hospital Consumer Assessment of Healthcare Providers and Systems website. Available at: http://www.hcahpsonline.org/home.aspx. Accessed August 27, 2011.
  3. Press Ganey survey. Press Ganey website. Available at: http://www.pressganey.com/index.aspx. Accessed February 12, 2013.
  4. Society of Hospital Medicine. Membership Committee Guidelines for Hospitalists Patient Satisfaction Surveys. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Practice_Resources119:166.e7e16.
  5. Makoul G, Krupat E, Chang CH. Measuring patient views of physician communication skills: development and testing of the Communication Assessment Tool. Patient Educ Couns. 2007;67:333-342.
  6. Jenkinson C, Coulter A, Bruster S. The Picker Patient Experience Questionnaire: development and validation using data from in-patient surveys in five countries. Int J Qual Health Care. 2002;14:353-358.
  7. The Patient Satisfaction Questionnaire from RAND Health. RAND Health website. Available at: http://www.rand.org/health/surveys_tools/psq.html. Accessed December 30, 2011.
  8. Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358:1988-1989.
  9. Christmas C, Kravet S, Durso C, Wright SM. Defining clinical excellence in academic medicine: a qualitative study of the master clinicians. Mayo Clin Proc. 2008;83:989-994.
  10. Wright SM, Christmas C, Burkhart K, Kravet S, Durso C. Creating an academy of clinical excellence at Johns Hopkins Bayview Medical Center: a 3-year experience. Acad Med. 2010;85:1833-1839.
  11. Bendapudi NM, Berry LL, Keith FA, Turner Parish J, Rayburn WL. Patients' perspectives on ideal physician behaviors. Mayo Clin Proc. 2006;81(3):338-344.
  12. The Miller-Coulson Academy of Clinical Excellence at Johns Hopkins. Available at: http://www.hopkinsmedicine.org/innovative/signature_programs/academy_of_clinical_excellence/. Accessed April 25, 2014.
  13. Osler Center for Clinical Excellence at Johns Hopkins. Available at: http://www.hopkinsmedicine.org/johns_hopkins_bayview/education_training/continuing_education/osler_center_for_clinical_excellence. Accessed April 25, 2014.
  14. Wong-Baker FACES Foundation. Available at: http://www.wongbakerfaces.org. Accessed July 8, 2013.
  15. Kane GC, Gotto JL, Mangione S, West S, Hojat M. Jefferson Scale of Patient's Perceptions of Physician Empathy: preliminary psychometric data. Croat Med J. 2007;48:81-86.
  16. Glaser KM, Markham FW, Adler HM, McManus PR, Hojat M. Relationships between scores on the Jefferson Scale of Physician Empathy, patient perceptions of physician empathy, and humanistic approaches to patient care: a validity study. Med Sci Monit. 2007;13(7):CR291-CR294.
  17. Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychol Bull. 1959;56(2):81-105.
  18. The Joint Commission. Facts about pain management. Available at: http://www.jointcommission.org/pain_management. Accessed April 25, 2014.
  19. Berg K, Majdan JF, Berg D, et al. Medical students' self-reported empathy and simulated patients' assessments of student empathy: an analysis by gender and ethnicity. Acad Med. 2011;86(8):984-988.
  20. Gorsuch RL. Factor Analysis. Hillsdale, NJ: Lawrence Erlbaum Associates; 1983.
  21. Arora VM, Schaninger C, D'Arcy M, et al. Improving inpatients' identification of their doctors: use of FACE cards. Jt Comm J Qual Patient Saf. 2009;35(12):613-619.
  22. Tackett S, Tad-y D, Rios R, et al. Appraising the practice of etiquette-based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908-913.
  23. Hanna MN, Gonzalez-Fernandez M, Barrett AD, et al. Does patient perception of pain control affect patient satisfaction across surgical units in a tertiary teaching hospital? Am J Med Qual. 2012;27:411-416.
  24. Crow R, Gage H, Hampson S, et al. The measurement of satisfaction with health care: implications for practice from a systematic review of the literature. Health Technol Assess. 2002;6(32):1-244.
  25. Centers for Medicare 7(2):131-136.
Issue
Journal of Hospital Medicine - 9(9)
Page Number
553-558


From interviews with patients after pilot testing, it became clear that respondents were carefully reflecting on the quality of care and performance of their treating physician, thereby generating response process validity evidence.[5]

Data Collection

To ensure that patients had perspective upon which to base their assessment, they were only asked to appraise physicians after being cared for by the same hospitalist provider for at least 2 consecutive days. Patients who were on isolation, those who were non‐English speaking, and those with impaired decision‐making capacity (such as mental status change or dementia) were excluded. Patients were enrolled only if they could correctly identify their hospitalist provider, either by naming the doctor or by selecting his or her photograph from a page that included pictures of all division members. To ensure the confidentiality of the patients and their responses, all data collection was performed by a trained research assistant who had no patient‐care responsibilities. The survey was confidential, did not include any patient identifiers, and patients were assured that providers would never see their individual responses. Patients were given the option to complete TAISCH by verbally responding to the research assistant's questions, by filling out the paper survey, or by completing the survey online using an iPad at the bedside. TAISCH specifically asked the patients to rate their hospitalist provider's performance along several domains: communication skills, clinical skills, availability, empathy, courteousness, and discharge planning; 5‐point Likert scales were used exclusively.

In addition to the TAISCH questions, we asked patients (1) an overall satisfaction question, "I would recommend Dr. X to my loved ones should he or she need hospitalization in the future" (response options: strongly disagree, disagree, neutral, agree, strongly agree), (2) their pain level using the Wong‐Baker pain scale,[15] and (3) the Jefferson Scale of Patient's Perceptions of Physician Empathy (JSPPPE).[16, 17] Associations between TAISCH and these variables (as well as PG data) were examined to ascertain "relations to other variables" validity evidence.[5] Specifically, we sought to ascertain convergent and discriminant validity, whereby TAISCH should be associated positively with constructs where we expect positive associations (convergent) and negatively with those where we expect negative associations (discriminant).[18] The Wong‐Baker pain scale is a pain‐assessment tool recommended by the Joint Commission on Accreditation of Healthcare Organizations, and it is widely used in hospitals and various healthcare settings.[19] The scale ranges from 0 to 10 (0 for no pain and 10 indicating the worst pain). The hypothesis was that the patients' pain levels would adversely affect their perception of the physician's performance (discriminant validity). JSPPPE is a 5‐item validated scale developed to measure patients' perceptions of their physicians' empathic engagement. It correlates significantly with the American Board of Internal Medicine's patient rating surveys, and it is used in standardized patient examinations for medical students.[20] The hypothesis was that patient perception about the quality of physician care would correlate positively with their assessment of the physician's empathy (convergent validity).

Although all of the hospitalist providers in the division consented to participate in this study, only hospitalist providers for whom at least 4 patient surveys were collected were included in the analysis. The study was approved by our institutional review board.

Data Analysis

All data were analyzed using Stata 11 (StataCorp, College Station, TX). Data were analyzed to determine the potential for a single comprehensive assessment of physician performance with confirmatory factor analysis (CFA) using maximum likelihood extraction. Additional factor analyses examined the potential for a multiple‐factor solution using exploratory factor analysis (EFA) with principal component factor analysis and varimax rotation. Examination of scree plots, factor loadings for individual items greater than 0.40, eigenvalues greater than 1.0, and the substantive meaning of the factors were all taken into consideration when determining the number of factors to retain from factor analytic models.[21] Cronbach's αs were calculated for each factor to assess reliability. These data provided internal structure validity evidence (demonstrated by acceptable reliability and factor structure) for TAISCH.[5]
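The internal consistency statistic used here can be illustrated with a brief sketch. This is not the study's analysis code (the study used Stata), and the simulated Likert responses below are hypothetical, chosen only to mimic a cohort of patients answering 15 five‐point items that share a single underlying factor:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each individual item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 200 patients x 15 five-point Likert items with one shared factor
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                      # shared "satisfaction" signal
noise = rng.normal(scale=0.8, size=(200, 15))           # item-specific noise
responses = np.clip(np.rint(3 + latent + noise), 1, 5)  # discretize to a 1-5 scale

print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Because the simulated items load on a common factor, alpha is high, which parallels the acceptable reliability reported for the 15‐item scale.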

After arriving at the final TAISCH scale, composite TAISCH scores were computed. Associations of the composite TAISCH scores with the Wong‐Baker pain scale, the JSPPPE, and the overall satisfaction question were assessed using linear regression with the svy command in Stata to account for the nested design of having each patient report on a single hospitalist provider. The correlation between the composite TAISCH score and the PG physician care score (comprising 5 questions: time physician spent with you, physician concern with questions/worries, physician kept you informed, friendliness/courtesy of physician, and skill of physician) was assessed at the provider level when both data were available.
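The provider‐level comparison described above can be sketched as follows. This is an illustration with fabricated numbers, not the study's Stata code; the rating values and sample sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nested design: 200 patients, each rating exactly 1 of 27 providers
provider_ids = np.arange(200) % 27                       # every provider gets >=7 ratings
taisch = np.clip(rng.normal(3.8, 0.4, size=200), 1, 5)   # composite TAISCH scores (1-5 scale)

# Aggregate patient-level ratings to one mean score per provider before correlating
provider_means = np.array([taisch[provider_ids == p].mean() for p in range(27)])

# Hypothetical Press Ganey physician-care scores for the same 27 providers
pg_scores = rng.normal(85, 5, size=27)

r = np.corrcoef(provider_means, pg_scores)[0, 1]         # provider-level Pearson correlation
print(f"r = {r:.2f}")
```

The key design choice mirrored here is the unit of analysis: patient ratings are first averaged within each provider, and the correlation with PG is computed across providers rather than across patients.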

RESULTS

A total of 330 patients were considered eligible through medical record screening. Of those, 73 (22%) had already been discharged by the time the research assistant attempted to enroll them after 2 days of care by a single physician. Of the 257 inpatients approached, 30 (12%) refused to participate. Among the 227 patients who consented, 24 (9%) were excluded because they were unable to correctly identify their hospitalist provider. A total of 203 patients were enrolled, and each patient rated a single hospitalist; a total of 29 unique hospitalists were assessed by these patients. The patients' mean age was 60 years, 114 (56%) were female, and 61 (30%) were of nonwhite race (Table 1). The hospitalist physicians' demographic information is also shown in Table 1. Two hospitalists with fewer than 4 surveys collected were excluded from the analysis. Thus, the final analysis included 200 unique patients, each assessing 1 of the 27 hospitalists (mean=7.4 surveys per hospitalist).
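The enrollment cascade above can be verified with simple arithmetic (a sanity check on the reported counts, not study code):

```python
# Counts reported in the Results section
screened = 330
discharged_before_approach = 73                      # discharged before enrollment was attempted
approached = screened - discharged_before_approach   # 257 inpatients approached
refused = 30
consented = approached - refused                     # 227 patients consented
could_not_identify = 24
enrolled = consented - could_not_identify            # 203 patients enrolled

# After dropping 2 hospitalists with <4 surveys, 200 patients rated 27 hospitalists
analyzed_patients, analyzed_hospitalists = 200, 27
surveys_per_hospitalist = round(analyzed_patients / analyzed_hospitalists, 1)  # 7.4

print(approached, consented, enrolled, surveys_per_hospitalist)  # prints: 257 227 203 7.4
```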

Characteristics of the 203 Patients and 29 Hospitalist Physicians Studied
Characteristic: Value
  • NOTE: Abbreviations: SD, standard deviation.

Patients, N=203
Age, y, mean (SD): 60.0 (17.2)
Female, n (%): 114 (56.1)
Nonwhite race, n (%): 61 (30.5)
Observation stay, n (%): 45 (22.1)
How are you feeling today? n (%):
  Very poor: 11 (5.5)
  Poor: 14 (7.0)
  Fair: 67 (33.5)
  Good: 71 (35.5)
  Very good: 33 (16.5)
  Excellent: 4 (2.0)
Hospitalists, N=29
Age, n (%):
  26–30 years: 7 (24.1)
  31–35 years: 8 (27.6)
  36–40 years: 12 (41.4)
  41–45 years: 2 (6.9)
Female, n (%): 11 (37.9)
International medical graduate, n (%): 18 (62.1)
Years in current practice, n (%):
  <1: 9 (31.0)
  1–2: 7 (24.1)
  3–4: 6 (20.7)
  5–6: 5 (17.2)
  7 or more: 2 (6.9)
Race, n (%):
  Caucasian: 4 (13.8)
  Asian: 19 (65.5)
  African/African American: 5 (17.2)
  Other: 1 (3.4)
Academic rank, n (%):
  Assistant professor: 9 (31.0)
  Clinical instructor: 10 (34.5)
  Clinical associate/nonfaculty: 10 (34.5)
Percentage of clinical effort, n (%):
  >70%: 6 (20.7)
  50%–70%: 19 (65.5)
  <50%: 4 (13.8)

Validation of TAISCH

On the 17‐item TAISCH administered, the 2 conditional questions ("When I asked to see Dr. X, s/he came within a reasonable amount of time." and "If Dr. X interacted with your family, how well did s/he deal with them?") were applicable to fewer than 40% of patients. As such, they were not included in the analysis.

Internal Structure Validity Evidence

Results from the factor analyses are shown in Table 2. The CFA modeling of a single factor solution with 15 items explained 42% of the total variance. The 27 hospitalists' average 15‐item TAISCH scores ranged from 3.25 to 4.28 (mean [standard deviation]=3.82 [0.24]; possible score range: 1–5). Reliability of the 15‐item TAISCH was appropriate (Cronbach's α=0.88).

Factor Loadings for 15‐Item TAISCH Measure Based on Confirmatory Factor Analysis
TAISCH (Cronbach's α=0.88): Factor Loading
  • NOTE: Abbreviations: TAISCH, Tool to Assess Inpatient Satisfaction with Care from Hospitalists. *Response category: below average, average, above average, top 10% of all doctors, the very best of any doctor I have come across. The remaining items used 1 of the following response categories: none, a little, some, a lot, tremendously; strongly disagree, disagree, neutral, agree, strongly agree; poor, fair, good, very good, excellent; or never, rarely, sometimes, most of the time, every single time.

Compared to all other physicians that you know, how do you rate Dr. X's compassion, empathy, and concern for you?*: 0.91
Compared to all other physicians that you know, how do you rate Dr. X's ability to communicate with you?*: 0.88
Compared to all other physicians that you know, how do you rate Dr. X's skill in diagnosing and treating your medical conditions?*: 0.88
Compared to all other physicians that you know, how do you rate Dr. X's fund of knowledge?*: 0.80
How much confidence do you have in Dr. X's plan for your care?: 0.71
Dr. X kept me informed of the plans for my care.: 0.69
Effectively preparing patients for discharge is an important part of what doctors in the hospital do. How well has Dr. X done in getting you ready to be discharged from the hospital?: 0.67
Dr. X let me talk without interrupting.: 0.60
Dr. X encouraged me to ask questions.: 0.59
Dr. X checks to be sure I understood everything.: 0.55
I sensed Dr. X was in a rush when s/he was with me. (reverse coded): 0.55
Dr. X showed interest in my views and opinions about my health.: 0.54
Dr. X discusses options with me and involves me in decision making.: 0.47
Dr. X asked permission to enter the room and waited for an answer.: 0.25
Dr. X sat down when s/he visited my bedside.: 0.14

As shown in Table 2, 2 variables had factor loadings below the minimum threshold of 0.40 in the CFA for the 15‐item TAISCH when modeling a single factor solution. Both items were related to physician etiquette: "Dr. X asked permission to enter the room and waited for an answer." and "Dr. X sat down when he/she visited my bedside."

When the CFA was executed again as a single factor solution omitting the 2 items that demonstrated lower factor loadings, the 13‐item single factor solution explained 47% of the total variance, and the Cronbach's α was 0.92.

EFA models were also explored for potential alternate solutions. These analyses resulted in lower reliability (low Cronbach's α), weak construct operationalization, and poor face validity (as judged by the research team).

Both the 13‐ and 15‐item single factor solutions were examined further to determine whether associations with criterion variables (pain, empathy) differed substantively. Given that results were similar across both solutions, subsequent analyses were completed with the 15‐item single factor solution, which included the etiquette‐related variables.

Relationship to Other Variables Validity Evidence

The association between the 15‐item TAISCH and the JSPPPE was significantly positive (β=12.2, P<0.001). Additionally, there was a positive and significant association between TAISCH and the overall satisfaction question, "I would recommend Dr. X to my loved ones should they need hospitalization in the future." (β=11.2, P<0.001). This overall satisfaction question was also associated positively with the JSPPPE (β=13.2, P<0.001). There was a statistically significant negative association between TAISCH and the Wong‐Baker pain scale (β=−2.42, P<0.05).

The PG data from the same period were available for 24 out of 27 hospitalists. The number of PG surveys collected per provider ranged from 5 to 30 (mean=14). At the provider level, there was not a statistically significant correlation between PG and the 15‐item TAISCH (P=0.51). Of note, PG was also not significantly correlated with the overall satisfaction question, JSPPPE, or the Wong‐Baker pain scale (all P>0.10).

DISCUSSION

Our new metric, TAISCH, was found to be a reliable and valid measurement tool to assess patient satisfaction with the hospitalist physician's care. Because we only surveyed patients who could correctly identify their hospitalist physicians after interacting for at least 2 consecutive days, the attribution of the data to the individual hospitalist is almost certainly correct. The high participation rate indicates that the patients were not hesitant about rating their hospitalist provider's quality of care, even when asked while they were still in the hospital.

The majority of the patients approached were able to correctly identify their hospitalist provider. This rate (91%) was much higher than the rate previously reported in the literature when a picture card was used to improve provider recognition.[22] It is also likely that having 1 physician, rather than a team of physicians, care for patients makes it easier for patients to recall the name and recognize the face of their inpatient provider.

The CFA of TAISCH showed good fit but suggested that 2 variables, both from Kahn's etiquette‐based medicine (EtBM) checklist,[9] may not load in the same way as the other items. Tackett and colleagues reported that hospitalists who performed more EtBM behaviors scored higher on PG evaluations.[23] Such results, along with the comparable explanation of variance and reliability, convinced us to retain these 2 items in the final 15‐item TAISCH as dictated by the CFA. Although the literature supports the fact that physician etiquette is related to the perception of high‐quality care, it is possible that these 2 questions were answered differently (and thereby failed to load the same way) because environmental limitations may prevent physicians from performing these behaviors consistently. We prefer the 15‐item version of TAISCH, and future studies may provide additional information about its performance as compared to the 13‐item adaptation.

The significantly negative association between the Wong‐Baker pain scale and TAISCH stresses the importance of adequately addressing and treating the patient's pain. Hanna et al. showed that patients' perception of pain control was associated with their overall satisfaction score measured by HCAHPS.[24] The association seen in our study was not unexpected, because TAISCH is administered while patients are acutely ill in the hospital, when pain is likely more prevalent and severe than it is in the postdischarge setting (when the HCAHPS or PG surveys are administered). Interestingly, Hanna et al. discovered that the team's attention to controlling pain was more strongly correlated with overall satisfaction than was the actual pain control.[24] These data, now confirmed by our study, should serve to remind us that a hospitalist's concern and effort to relieve pain may augment patient satisfaction with the quality of care, even when eliminating the pain may be difficult or impossible.

TAISCH was found not to be correlated with PG scores. Several explanations for this deserve consideration. First, the postdischarge PG survey that is used at our institution does not list the names of the specific hospitalist providers for the patients to evaluate. Because patients encounter multiple physicians during their hospital stay (eg, emergency department physicians, hospitalist providers, consultants), it is possible that patients are not reflecting on the named doctor when assessing the attending of record on the PG mailed questionnaire. Second, the patients who responded to TAISCH and to PG were different; almost all patients completed TAISCH, as opposed to the small minority who decided to respond to the PG survey. Third, TAISCH measures the physicians' performance more comprehensively, with a larger number of variables. Last, it is possible that we were underpowered to detect a significant correlation, because there were only 24 providers who had data from both TAISCH and PG. However, our results endorse using caution in interpreting PG scores for individual hospitalists' performance, particularly for high‐stakes consequences (including the provision of incentives to high performers and the insistence on remediation for low performers).

Several limitations of this study should be considered. First, only hospitalist providers from a single division were assessed. This may limit the generalizability of our findings. Second, although patients were assured about the confidentiality of their responses, they might have provided more favorable answers because they may have felt uncomfortable rating their physician poorly. One review article on the measurement of healthcare satisfaction indicated that impersonal (mailed) methods result in more criticism and lower satisfaction than assessments made in person using interviews. As the trade‐off, mailed surveys yield lower response rates that may introduce other forms of bias.[25] Even on the HCAHPS survey report for the same period from our institution, 78% of patients gave top box ratings for our doctors' communication skills, which is at the state average.[26] Similarly, a study that used postdischarge telephone interviews to collect patients' satisfaction with hospitalists' care quality reported an average score of 4.20 out of 5.[27] These findings confirm that highly skewed ratings are common for these types of surveys, irrespective of how or when the data are collected.

Despite the aforementioned limitations, TAISCH use need not be limited to hospitalist physicians. It may also be used to assess allied health professionals' or trainees' performance, which cannot be assessed by HCAHPS or PG. Applying TAISCH in different hospital settings (eg, emergency department or critical care units), assessing hospitalists' reactions to TAISCH, learning whether TAISCH leads to hospitalists' behavior changes, and appraising whether performance can improve in response to coaching interventions for those performing poorly are all research questions that merit additional consideration.

CONCLUSION

TAISCH allows for obtaining patient satisfaction data that are highly attributable to specific hospitalist providers. The data collection method also permits high response rates so that input comes from almost all patients. The timeliness of the TAISCH assessments also makes it possible for real‐time service recovery, which is impossible with other commonly used metrics assessing patient satisfaction. Our next step will include testing the most effective way to provide feedback to providers and to coach these individuals so as to improve performance.

Acknowledgements

The authors would like to thank Po‐Han Chen at the BEAD Core for his statistical analysis support.

Disclosures: This study was supported by the Johns Hopkins Osler Center for Clinical Excellence. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. The authors report no conflicts of interest.

Patient satisfaction scores are being reported publicly and will affect hospital reimbursement rates under Hospital Value Based Purchasing.[1] Patient satisfaction scores are currently obtained through metrics such as Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS)[2] and Press Ganey (PG)[3] surveys. Such surveys are mailed to a variable proportion of patients following their discharge from the hospital, and ask patients about the quality of care they received during their admission. Domains assessed regarding the patients' inpatient experiences range from room cleanliness to the amount of time the physician spent with them.

The Society of Hospital Medicine (SHM), the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] Ideally, accurate information would be delivered as feedback to individual providers in a timely manner in hopes of improving performance; however, the current methodology has shortcomings that limit its usefulness. First, several hospitalists and consultants may be involved in the care of 1 patient during the hospital stay, but the score can only be tied to a single physician. Current survey methods attribute all responses to that particular doctor, usually the attending of record, although patients may very well be thinking of other physicians when responding to questions. Second, only a few questions on the surveys ask about doctors' performance. Aforementioned surveys have 3 to 8 questions about doctors' care, which limits the ability to assess physician performance comprehensively. Finally, the surveys are mailed approximately 1 week after the patient's discharge, usually without a name or photograph of the physician to facilitate patient/caregiver recall. This time lag and lack of information to prompt patient recall likely lead to impreciseness in assessment. In addition, the response rates to these surveys are typically low, around 25% (personal oral communication with our division's service excellence stakeholder Dr. L.P. in September 2013). These deficiencies limit the usefulness of such data in coaching individual providers about their performance because they cannot be delivered in a timely fashion, and the reliability of the attribution is suspect.

With these considerations in mind, we developed and validated a new survey metric, the Tool to Assess Inpatient Satisfaction with Care from Hospitalists (TAISCH). We hypothesized that the results would be different from those collected using conventional methodologies.

PATIENTS AND METHODS

Study Design and Subjects

Our cross‐sectional study surveyed inpatients under the care of hospitalist physicians working without the support of trainees or allied health professionals (such as nurse practitioners or physician assistants). The subjects were hospitalized at a 560‐bed academic medical center on a general medical floor between September 2012 and December 2012. All participating hospitalist physicians were members of a division of hospital medicine.

TAISCH Development

Several steps were taken to establish content validity evidence.[5] We developed TAISCH by building upon the theoretical underpinnings of the quality of care measures that are endorsed by the SHM Membership Committee Guidelines for Hospitalists Patient Satisfaction.[4] This directive recommends that patient satisfaction with hospitalist care should be assessed across 6 domains: physician availability, physician concern for patients, physician communication skills, physician courteousness, physician clinical skills, and physician involvement of patients' families. Other existing validated measures tied to the quality of patient care were reviewed, and items related to the physician's care were considered for inclusion to further substantiate content validity.[6, 7, 8, 9, 10, 11, 12] Input from colleagues with expertise in clinical excellence and service excellence was also solicited. This included the director of Hopkins' Miller Coulson Academy of Clinical Excellence and the grant review committee members of the Johns Hopkins Osler Center for Clinical Excellence (who funded this study).[13, 14]

The preliminary instrument contained 17 items, including 2 conditional questions, and was first pilot tested on 5 hospitalized patients. We assessed the time it took to administer the surveys as well as patients' comments and questions about each survey item. This resulted in minor wording changes for clarification and changes in the order of the questions. We then pursued a second phase of piloting using the revised survey, which was administered to >20 patients. There were no further adjustments as patients reported that TAISCH was clear and concise.

From interviews with patients after pilot testing, it became clear that respondents were carefully reflecting on the quality of care and performance of their treating physician, thereby generating response process validity evidence.[5]

Data Collection

To ensure that patients had perspective upon which to base their assessment, they were only asked to appraise physicians after being cared for by the same hospitalist provider for at least 2 consecutive days. Patients who were on isolation, those who were non‐English speaking, and those with impaired decision‐making capacity (such as mental status change or dementia) were excluded. Patients were enrolled only if they could correctly name their doctor or at least identify a photograph of their hospitalist provider on a page that included pictures of all division members. Those patients who were able to name the provider or correctly select the provider from the page of photographs were considered to have correctly identified their provider. In order to ensure the confidentiality of the patients and their responses, all data collections were performed by a trained research assistant who had no patient‐care responsibilities. The survey was confidential, did not include any patient identifiers, and patients were assured that providers would never see their individual responses. The patients were given options to complete TAISCH either by verbally responding to the research assistant's questions, filling out the paper survey, or completing the survey online using an iPad at the bedside. TAISCH specifically asked the patients to rate their hospitalist provider's performance along several domains: communication skills, clinical skills, availability, empathy, courteousness, and discharge planning; 5‐point Likert scales were used exclusively.

In addition to the TAISCH questions, we asked patients (1) an overall satisfaction question, I would recommend Dr. X to my loved ones should he or she need hospitalization in the future (response options: strongly disagree, disagree, neutral, agree, strongly agree), (2) their pain level using the Wong‐Baker pain scale,[15] and (3) the Jefferson Scale of Patient's Perceptions of Physician Empathy (JSPPPE).[16, 17] Associations between TAISCH and these variables (as well as PG data) would be examined to ascertain relations to other variables validity evidence.[5] Specifically, we sought to ascertain discriminant and convergent validity where the TAISCH is associated positively with constructs where we expect positive associations (convergent) and negatively with those we expect negative associations (discriminant).[18] The Wong‐Baker pain scale is a recommended pain‐assessment tool by the Joint Commission on Accreditation of Healthcare Organizations, and is widely used in hospitals and various healthcare settings.[19] The scale has a range from 0 to 10 (0 for no pain and 10 indicating the worst pain). The hypothesis was that the patients' pain levels would adversely affect their perception of the physician's performance (discriminant validity). JSPPPE is a 5‐item validated scale developed to measure patients' perceptions of their physicians' empathic engagement. It has significant correlations with the American Board of Internal Medicine's patient rating surveys, and it is used in standardized patient examinations for medical students.[20] The hypothesis was that patient perception about the quality of physician care would correlate positively with their assessment of the physician's empathy (convergent validity).

Although all of the hospitalist providers in the division consented to participate in this study, only hospitalist providers for whom at least 4 patient surveys were collected were included in the analysis. The study was approved by our institutional review board.

Data Analysis

All data were analyzed using Stata 11 (StataCorp, College Station, TX). Data were analyzed to determine the potential for a single comprehensive assessment of physician performance with confirmatory factor analysis (CFA) using maximum likelihood extraction. Additional factor analyses examined the potential for a multiple factor solution using exploratory factor analysis (EFA) with principle component factor analysis and varimax rotation. Examination of scree plots, factor loadings for individual items greater than 0.40, eigenvalues greater than 1.0, and substantive meaning of the factors were all taken into consideration when determining the number of factors to retain from factor analytic models.[21] Cronbach's s were calculated for each factor to assess reliability. These data provided internal structure validity evidence (demonstrated by acceptable reliability and factor structure) to TAISCH.[5]

After arriving at the final TAISCH scale, composite TAISCH scores were computed. Associations between composite TAISCH scores with the Wong‐Baker pain scale, the JSPPPE, and the overall satisfaction question were assessed using linear regression with the svy command in Stata to account for the nested design of having each patient report on a single hospitalist provider. Correlation between composite TAISCH score and PG physician care scores (comprised of 5 questions: time physician spent with you, physician concern with questions/worries, physician kept you informed, friendliness/courtesy of physician, and skill of physician) were assessed at the provider level when both data were available.

RESULTS

A total of 330 patients were considered to be eligible through medical record screening. Of those patients, 73 (22%) were already discharged by the time the research assistant attempted to enroll them after 2 days of care by a single physician. Of 257 inpatients approached, 30 patients (12%) refused to participate. Among the 227 consented patients, 24 (9%) were excluded as they were unable to correctly identify their hospitalist provider. A total of 203 patients were enrolled, and each patient rated a single hospitalist; a total of 29 unique hospitalists were assessed by these patients. The patients' mean age was 60 years, 114 (56%) were female, and 61 (30%) were of nonwhite race (Table 1). The hospitalist physicians' demographic information is also shown in Table 1. Two hospitalists with fewer than 4 surveys collected were excluded from the analysis. Thus, final analysis included 200 unique patients assessing 1 of the 27 hospitalists (mean=7.4 surveys per hospitalist).

Characteristics of the 203 Patients and 29 Hospitalist Physicians Studied
Characteristics | Value
  • NOTE: Abbreviations: SD, standard deviation.

Patients, N=203
Age, y, mean (SD) | 60.0 (17.2)
Female, n (%) | 114 (56.1)
Nonwhite race, n (%) | 61 (30.5)
Observation stay, n (%) | 45 (22.1)
How are you feeling today? n (%)
  Very poor | 11 (5.5)
  Poor | 14 (7.0)
  Fair | 67 (33.5)
  Good | 71 (35.5)
  Very good | 33 (16.5)
  Excellent | 4 (2.0)
Hospitalists, N=29
Age, n (%)
  26–30 years | 7 (24.1)
  31–35 years | 8 (27.6)
  36–40 years | 12 (41.4)
  41–45 years | 2 (6.9)
Female, n (%) | 11 (37.9)
International medical graduate, n (%) | 18 (62.1)
Years in current practice, n (%)
  <1 | 9 (31.0)
  1–2 | 7 (24.1)
  3–4 | 6 (20.7)
  5–6 | 5 (17.2)
  7 or more | 2 (6.9)
Race, n (%)
  Caucasian | 4 (13.8)
  Asian | 19 (65.5)
  African/African American | 5 (17.2)
  Other | 1 (3.4)
Academic rank, n (%)
  Assistant professor | 9 (31.0)
  Clinical instructor | 10 (34.5)
  Clinical associate/nonfaculty | 10 (34.5)
Percentage of clinical effort, n (%)
  >70% | 6 (20.7)
  50%–70% | 19 (65.5)
  <50% | 4 (13.8)

Validation of TAISCH

Of the 17 TAISCH items administered, the 2 conditional questions ("When I asked to see Dr. X, s/he came within a reasonable amount of time." and "If Dr. X interacted with your family, how well did s/he deal with them?") were applicable to fewer than 40% of patients. As such, they were not included in the analysis.

Internal Structure Validity Evidence

Results from the factor analyses are shown in Table 2. The CFA modeling of a single factor solution with 15 items explained 42% of the total variance. The 27 hospitalists' average 15-item TAISCH scores ranged from 3.25 to 4.28 (mean [standard deviation]=3.82 [0.24]; possible score range: 1–5). Reliability of the 15-item TAISCH was acceptable (Cronbach's α=0.88).

Factor Loadings for 15‐Item TAISCH Measure Based on Confirmatory Factor Analysis
TAISCH (Cronbach's α=0.88) | Factor Loading
  • NOTE: Abbreviations: TAISCH, Tool to Assess Inpatient Satisfaction with Care from Hospitalists. *Response category: below average, average, above average, top 10% of all doctors, the very best of any doctor I have come across. Response category: none, a little, some, a lot, tremendously. Response category: strongly disagree, disagree, neutral, agree, strongly agree. Response category: poor, fair, good, very good, excellent. Response category: never, rarely, sometimes, most of the time, every single time.

Compared to all other physicians that you know, how do you rate Dr. X's compassion, empathy, and concern for you?* | 0.91
Compared to all other physicians that you know, how do you rate Dr. X's ability to communicate with you?* | 0.88
Compared to all other physicians that you know, how do you rate Dr. X's skill in diagnosing and treating your medical conditions?* | 0.88
Compared to all other physicians that you know, how do you rate Dr. X's fund of knowledge?* | 0.80
How much confidence do you have in Dr. X's plan for your care? | 0.71
Dr. X kept me informed of the plans for my care. | 0.69
Effectively preparing patients for discharge is an important part of what doctors in the hospital do. How well has Dr. X done in getting you ready to be discharged from the hospital? | 0.67
Dr. X let me talk without interrupting. | 0.60
Dr. X encouraged me to ask questions. | 0.59
Dr. X checks to be sure I understood everything. | 0.55
I sensed Dr. X was in a rush when s/he was with me. (reverse coded) | 0.55
Dr. X showed interest in my views and opinions about my health. | 0.54
Dr. X discusses options with me and involves me in decision making. | 0.47
Dr. X asked permission to enter the room and waited for an answer. | 0.25
Dr. X sat down when s/he visited my bedside. | 0.14

As shown in Table 2, 2 variables had factor loadings below the minimum threshold of 0.40 in the CFA for the 15‐item TAISCH when modeling a single factor solution. Both items were related to physician etiquette: Dr. X asked permission to enter the room and waited for an answer. and Dr. X sat down when he/she visited my bedside.

When the CFA was executed again as a single factor solution omitting the 2 items with lower factor loadings, the 13-item single factor solution explained 47% of the total variance, and the Cronbach's α was 0.92.

EFA models were also explored for potential alternate solutions. These analyses resulted in lower reliability (low Cronbach's α), weak construct operationalization, and poor face validity (as judged by the research team).

Both the 13‐ and 15‐item single factor solutions were examined further to determine whether associations with criterion variables (pain, empathy) differed substantively. Given that results were similar across both solutions, subsequent analyses were completed with the 15‐item single factor solution, which included the etiquette‐related variables.

Relationship to Other Variables Validity Evidence

The association between the 15-item TAISCH and the JSPPPE was significantly positive (β=12.2, P<0.001). Additionally, there was a positive and significant association between TAISCH and the overall satisfaction question, "I would recommend Dr. X to my loved ones should they need hospitalization in the future." (β=11.2, P<0.001). This overall satisfaction question was also positively associated with the JSPPPE (β=13.2, P<0.001). There was a statistically significant negative association between TAISCH and the Wong-Baker pain scale (β=−2.42, P<0.05).

The PG data from the same period were available for 24 out of 27 hospitalists. The number of PG surveys collected per provider ranged from 5 to 30 (mean=14). At the provider level, there was not a statistically significant correlation between PG and the 15‐item TAISCH (P=0.51). Of note, PG was also not significantly correlated with the overall satisfaction question, JSPPPE, or the Wong‐Baker pain scale (all P>0.10).

DISCUSSION

Our new metric, TAISCH, was found to be a reliable and valid measurement tool to assess patient satisfaction with the hospitalist physician's care. Because we only surveyed patients who could correctly identify their hospitalist physicians after interacting for at least 2 consecutive days, the attribution of the data to the individual hospitalist is almost certainly correct. The high participation rate indicates that the patients were not hesitant about rating their hospitalist provider's quality of care, even when asked while they were still in the hospital.

The majority of the patients approached were able to correctly identify their hospitalist provider. This rate (91%) was much higher than the rate previously reported in the literature where a picture card was used to improve provider recognition.[22] It is also likely that having 1 physician, rather than a team of physicians, care for each patient makes it easier for patients to recall the name and recognize the face of their inpatient provider.

The CFA of TAISCH showed good fit but suggested that 2 variables, both from Kahn's etiquette-based medicine (EtBM) checklist,[9] may not load in the same way as the other items. Tackett and colleagues reported that hospitalists who performed more EtBM behaviors scored higher on PG evaluations.[23] Such results, along with the comparable explained variance and reliability, convinced us to retain these 2 items in the final 15-item TAISCH as dictated by the CFA. Although the literature supports the idea that physician etiquette is related to the perception of high-quality care, it is possible that these 2 questions were answered differently (and thereby failed to load the same way) because environmental limitations may prevent physicians from performing these behaviors consistently. We prefer the 15-item version of TAISCH, and future studies may provide additional information about its performance as compared to the 13-item adaptation.

The significantly negative association between the Wong-Baker pain scale and TAISCH stresses the importance of adequately addressing and treating the patient's pain. Hanna et al. showed that patients' perception of pain control was associated with their overall satisfaction score as measured by HCAHPS.[24] The association seen in our study was not unexpected, because TAISCH is administered while patients are acutely ill in the hospital, when pain is likely more prevalent and severe than in the postdischarge setting (when the HCAHPS or PG surveys are administered). Interestingly, Hanna et al. discovered that the team's attention to controlling pain was more strongly correlated with overall satisfaction than was the actual pain control.[24] These data, now confirmed by our study, should serve to remind us that a hospitalist's concern and effort to relieve pain may augment patient satisfaction with the quality of care, even when eliminating the pain may be difficult or impossible.

TAISCH was found not to be correlated with PG scores. Several explanations for this deserve consideration. First, the postdischarge PG survey used at our institution does not list the name of the specific hospitalist provider for the patient to evaluate. Because patients encounter multiple physicians during their hospital stay (eg, emergency department physicians, hospitalist providers, consultants), it is possible that patients are not reflecting on the named doctor when assessing the attending of record on the PG mailed questionnaire. Second, the populations of patients who responded to TAISCH and to PG differed; almost all patients completed TAISCH, as opposed to the small minority who decided to respond to the PG survey. Third, TAISCH measures the physicians' performance more comprehensively, with a larger number of variables. Last, it is possible that we were underpowered to detect a significant correlation, because only 24 providers had data from both TAISCH and PG. However, our results endorse using caution in interpreting PG scores for an individual hospitalist's performance, particularly for high-stakes consequences (including the provision of incentives to high performers and the insistence on remediation for low performers).

Several limitations of this study should be considered. First, only hospitalist providers from a single division were assessed, which may limit the generalizability of our findings. Second, although patients were assured about the confidentiality of their responses, they might have provided more favorable answers because they felt uncomfortable rating their physician poorly. One review article on the measurement of healthcare satisfaction indicated that impersonal (mailed) methods result in more criticism and lower satisfaction than assessments made in person using interviews; as the trade-off, mailed surveys yield lower response rates that may introduce other forms of bias.[25] Even on the HCAHPS survey report for the same period from our institution, 78% of patients gave top box ratings for our doctors' communication skills, which is at the state average.[26] Similarly, a study that used postdischarge telephone interviews to collect patients' satisfaction with hospitalists' care quality reported an average score of 4.20 out of 5.[27] These findings confirm that highly skewed ratings are common for these types of surveys, irrespective of how or when the data are collected.

Despite the aforementioned limitations, TAISCH use need not be limited to hospitalist physicians. It may also be used to assess allied health professionals' or trainees' performance, which cannot be assessed by HCAHPS or PG. Applying TAISCH in different hospital settings (eg, emergency departments or critical care units), assessing hospitalists' reactions to TAISCH, learning whether TAISCH leads to changes in hospitalists' behavior, and appraising whether performance can improve in response to coaching interventions for those performing poorly are all research questions that merit additional consideration.

CONCLUSION

TAISCH allows for obtaining patient satisfaction data that are highly attributable to specific hospitalist providers. The data collection method also permits high response rates so that input comes from almost all patients. The timeliness of the TAISCH assessments also makes it possible for real‐time service recovery, which is impossible with other commonly used metrics assessing patient satisfaction. Our next step will include testing the most effective way to provide feedback to providers and to coach these individuals so as to improve performance.

Acknowledgements

The authors would like to thank Po‐Han Chen at the BEAD Core for his statistical analysis support.

Disclosures: This study was supported by the Johns Hopkins Osler Center for Clinical Excellence. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. The authors report no conflicts of interest.

References
  1. Brumenthal D, Jena AB. Hospital value-based purchasing. J Hosp Med. 2013;8:271–277.
  2. HCAHPS survey. Hospital Consumer Assessment of Healthcare Providers and Systems website. Available at: http://www.hcahpsonline.org/home.aspx. Accessed August 27, 2011.
  3. Press Ganey survey. Press Ganey website. Available at: http://www.pressganey.com/index.aspx. Accessed February 12, 2013.
  4. Society of Hospital Medicine. Membership Committee Guidelines for Hospitalists Patient Satisfaction Surveys. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Practice_Resources119:166.e7e16.
  5. Makoul G, Krupat E, Chang CH. Measuring patient views of physician communication skills: development and testing of the Communication Assessment Tool. Patient Educ Couns. 2007;67:333–342.
  6. Jenkinson C, Coulter A, Bruster S. The Picker Patient Experience Questionnaire: development and validation using data from in-patient surveys in five countries. Int J Qual Health Care. 2002;14:353–358.
  7. The Patient Satisfaction Questionnaire from RAND Health. RAND Health website. Available at: http://www.rand.org/health/surveys_tools/psq.html. Accessed December 30, 2011.
  8. Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358:1988–1989.
  9. Christmas C, Kravet S, Durso C, Wright SM. Defining clinical excellence in academic medicine: a qualitative study of the master clinicians. Mayo Clin Proc. 2008;83:989–994.
  10. Wright SM, Christmas C, Burkhart K, Kravet S, Durso C. Creating an academy of clinical excellence at Johns Hopkins Bayview Medical Center: a 3-year experience. Acad Med. 2010;85:1833–1839.
  11. Bendapudi NM, Berry LL, Keith FA, Turner Parish J, Rayburn WL. Patients' perspectives on ideal physician behaviors. Mayo Clin Proc. 2006;81(3):338–344.
  12. The Miller-Coulson Academy of Clinical Excellence at Johns Hopkins. Available at: http://www.hopkinsmedicine.org/innovative/signature_programs/academy_of_clinical_excellence/. Accessed April 25, 2014.
  13. Osler Center for Clinical Excellence at Johns Hopkins. Available at: http://www.hopkinsmedicine.org/johns_hopkins_bayview/education_training/continuing_education/osler_center_for_clinical_excellence. Accessed April 25, 2014.
  14. Wong-Baker FACES Foundation. Available at: http://www.wongbakerfaces.org. Accessed July 8, 2013.
  15. Kane GC, Gotto JL, Mangione S, West S, Hojat M. Jefferson Scale of Patient's Perceptions of Physician Empathy: preliminary psychometric data. Croat Med J. 2007;48:81–86.
  16. Glaser KM, Markham FW, Adler HM, McManus PR, Hojat M. Relationships between scores on the Jefferson Scale of Physician Empathy, patient perceptions of physician empathy, and humanistic approaches to patient care: a validity study. Med Sci Monit. 2007;13(7):CR291–CR294.
  17. Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychol Bull. 1959;56(2):81–105.
  18. The Joint Commission. Facts about pain management. Available at: http://www.jointcommission.org/pain_management. Accessed April 25, 2014.
  19. Berg K, Majdan JF, Berg D, et al. Medical students' self-reported empathy and simulated patients' assessments of student empathy: an analysis by gender and ethnicity. Acad Med. 2011;86(8):984–988.
  20. Gorsuch RL. Factor Analysis. Hillsdale, NJ: Lawrence Erlbaum Associates; 1983.
  21. Arora VM, Schaninger C, D'Arcy M, et al. Improving inpatients' identification of their doctors: use of FACE cards. Jt Comm J Qual Patient Saf. 2009;35(12):613–619.
  22. Tackett S, Tad-y D, Rios R, et al. Appraising the practice of etiquette-based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908–913.
  23. Hanna MN, Gonzalez-Fernandez M, Barrett AD, et al. Does patient perception of pain control affect patient satisfaction across surgical units in a tertiary teaching hospital? Am J Med Qual. 2012;27:411–416.
  24. Crow R, Gage H, Hampson S, et al. The measurement of satisfaction with health care: implications for practice from a systematic review of the literature. Health Technol Assess. 2002;6(32):1–244.
  25. Centers for Medicare 7(2):131136.
Issue
Journal of Hospital Medicine - 9(9)
Page Number
553-558
Display Headline
Development and validation of the tool to assess inpatient satisfaction with care from hospitalists
Article Source
© 2014 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Haruka Torok, MD, Johns Hopkins University School of Medicine, Johns Hopkins Bayview Medical Center, 5200 Eastern Ave., MFL Bldg, West Tower 6th Floor CIMS Suite, Baltimore, MD 21224; Telephone: 410-550-5018; Fax: 410-550-2972; E-mail: htorok1@jhmi.edu

Sepsis Outcomes Across Settings

Display Headline
Does sepsis treatment differ between primary and overflow intensive care units?

Sepsis is a major cause of death in hospitalized patients.1–3 It is recommended that patients with sepsis be treated with early appropriate antibiotics, as well as early goal-directed therapy including fluid and vasopressor support according to evidence-based guidelines.4–6 Following such evidence-based protocols and process-of-care interventions has been shown to be associated with better patient outcomes, including decreased mortality.7, 8

Most patients with severe sepsis are cared for in intensive care units (ICUs). At times, there are no beds available in the primary ICU and patients presenting to the hospital with sepsis are cared for in other units. Patients admitted to a non‐preferred clinical inpatient setting are sometimes referred to as overflow.9 ICUs can differ significantly in staffing patterns, equipment, and training.10 It is not known if overflow sepsis patients receive similar care when admitted to non‐primary ICUs.

At our hospital, we have an active bed management system led by the hospitalist division.11 This system includes protocols to place sepsis patients in the overflow ICU if the primary ICU is full. We hypothesized that process‐of‐care interventions would be more strictly adhered to when sepsis patients were in the primary ICU rather than in the overflow unit at our institution.

METHODS

Design

This was a retrospective cohort study of all patients with sepsis admitted to either the primary medical intensive care unit (MICU) or the overflow cardiac intensive care unit (CICU) at our hospital between July 2009 and February 2010. We reviewed the admission database starting with the month of February 2010 and proceeded backwards, month by month, until we reached the target number of patients.

Setting

The study was conducted at our 320-bed, university-affiliated academic medical center in Baltimore, MD. The MICU and the CICU are closed units that are located adjacent to each other and have 12 beds each. They are staffed by separate pools of attending physicians trained in pulmonary/critical care medicine and cardiovascular diseases, respectively, and no attending physician attends in both units. During the study period, there were 10 unique MICU and 14 unique CICU attending physicians; while most attending physicians covered the unit for 14 days, none of the physicians were on service more than 2 of the 2-week blocks (28 days). Each unit is additionally staffed by fellows of the respective specialties, and internal medicine residents and interns belonging to the same residency program (who rotate through both ICUs). Residents and fellows are generally assigned to these ICUs for 4 continuous weeks. The assignment of specific attendings, fellows, and residents to either ICU is performed by individual division administrators on a rotational basis based on residency, fellowship, and faculty service requirements. The teams in each ICU function independently of each other. For patients requiring the assistance of the other specialty (pulmonary medicine or cardiology), guidance is conferred via an official consultation. Orders on patients in both ICUs are written by the residents using the same computerized provider order entry (CPOE) system under the supervision of their attending physicians. The nursing staff is exclusive to each ICU. The respiratory therapists spend time in both units. The nursing and respiratory therapy staff in both ICUs are similarly trained and certified, and the units have the same patient-to-nurse ratios.

Subjects

All patients admitted with a possible diagnosis of sepsis to either the MICU or CICU were identified by querying the hospital electronic triage database called etriage. This Web‐based application is used to admit patients to all the Medicine services at our hospital. We employed a wide case‐finding net using keywords that included pneumonia, sepsis, hypotension, high lactate, hypoxia, UTI (urinary tract infection)/urosepsis, SIRS (systemic inflammatory response syndrome), hypothermia, and respiratory failure. A total of 197 adult patients were identified. The charts and the electronic medical record (EMR) of these patients were then reviewed to determine the presence of a sepsis diagnosis using standard consensus criteria.12 Severe sepsis was defined by sepsis associated with organ dysfunction, hypoperfusion, or hypotension using criteria described by Bone et al.12

Fifty-six patients did not meet the criteria for sepsis and were excluded from the analysis; a total of 141 patients were included in the study. Because this was a pilot study, we did not have preliminary data regarding adherence to sepsis guidelines in overflow ICUs with which to calculate an appropriate sample size. However, in 2 recent studies of dedicated ICUs (Ferrer et al13 and Castellanos-Ortega et al14), the average adherence to a single measure, such as checking of the lactate level, was 27% pre-intervention and 62% post-intervention. With an alpha level of 0.05 and 80% power, one would need 31 patients in each unit to detect a difference of this size. Although these data do not necessarily apply to overflow ICUs or to combinations of processes, we used a goal of having at least 31 patients in each ICU.
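The 31-patients-per-unit figure follows from the standard normal-approximation formula for comparing two independent proportions (27% vs 62%, two-sided α = 0.05, 80% power). A sketch, assuming the usual pooled-variance form of the formula:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-proportion comparison,
    using the pooled normal-approximation formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    pbar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.27, 0.62))  # 31, matching the study's target
```

Raising the desired power (or shrinking the detectable difference) increases the required sample size, which is why the study treated 31 per unit as a minimum.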

The study was approved by the Johns Hopkins Institutional Review Board. The need for informed consent was waived given the retrospective nature of the study.

Data Extraction Process and Procedures

The clinical data was extracted from the EMR and patient charts using a standardized data extraction instrument, modified from a case report form (CRF) used and validated in previous studies.15, 16 The following procedures were used for the data extraction:

  • The data extractors included 4 physicians and 1 research assistant and were trained and tested by a single expert in data review and extraction.

  • Lab data were transcribed directly from the EMR. Calculation of acute physiology and chronic health evaluation (APACHE II) scores was done using the website http://www.sfar.org/subores2/apache22.html (Société Française d'Anesthésie et de Réanimation). Sepsis-related organ failure assessment (SOFA) scores were calculated using the usual criteria.17

  • Delivery of specific treatments and interventions, including their timing, was extracted from the EMR.

  • The attending physicians' notes were used as the final source to assign diagnoses (such as the presence of acute lung injury and the site of infection) and to record interventions.

 

Data Analysis

Analyses focused primarily on assessing whether patients were treated differently between the MICU and the CICU. The primary exposure variables were the process-of-care measures. We specifically used measurement of central venous saturation, checking of the lactate level, and administration of antibiotics within 60 minutes in patients with severe sepsis as our primary process-of-care measures.13 Continuous variables were reported as mean ± standard deviation, and Student's t tests were used to compare the 2 groups. Categorical data were expressed as frequency distributions, and chi-square tests were used to identify differences between the 2 groups. All tests were 2-tailed with statistical significance set at 0.05. Statistical analysis was performed using SPSS version 19.0 (IBM, Armonk, NY).
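For the 2×2 categorical comparisons, the Pearson chi-square statistic with 1 degree of freedom can be computed in closed form. The sketch below uses the DVT prophylaxis counts from Table 2 (74/100 in the MICU vs 20/41 in the CICU); the uncorrected statistic reproduces the reported P = 0.004, though the exact SPSS settings (eg, any continuity correction) are not stated in the paper:

```python
from math import sqrt
from statistics import NormalDist

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for a 2x2 table.

    With df = 1, chi2 = Z**2, so the two-sided p-value can be read
    off the standard normal distribution.
    """
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = 2 * (1 - NormalDist().cdf(sqrt(chi2)))
    return chi2, p

# DVT prophylaxis within 24 h: MICU 74 yes / 26 no; CICU 20 yes / 21 no.
chi2, p = chi2_2x2(74, 26, 20, 21)
```

The df = 1 shortcut avoids needing a chi-square CDF: the square root of the statistic is a standard normal deviate.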

To overcome data constraints, we created a dichotomous variable for each of the 3 primary processes‐of‐care (indicating receipt of process or not) and then combined them into 1 dichotomous variable indicating whether or not the patients with severe sepsis received all 3 primary processes‐of‐care. The combined variable was the key independent variable in the model.
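Constructing the combined indicator is a simple conjunction of the 3 process flags. A minimal sketch (the field names are illustrative, not taken from the study's dataset):

```python
def received_all_processes(cv_o2_checked: bool, lactate_checked: bool,
                           abx_within_60min: bool) -> int:
    """1 if the patient received all 3 primary processes-of-care, else 0."""
    return int(cv_o2_checked and lactate_checked and abx_within_60min)

# Two hypothetical severe sepsis patients.
patients = [
    {"cv": True, "lactate": True, "abx60": True},   # received all 3
    {"cv": True, "lactate": True, "abx60": False},  # missed one
]
combined = [received_all_processes(p["cv"], p["lactate"], p["abx60"])
            for p in patients]
```

Collapsing the 3 flags into one dichotomous variable trades granularity for a single interpretable predictor, which suits the small severe sepsis sample.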

We performed logistic regression analysis on patients with severe sepsis. The equation Logit[P(ICU type = CICU)] = α + β1(Combined) + β2(Age) describes the framework of the model, with ICU type as the dependent variable and the combined variable (receipt of all primary measures) as the independent variable, controlling for age. Logistic regression was performed using JMP (SAS Institute, Inc, Cary, NC).
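A logistic model of this form can be fit by Newton-Raphson maximum likelihood. The sketch below mirrors the model's structure on synthetic data; the coefficients and data are invented for illustration (the study fit its model in JMP):

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Newton-Raphson maximum likelihood fit for logistic regression.

    X must include an intercept column; returns the coefficient vector."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted probabilities
        grad = X.T @ (y - p)                        # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None])   # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

# Synthetic data in the spirit of Logit[P(ICU = CICU)] = a + b1*Combined + b2*Age.
rng = np.random.default_rng(0)
n = 300
combined = rng.integers(0, 2, n).astype(float)  # received all 3 processes?
age = rng.normal(65.0, 15.0, n)
true_logit = -0.2 + 0.3 * combined + 0.02 * (age - 65.0)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
X = np.column_stack([np.ones(n), combined, age - 65.0])
beta = fit_logit(X, y)
```

At the maximum likelihood solution the score equations are satisfied, so (with an intercept in the model) the mean fitted probability equals the observed event rate — a useful convergence check.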

We additionally performed a secondary analysis to explore possible predictors of mortality using a logistic regression model, with the event of death as the dependent variable, and age, APACHE II scores, combined processes‐of‐care, and ICU type included as independent variables.

RESULTS

There were 100 patients admitted to the MICU and 41 patients admitted to the CICU during the study period (Table 1). The majority of the patients were admitted to the ICUs directly from the emergency department (ED) (n = 129), with a small number of patients who were transferred from the Medicine floors (n = 12).

Baseline Patient Characteristics for the 141 Patients Admitted to Intensive Care Units With Sepsis During the Study Period
Characteristic | MICU (N = 100) | CICU (N = 41) | P Value
  • Abbreviations: CICU, cardiac intensive care unit; MICU, medical intensive care unit; APACHE II, acute physiology and chronic health evaluation; SOFA, sepsis-related organ failure assessment.

Age in years, mean ± SD | 67 ± 14.8 | 72 ± 15.1 | 0.11
Female, n (%) | 57 (57) | 27 (66) | 0.33
Patients with chronic organ insufficiency, n (%) | 59 (59) | 22 (54) | 0.56
Patients with severe sepsis, n (%) | 88 (88) | 21 (51) | <0.001
Patients needing mechanical ventilation, n (%) | 43 (43) | 14 (34) | 0.33
APACHE II score, mean ± SD | 25.53 ± 9.11 | 24.37 ± 9.53 | 0.50
SOFA score on day 1, mean ± SD | 7.09 ± 3.55 | 6.71 ± 4.57 | 0.60
Patients with acute lung injury on presentation, n (%) | 8 (8) | 2 (5) | 0.50

There were no significant differences between the 2 study groups in terms of age, sex, primary site of infection, mean APACHE II score, SOFA scores on day 1, chronic organ insufficiency, immune suppression, or need for mechanical ventilation (Table 1). The most common site of infection was lung. There were significantly more patients with severe sepsis in the MICU (88% vs 51%, P <0.001).

Sepsis Process‐of‐Care Measures

There were no significant differences in the proportion of severe sepsis patients who had central venous saturation checked (MICU: 46% vs CICU: 41%, P = 0.67), lactate level checked (95% vs 100%, P = 0.37), or received antibiotics within 60 minutes of presentation (75% vs 69%, P = 0.59) (Table 2). Multiple other processes and treatments were delivered similarly, as shown in Table 2.

Table 2. ICU Treatments and Processes-of-Care for Patients With Sepsis During the Study Period

Primary Process-of-Care Measures (Severe Sepsis Patients)          MICU (N = 88)    CICU (N = 21)   P Value
Patients with central venous oxygen saturation checked, n (%)*     31 (46)          7 (41)          0.67
Patients with lactate level checked, n (%)*                        58 (95)          16 (100)        0.37
Received antibiotics within 60 min, n (%)*                         46 (75)          11 (69)         0.59
Patients who had all 3 above processes and treatments, n (%)       19 (22)          4 (19)          0.79
Received vasopressor, n (%)                                        25 (28)          8 (38)          0.55

ICU Treatments and Processes (All Sepsis Patients)                 MICU (N = 100)   CICU (N = 41)   P Value
Fluid balance 24 h after admission in liters, mean ± SD            1.96 ± 2.42      1.42 ± 2.63     0.24
Patients who received stress dose steroids, n (%)                  11 (11)          4 (10)          0.83
Patients who received Drotrecogin alfa, n (%)                      0 (0)            0 (0)
Morning glucose 24 h after admission in mg/dL, mean ± SD           161 ± 111        144 ± 80        0.38
Received DVT prophylaxis within 24 h of admission, n (%)           74 (74)          20 (49)         0.004
Received GI prophylaxis within 24 h of admission, n (%)            68 (68)          18 (44)         0.012
Received RBC transfusion within 24 h of admission, n (%)           8 (8)            7 (17)          0.11
Received renal replacement therapy, n (%)                          13 (13)          3 (7)           0.33
Received a spontaneous breathing trial within 24 h of admission, n (%)*  4 (11)     4 (33)          0.07

Abbreviations: CICU, cardiac intensive care unit; DVT, deep vein thrombosis; GI, gastrointestinal; ICU, intensive care unit; MICU, medical intensive care unit; RBC, red blood cell; SD, standard deviation. *Missing data cause percentages to differ from what would be expected if data were available for all patients.

Logistic regression analysis examining the receipt of all 3 primary processes-of-care while controlling for age revealed that the odds of being admitted to either ICU were not significantly different (P = 0.85). The secondary analysis regression models revealed that only the APACHE II score (odds ratio [OR] = 1.21; confidence interval [CI], 1.12-1.31) was significantly associated with higher odds of mortality. ICU type [MICU vs CICU] (OR = 1.85; CI, 0.42-8.20), age (OR = 1.01; CI, 0.97-1.06), and combined processes-of-care (OR = 0.26; CI, 0.07-1.01) were not significantly associated with odds of mortality.

A review of microbiologic sensitivities revealed a trend toward significance: the cultured microorganism(s) was more likely to be resistant to the initial antibiotics administered in the MICU than in the CICU (15% vs 5%, respectively, P = 0.09).

Mechanical Ventilation Parameters

The majority of the ventilated patients in each ICU were admitted in assist control (AC) mode. There were no significant differences in mean tidal volume (TV) (P = 0.3), mean plateau pressure (P = 0.12), mean fraction of inspired oxygen (FiO2) (P = 0.95), or mean positive end-expiratory pressure (PEEP) (P = 0.98) across the 2 units, either at the time of ICU admission or 24 hours after ICU admission. Further comparison of tidal volumes and plateau pressures over 7 days of ICU stay revealed no significant differences between the 2 ICUs (P = 0.40 and 0.57, respectively, on day 7 of ICU admission). There was a trend toward fewer patients in the MICU receiving a spontaneous breathing trial within 24 hours of ICU admission (11% vs 33%, P = 0.07) (Table 2).

Patient Outcomes

There were no significant differences in ICU mortality (MICU 19% vs CICU 10%, P = 0.18), or hospital mortality (21% vs 15%, P = 0.38) across the units (Table 3). Mean ICU and hospital length of stay (LOS) and proportion of patients discharged home with unassisted breathing were similar (Table 3).

Table 3. Patient Outcomes for the 141 Patients Admitted to the Intensive Care Units With Sepsis During the Study Period

Patient Outcomes                                      MICU (N = 100)   CICU (N = 41)   P Value
ICU mortality, n (%)                                  19 (19)          4 (10)          0.18
Hospital mortality, n (%)                             21 (21)          6 (15)          0.38
Discharged home with unassisted breathing, n (%)      33 (33)          19 (46)         0.14
ICU length of stay in days, mean ± SD                 4.78 ± 6.24      4.92 ± 6.32     0.97
Hospital length of stay in days, mean ± SD            9.68 ± 9.22      9.73 ± 9.33     0.98

Abbreviations: CICU, cardiac intensive care unit; ICU, intensive care unit; MICU, medical intensive care unit; SD, standard deviation.

DISCUSSION

Since sepsis is more commonly treated in the medical ICU and some data suggests that specialty ICUs may be better at providing desired care,18, 19 we believed that patients treated in the MICU would be more likely to receive guideline‐concordant care. The study refutes our a priori hypothesis and reveals that evidence‐based processes‐of‐care associated with improved outcomes for sepsis are similarly implemented at our institution in the primary and overflow ICU. These findings are important, as ICU bed availability is a frequent problem and many hospitals overflow patients to non‐primary ICUs.9, 20

The observed equivalence in the care delivered may be a function of the relatively high number of patients with sepsis treated in the overflow unit, giving the delivery teams enough experience to provide the desired care. An alternative explanation could be that the residents in the CICU brought with them the experience of having previously trained in the MICU. Although some of the care processes for sepsis patients are influenced by the CPOE (with embedded order sets and protocols), it is unlikely that CPOE can fully account for the similarity in care, because many processes and therapies (such as use of steroids, amount of fluid delivered in the first 24 hours, packed red blood cell [PRBC] transfusion, and spontaneous breathing trials) are not embedded within order sets.

The significant difference noted in the areas of deep vein thrombosis (DVT) and gastrointestinal (GI) prophylaxis within 24 hours of ICU admission was unexpected. These preventive therapies are included in initial order sets in the CPOE, which prompt physicians to order them as standard-of-care. With respect to DVT prophylaxis, we suspect that some of the difference might be attributable to specific contraindications to its use, which could have been more common in one of the units. There were more patients in the MICU on mechanical ventilation (although not statistically significant) and with severe sepsis (statistically significant) at the time of admission, which might have contributed to the difference noted in use of GI prophylaxis. It is also plausible that these differences might have disappeared had they been reassessed beyond 24 hours into the ICU admission. We cannot rule out the presence of unit- and physician-level differences that contributed to this. Likewise, there was an unexpected trend toward significance, wherein more patients in the CICU had spontaneous breathing trials within 24 hours of admission. This might also be explained by the higher number of patients with severe sepsis in the MICU (preempting any weaning attempts). These caveats aside, it is reassuring that, at our institution, admitting septic patients to the first available ICU bed does not adversely affect important processes-of-care.

One might ask whether this study's data should reassure other sites that board septic patients in non-primary ICUs. Irrespective of the number of patients studied or the degree of statistical significance of the associations, an observational study design cannot prove that boarding septic patients in non-primary ICUs is either safe or unsafe. However, we hope that readers reflect on, and take inventory of, systems issues that may differ between units, with an eye towards eliminating variation such that all units managing septic patients are primed to deliver guideline-concordant care. Other hospitals that use CPOE with sepsis order sets, have protocols for sepsis care, and train nursing staff and respiratory therapists to meet high standards might be pleased to see that the patients in our study received comparable, high-quality care across the 2 units. While our data suggest that boarding patients in overflow units may be safe, these findings would need to be replicated at other sites using prospective designs to prove safety.

Length of emergency room stay prior to admission is associated with higher mortality rates.21-23 At many hospitals, critical care beds are a scarce resource, such that most hospitals have a policy for the triage of patients to critical care beds.24, 25 Lundberg and colleagues demonstrated that patients who developed septic shock on the medical wards experienced delays in the receipt of intravenous fluids and inotropic agents, and in transfer to a critical care setting.26 Thus, rather than waiting in the ED or on the medical service for an MICU bed to become available, it may be wisest to admit a critically ill septic patient to the first available ICU bed, even to an overflow ICU. In a recent study by Sidlow and Aggarwal, 1104 patients discharged from the coronary care unit (CCU) with a non-cardiac primary diagnosis were compared to patients admitted to the MICU in the same hospital.27 The study found no differences between ICUs in patient mortality, 30-day readmission rate, hospital LOS, ICU LOS, or the safety outcomes of ventilator-associated pneumonia and catheter-associated bloodstream infections. However, their study did not examine the processes-of-care delivered in the primary ICU versus the overflow unit, and did not validate the primary diagnoses of patients admitted to the ICU.

Several limitations of this study should be considered. First, this study was conducted at a single center. Second, we used a retrospective study design; however, a prospective study randomizing patients to 1 of the 2 units would likely never be possible. Third, the relatively small number of patients limited the power of the study to detect mortality differences between the units. However, this was a pilot study focused on processes-of-care as opposed to clinical outcomes. Fourth, it is possible that we did not capture every patient with sepsis with our keyword search. Our use of a previously validated screening process should have limited the number of missed cases.15, 16 Fifth, although the 2 ICUs have exclusive nursing staff and attending physicians, the housestaff and respiratory therapists do rotate between the 2 ICUs and place orders in the common CPOE. The rotating housestaff may represent a source of confounding, but the large number (>30) of housestaff spread evenly over the study period minimizes the potential for any single trainee to be responsible for a large proportion of observed practice. Sixth, ICU attendings are the physicians of record and could influence the results. Because no attending physician was on service for more than 4 weeks during the study period, and patients were equally spread over this same time, concerns about clustering and the biases this may have created should be minimal, but cannot be ruled out. Seventh, some interventions and processes, such as antibiotic administration and measurement of lactate, may have been initiated in the ED, thereby decreasing the potential for differences between the groups. Additionally, we cannot rule out the possibility that factors other than bed availability drove the admission process (we found that the relative proportion of patients admitted to the overflow ICU during hours of ambulance diversion was similar to the proportion admitted during non-diversion hours).
It is possible that some selection bias by the hospitalist assigning patients to specific ICUs influenced triage decisions, although all triaging doctors go through the same process of training in active bed management.11 While more patients admitted to the MICU had severe sepsis, there were no differences between groups in APACHE II or SOFA scores. However, we cannot rule out that there were other residual confounders. Finally, in a small number of cases (4/41, 10%), the CICU team consulted the MICU attending for assistance. This input had the potential to reduce disparities in care between the units.

Overflowing patients to non‐primary ICUs occurs in many hospitals. Our study demonstrates that sepsis treatment for overflow patients may be similar to that received in the primary ICU. While a large multicentered and randomized trial could determine whether significant management and outcome differences exist between primary and overflow ICUs, feasibility concerns make it unlikely that such a study will ever be conducted.

Acknowledgements

Disclosure: Dr Wright is a Miller‐Coulson Family Scholar and this work is supported by the Miller‐Coulson family through the Johns Hopkins Center for Innovative Medicine. Dr Sevransky was supported with a grant from National Institute of General Medical Sciences, NIGMS K‐23‐1399. All other authors disclose no relevant or financial conflicts of interest.

References
  1. Angus DC, Linde-Zwirble WT, Lidicker J, Clermont G, Carcillo J, Pinsky MR. Epidemiology of severe sepsis in the United States: analysis of incidence, outcome, and associated costs of care. Crit Care Med. 2001;29(7):1303-1310.
  2. Kumar G, Kumar N, Taneja A, et al; for the Milwaukee Initiative in Critical Care Outcomes Research (MICCOR) Group of Investigators. Nationwide trends of severe sepsis in the twenty first century (2000-2007). Chest. 2011;140(5):1223-1231.
  3. Dombrovskiy VY, Martin AA, Sunderram J, Paz HL. Rapid increase in hospitalization and mortality rates for severe sepsis in the United States: a trend analysis from 1993 to 2003. Crit Care Med. 2007;35(5):1244-1250.
  4. Dellinger RP, Levy MM, Carlet JM, et al. Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2008. Crit Care Med. 2008;36(1):296-327.
  5. Jones AE, Shapiro NI, Trzeciak S, et al. Lactate clearance vs central venous oxygen saturation as goals of early sepsis therapy: a randomized clinical trial. JAMA. 2010;303(8):739-746.
  6. Rivers E, Nguyen B, Havstad S, et al. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345(19):1368-1377.
  7. Nguyen HB, Corbett SW, Steele R, et al. Implementation of a bundle of quality indicators for the early management of severe sepsis and septic shock is associated with decreased mortality. Crit Care Med. 2007;35(4):1105-1112.
  8. Kumar A, Zarychanski R, Light B, et al. Early combination antibiotic therapy yields improved survival compared with monotherapy in septic shock: a propensity-matched analysis. Crit Care Med. 2010;38(9):1773-1785.
  9. Johannes MS. A new dimension of the PACU: the dilemma of the ICU overflow patient. J Post Anesth Nurs. 1994;9(5):297-300.
  10. Groeger JS, Strosberg MA, Halpern NA, et al. Descriptive analysis of critical care units in the United States. Crit Care Med. 1992;20(6):846-863.
  11. Howell E, Bessman E, Kravet S, Kolodner K, Marshall R, Wright S. Active bed management by hospitalists and emergency department throughput. Ann Intern Med. 2008;149(11):804-811.
  12. Bone RC, Balk RA, Cerra FB, et al. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee, American College of Chest Physicians/Society of Critical Care Medicine. Chest. 1992;101(6):1644-1655.
  13. Ferrer R, Artigas A, Levy MM, et al. Improvement in process of care and outcome after a multicenter severe sepsis educational program in Spain. JAMA. 2008;299(19):2294-2303.
  14. Castellanos-Ortega A, Suberviola B, Garcia-Astudillo LA, et al. Impact of the surviving sepsis campaign protocols on hospital length of stay and mortality in septic shock patients: results of a three-year follow-up quasi-experimental study. Crit Care Med. 2010;38(4):1036-1043.
  15. Needham DM, Dennison CR, Dowdy DW, et al. Study protocol: the improving care of acute lung injury patients (ICAP) study. Crit Care. 2006;10(1):R9.
  16. Ali N, Gutteridge D, Shahul S, Checkley W, Sevransky J, Martin G. Critical illness outcome study: an observational study of protocols and mortality in intensive care units. Open Access J Clin Trials. 2011;3(September):55-65.
  17. Vincent JL, Moreno R, Takala J, et al. The SOFA (sepsis-related organ failure assessment) score to describe organ dysfunction/failure: on behalf of the Working Group on Sepsis-Related Problems of the European Society of Intensive Care Medicine. Intensive Care Med. 1996;22(7):707-710.
  18. Pronovost PJ, Angus DC, Dorman T, Robinson KA, Dremsizov TT, Young TL. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA. 2002;288(17):2151-2162.
  19. Fuchs RJ, Berenholtz SM, Dorman T. Do intensivists in ICU improve outcome? Best Pract Res Clin Anaesthesiol. 2005;19(1):125-135.
  20. Lindsay M. Is the postanesthesia care unit becoming an intensive care unit? J Perianesth Nurs. 1999;14(2):73-77.
  21. Chalfin DB, Trzeciak S, Likourezos A, Baumann BM, Dellinger RP; for the DELAY-ED Study Group. Impact of delayed transfer of critically ill patients from the emergency department to the intensive care unit. Crit Care Med. 2007;35(6):1477-1483.
  22. Renaud B, Santin A, Coma E, et al. Association between timing of intensive care unit admission and outcomes for emergency department patients with community-acquired pneumonia. Crit Care Med. 2009;37(11):2867-2874.
  23. Shen YC, Hsia RY. Association between ambulance diversion and survival among patients with acute myocardial infarction. JAMA. 2011;305(23):2440-2447.
  24. Teres D. Civilian triage in the intensive care unit: the ritual of the last bed. Crit Care Med. 1993;21(4):598-606.
  25. Sinuff T, Kahnamoui K, Cook DJ, Luce JM, Levy MM; for the Values Ethics and Rationing in Critical Care Task Force. Rationing critical care beds: a systematic review. Crit Care Med. 2004;32(7):1588-1597.
  26. Lundberg JS, Perl TM, Wiblin T, et al. Septic shock: an analysis of outcomes for patients with onset on hospital wards versus intensive care units. Crit Care Med. 1998;26(6):1020-1024.
  27. Sidlow R, Aggarwal V. "The MICU is full": one hospital's experience with an overflow triage policy. Jt Comm J Qual Patient Saf. 2011;37(10):456-460.
Journal of Hospital Medicine - 7(8):600-605

Sepsis is a major cause of death in hospitalized patients.1-3 It is recommended that patients with sepsis be treated with early appropriate antibiotics, as well as early goal-directed therapy including fluid and vasopressor support according to evidence-based guidelines.4-6 Following such evidence-based protocols and process-of-care interventions has been shown to be associated with better patient outcomes, including decreased mortality.7, 8

Most patients with severe sepsis are cared for in intensive care units (ICUs). At times, there are no beds available in the primary ICU and patients presenting to the hospital with sepsis are cared for in other units. Patients admitted to a non‐preferred clinical inpatient setting are sometimes referred to as overflow.9 ICUs can differ significantly in staffing patterns, equipment, and training.10 It is not known if overflow sepsis patients receive similar care when admitted to non‐primary ICUs.

At our hospital, we have an active bed management system led by the hospitalist division.11 This system includes protocols to place sepsis patients in the overflow ICU if the primary ICU is full. We hypothesized that process‐of‐care interventions would be more strictly adhered to when sepsis patients were in the primary ICU rather than in the overflow unit at our institution.

METHODS

Design

This was a retrospective cohort study of all patients with sepsis admitted to either the primary medical intensive care unit (MICU) or the overflow cardiac intensive care unit (CICU) at our hospital between July 2009 and February 2010. We reviewed the admission database starting with the month of February 2010 and proceeded backwards, month by month, until we reached the target number of patients.

Setting

The study was conducted at our 320-bed, university-affiliated academic medical center in Baltimore, MD. The MICU and the CICU are closed units that are located adjacent to each other and have 12 beds each. They are staffed by separate pools of attending physicians trained in pulmonary/critical care medicine and cardiovascular diseases, respectively, and no attending physician attends in both units. During the study period, there were 10 unique MICU and 14 unique CICU attending physicians; while most attending physicians covered the unit for 14 days, none was on service for more than 2 of the 2-week blocks (28 days). Each unit is additionally staffed by fellows of the respective specialties, and by internal medicine residents and interns belonging to the same residency program (who rotate through both ICUs). Residents and fellows are generally assigned to these ICUs for 4 continuous weeks. The assignment of specific attendings, fellows, and residents to either ICU is performed by individual division administrators on a rotational basis according to residency, fellowship, and faculty service requirements. The teams in each ICU function independently of each other. For patients requiring the assistance of the other specialty (pulmonary medicine or cardiology), guidance is conferred via an official consultation. Orders on patients in both ICUs are written by the residents using the same computerized order entry system (CPOE) under the supervision of their attending physicians. The nursing staff is exclusive to each ICU. The respiratory therapists spend time in both units. The nursing and respiratory therapy staff in both ICUs are similarly trained and certified, and the units have the same patient-to-nursing ratios.

Subjects

All patients admitted with a possible diagnosis of sepsis to either the MICU or CICU were identified by querying the hospital electronic triage database called etriage. This Web‐based application is used to admit patients to all the Medicine services at our hospital. We employed a wide case‐finding net using keywords that included pneumonia, sepsis, hypotension, high lactate, hypoxia, UTI (urinary tract infection)/urosepsis, SIRS (systemic inflammatory response syndrome), hypothermia, and respiratory failure. A total of 197 adult patients were identified. The charts and the electronic medical record (EMR) of these patients were then reviewed to determine the presence of a sepsis diagnosis using standard consensus criteria.12 Severe sepsis was defined by sepsis associated with organ dysfunction, hypoperfusion, or hypotension using criteria described by Bone et al.12
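The case-finding step described above can be illustrated with a simple keyword screen. This is a minimal sketch only: the actual query ran against the etriage database, and the records and field names below are hypothetical.

```python
# Keywords used for the wide case-finding net described above.
KEYWORDS = ("pneumonia", "sepsis", "hypotension", "high lactate", "hypoxia",
            "uti", "urosepsis", "sirs", "hypothermia", "respiratory failure")

# Hypothetical triage records for illustration only.
records = [
    {"id": 1, "triage_note": "Elderly male with urosepsis and hypotension"},
    {"id": 2, "triage_note": "Chest pain, rule out acute coronary syndrome"},
]

# Flag any record whose free-text note mentions at least one keyword;
# flagged charts then go on to manual review against consensus criteria.
candidates = [r for r in records
              if any(k in r["triage_note"].lower() for k in KEYWORDS)]
```

A screen this broad deliberately over-captures (here, record 1 is flagged and record 2 is not); the subsequent chart review is what establishes the sepsis diagnosis.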

Fifty-six patients did not meet the criteria for sepsis and were excluded from the analysis, leaving a total of 141 patients in the study. As this was a pilot study, we did not have any preliminary data regarding adherence to sepsis guidelines in overflow ICUs with which to calculate an appropriate sample size. However, in 2 recent studies of dedicated ICUs (Ferrer et al13 and Castellanos-Ortega et al14), the average adherence to a single measure such as checking of lactate level was 27% pre-intervention and 62% post-intervention. With an alpha level of 0.05 and 80% power, one would need 31 patients in each unit to detect such a difference for this intervention. Although these data do not necessarily apply to overflow ICUs or to combinations of processes, we set a goal of having at least 31 patients in each ICU.
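The sample-size target can be reproduced with the standard two-sided, two-proportion formula. The paper does not state which method was used; this is one common formula that, under the stated assumptions (27% vs 62%, alpha 0.05, power 0.80), recovers the figure of 31 per unit.

```python
from math import ceil, sqrt

p1, p2 = 0.27, 0.62    # pre- and post-intervention adherence (lactate checked)
z_alpha = 1.959964     # critical z for two-sided alpha = 0.05
z_beta = 0.841621      # critical z for power = 0.80 (beta = 0.20)
pbar = (p1 + p2) / 2   # pooled proportion under the null hypothesis

# Classic pooled-variance sample-size formula for comparing two proportions.
n_per_group = ceil(
    (z_alpha * sqrt(2 * pbar * (1 - pbar)) +
     z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    / (p2 - p1) ** 2
)
# n_per_group -> 31 patients in each ICU
```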

The study was approved by the Johns Hopkins Institutional Review Board. The need for informed consent was waived given the retrospective nature of the study.

Data Extraction Process and Procedures

The clinical data were extracted from the EMR and patient charts using a standardized data extraction instrument, modified from a case report form (CRF) used and validated in previous studies.15, 16 The following procedures were used for the data extraction:

  • The data extractors included 4 physicians and 1 research assistant, all trained and tested by a single expert in data review and extraction.

  • Lab data were transcribed directly from the EMR. Acute physiology and chronic health evaluation (APACHE II) scores were calculated using the website http://www.sfar.org/subores2/apache22.html (Société Française d'Anesthésie et de Réanimation). Sepsis-related organ failure assessment (SOFA) scores were calculated using the usual criteria.17

  • Delivery of specific treatments and interventions, including their timing, was extracted from the EMR.

  • The attending physicians' notes were used as the final source to assign diagnoses (such as presence of acute lung injury and site of infection) and to record interventions.

Data Analysis

Analyses focused primarily on assessing whether patients were treated differently in the MICU and CICU. The primary exposure variables were the process-of-care measures. We specifically used measurement of central venous saturation, checking of lactate level, and administration of antibiotics within 60 minutes in patients with severe sepsis as our primary process-of-care measures.13 Continuous variables were reported as mean ± standard deviation, and Student's t tests were used to compare the 2 groups. Categorical data were expressed as frequency distributions, and chi-square tests were used to identify differences between the 2 groups. All tests were 2-tailed, with statistical significance set at 0.05. Statistical analysis was performed using SPSS version 19.0 (IBM, Armonk, NY).
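The categorical comparisons can be reproduced directly from the published counts. For example, the DVT-prophylaxis contrast in Table 2 (74/100 in the MICU vs 20/41 in the CICU) recovers the reported P value. This sketch assumes an uncorrected chi-square test (the original analysis used SPSS, and whether a continuity correction was applied is not stated).

```python
from scipy import stats

# 2x2 contingency table from Table 2 counts:
# rows = MICU, CICU; columns = received DVT prophylaxis yes, no.
table = [[74, 100 - 74],
         [20, 41 - 20]]

# Uncorrected chi-square test of independence (1 degree of freedom).
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
# round(p, 3) -> 0.004, matching the P value reported in Table 2
```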

To overcome data constraints, we created a dichotomous variable for each of the 3 primary processes‐of‐care (indicating receipt of process or not) and then combined them into 1 dichotomous variable indicating whether or not the patients with severe sepsis received all 3 primary processes‐of‐care. The combined variable was the key independent variable in the model.
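Constructing that combined indicator is straightforward; a minimal sketch with hypothetical per-patient flags:

```python
# Hypothetical per-patient indicators for the 3 primary processes-of-care.
scvo2_checked   = [True, False, True]
lactate_checked = [True, True,  True]
abx_within_60m  = [True, False, False]

# A patient counts as receiving all 3 primary processes only if every flag
# is True; this single dichotomous variable feeds the regression model.
received_all_three = [a and b and c for a, b, c in
                      zip(scvo2_checked, lactate_checked, abx_within_60m)]
# -> [True, False, False]
```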


One might ask whether this study's data should reassure other sites who are boarding septic patients in non‐primary ICUs. Irrespective of the number of patients studied or the degree of statistical significance of the associations, an observational study design cannot prove that boarding septic patients in non‐primary ICUs is either safe or unsafe. However, we hope that readers reflect on, and take inventory of, systems issues that may be different between unitswith an eye towards eliminating variation such that all units managing septic patients are primed to deliver guideline‐concordant care. Other hospitals that use CPOE with sepsis order sets, have protocols for sepsis care, and who train nursing and respiratory therapists to meet high standards might be pleased to see that the patients in our study received comparable, high‐quality care across the 2 units. While our data suggests that boarding patients in overflow units may be safe, these findings would need to be replicated at other sites using prospective designs to prove safety.

Length of emergency room stay prior to admission is associated with higher mortality rates.2123 At many hospitals, critical care beds are a scarce resource such that most hospitals have a policy for the triage of patients to critical care beds.24, 25 Lundberg and colleagues' study demonstrated that patients who developed septic shock on the medical wards experienced delays in receipt of intravenous fluids, inotropic agents and transfer to a critical care setting.26 Thus, rather than waiting in the ED or on the medical service for an MICU bed to become available, it may be most wise to admit a critically sick septic patient to the first available ICU bed, even to an overflow ICU. In a recent study by Sidlow and Aggarwal, 1104 patients discharged from the coronary care unit (CCU) with a non‐cardiac primary diagnosis were compared to patients admitted to the MICU in the same hospital.27 The study found no differences in patient mortality, 30‐day readmission rate, hospital LOS, ICU LOS, and safety outcomes of ventilator‐associated pneumonia and catheter‐associated bloodstream infections between ICUs. However, their study did not examine processes‐of‐care delivered between the primary ICU and the overflow unit, and did not validate the primary diagnoses of patients admitted to the ICU.

Several limitations of this study should be considered. First, this study was conducted at a single center. Second, we used a retrospective study design; however, a prospective study randomizing patients to 1 of the 2 units would likely never be possible. Third, the relatively small number of patients limited the power of the study to detect mortality differences between the units. However, this was a pilot study focused on processes of care as opposed to clinical outcomes. Fourth, it is possible that we did not capture every single patient with sepsis with our keyword search. Our use of a previously validated screening process should have limited the number of missed cases.15, 16 Fifth, although the 2 ICUs have exclusive nursing staff and attending physicians, the housestaff and respiratory therapists do rotate between the 2 ICUs and place orders in the common CPOE. The rotating housestaff may certainly represent a source for confounding, but the large numbers (>30) of evenly spread housestaff over the study period minimizes the potential for any trainee to be responsible for a large proportion of observed practice. Sixth, ICU attendings are the physicians of record and could influence the results. Because no attending physician was on service for more than 4 weeks during the study period, and patients were equally spread over this same time, concerns about clustering and biases this may have created should be minimal but cannot be ruled out. Seventh, some interventions and processes, such as antibiotic administration and measurement of lactate, may have been initiated in the ED, thereby decreasing the potential for differences between the groups. Additionally, we cannot rule out the possibility that factors other than bed availability drove the admission process (we found that the relative proportion of patients admitted to overflow ICU during hours of ambulance diversion was similar to the overflow ICU admissions during non‐ambulance diversion hours). 
It is possible that some selection bias by the hospitalist assigning patients to specific ICUs influenced their triage decisionsalthough all triaging doctors go through the same process of training in active bed management.11 While more patients admitted to the MICU had severe sepsis, there were no differences between groups in APACHE II or SOFA scores. However, we cannot rule out that there were other residual confounders. Finally, in a small number of cases (4/41, 10%), the CICU team consulted the MICU attending for assistance. This input had the potential to reduce disparities in care between the units.

Overflowing patients to non‐primary ICUs occurs in many hospitals. Our study demonstrates that sepsis treatment for overflow patients may be similar to that received in the primary ICU. While a large multicentered and randomized trial could determine whether significant management and outcome differences exist between primary and overflow ICUs, feasibility concerns make it unlikely that such a study will ever be conducted.

Acknowledgements

Disclosure: Dr Wright is a Miller‐Coulson Family Scholar and this work is supported by the Miller‐Coulson family through the Johns Hopkins Center for Innovative Medicine. Dr Sevransky was supported with a grant from National Institute of General Medical Sciences, NIGMS K‐23‐1399. All other authors disclose no relevant or financial conflicts of interest.

Sepsis is a major cause of death in hospitalized patients.1–3 It is recommended that patients with sepsis be treated with early appropriate antibiotics, as well as early goal‐directed therapy including fluid and vasopressor support, according to evidence‐based guidelines.4–6 Following such evidence‐based protocols and process‐of‐care interventions has been shown to be associated with better patient outcomes, including decreased mortality.7, 8

Most patients with severe sepsis are cared for in intensive care units (ICUs). At times, there are no beds available in the primary ICU and patients presenting to the hospital with sepsis are cared for in other units. Patients admitted to a non‐preferred clinical inpatient setting are sometimes referred to as overflow.9 ICUs can differ significantly in staffing patterns, equipment, and training.10 It is not known if overflow sepsis patients receive similar care when admitted to non‐primary ICUs.

At our hospital, we have an active bed management system led by the hospitalist division.11 This system includes protocols to place sepsis patients in the overflow ICU if the primary ICU is full. We hypothesized that process‐of‐care interventions would be more strictly adhered to when sepsis patients were in the primary ICU rather than in the overflow unit at our institution.

METHODS

Design

This was a retrospective cohort study of all patients with sepsis admitted to either the primary medical intensive care unit (MICU) or the overflow cardiac intensive care unit (CICU) at our hospital between July 2009 and February 2010. We reviewed the admission database starting with the month of February 2010 and proceeded backwards, month by month, until we reached the target number of patients.

Setting

The study was conducted at our 320‐bed, university‐affiliated academic medical center in Baltimore, MD. The MICU and the CICU are closed units that are located adjacent to each other and have 12 beds each. They are staffed by separate pools of attending physicians trained in pulmonary/critical care medicine and cardiovascular diseases, respectively, and no attending physician attends in both units. During the study period, there were 10 unique MICU and 14 unique CICU attending physicians; while most attending physicians covered the unit for 14 days, none of the physicians were on service more than 2 of the 2‐week blocks (28 days). Each unit is additionally staffed by fellows of the respective specialties, and by internal medicine residents and interns belonging to the same residency program (who rotate through both ICUs). Residents and fellows are generally assigned to these ICUs for 4 continuous weeks. The assignment of specific attendings, fellows, and residents to either ICU is performed by individual division administrators on a rotational basis according to residency, fellowship, and faculty service requirements. The teams in each ICU function independently of each other. When patients require the assistance of the other specialty (pulmonary medicine or cardiology), guidance is conferred via an official consultation. Orders on patients in both ICUs are written by the residents using the same computerized order entry system (CPOE) under the supervision of their attending physicians. The nursing staff is exclusive to each ICU. The respiratory therapists spend time in both units. The nursing and respiratory therapy staff in both ICUs are similarly trained and certified, and the units have the same patient‐to‐nurse ratios.

Subjects

All patients admitted with a possible diagnosis of sepsis to either the MICU or CICU were identified by querying the hospital electronic triage database called etriage. This Web‐based application is used to admit patients to all the Medicine services at our hospital. We employed a wide case‐finding net using keywords that included pneumonia, sepsis, hypotension, high lactate, hypoxia, UTI (urinary tract infection)/urosepsis, SIRS (systemic inflammatory response syndrome), hypothermia, and respiratory failure. A total of 197 adult patients were identified. The charts and the electronic medical record (EMR) of these patients were then reviewed to determine the presence of a sepsis diagnosis using standard consensus criteria.12 Severe sepsis was defined by sepsis associated with organ dysfunction, hypoperfusion, or hypotension using criteria described by Bone et al.12
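The keyword screen described above can be sketched as a simple case-finding filter. This is an illustrative sketch only; the record structure and field names are hypothetical, and the actual query ran against the hospital's web-based triage database, with flagged charts still requiring review against consensus criteria.

```python
# Hedged sketch of the keyword-based case-finding screen. Keywords mirror
# those listed in the text; record fields ("id", "note") are hypothetical.
KEYWORDS = [
    "pneumonia", "sepsis", "hypotension", "high lactate", "hypoxia",
    "uti", "urosepsis", "sirs", "hypothermia", "respiratory failure",
]

def matches_screen(triage_note: str) -> bool:
    """Return True if any screening keyword appears in the triage note."""
    text = triage_note.lower()
    return any(keyword in text for keyword in KEYWORDS)

# Candidates flagged here would still undergo chart review before being
# counted as sepsis cases.
records = [
    {"id": 1, "note": "Fever and hypotension, concern for urosepsis"},
    {"id": 2, "note": "Chest pain, rule out myocardial infarction"},
]
candidates = [r for r in records if matches_screen(r["note"])]
```

A wide net like this deliberately over-captures; the subsequent chart review (which excluded 56 of 197 patients) supplies the specificity.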

Fifty‐six patients did not meet the criteria for sepsis and were excluded from the analysis, leaving a total of 141 patients in the study. Because this was a pilot study, we had no preliminary data regarding adherence to sepsis guidelines in overflow ICUs with which to calculate an appropriate sample size. However, in 2 recent studies of dedicated ICUs (Ferrer et al13 and Castellanos‐Ortega et al14), average adherence to a single measure, such as checking of lactate level, was 27% pre‐intervention and 62% post‐intervention. With an alpha of 0.05 and 80% power, 31 patients per unit would be needed to detect a difference of this size. Although these data do not necessarily apply to overflow ICUs or to combinations of processes, we set a goal of having at least 31 patients in each ICU.
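The stated sample-size target can be reproduced with the standard normal-approximation formula for comparing two independent proportions (pooled variance under the null). The authors do not state which formula they used; this is one common approach that happens to reproduce the figure of 31 per group.

```python
import math
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group for a two-sided test comparing two independent
    proportions, using the pooled-variance normal approximation."""
    z_a = norm.ppf(1 - alpha / 2)   # two-tailed critical value
    z_b = norm.ppf(power)           # critical value for desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Adherence to lactate measurement: 27% pre-intervention vs 62% post-intervention
print(n_per_group(0.62, 0.27))  # 31 patients per unit
```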

The study was approved by the Johns Hopkins Institutional Review Board. The need for informed consent was waived given the retrospective nature of the study.

Data Extraction Process and Procedures

The clinical data were extracted from the EMR and patient charts using a standardized data extraction instrument, modified from a case report form (CRF) used and validated in previous studies.15, 16 The following procedures were used for the data extraction:

  • The data extractors (4 physicians and 1 research assistant) were trained and tested by a single expert in data review and extraction.

  • Lab data were transcribed directly from the EMR. Acute physiology and chronic health evaluation (APACHE II) scores were calculated using the website http://www.sfar.org/subores2/apache22.html (Société Française d'Anesthésie et de Réanimation). Sepsis‐related organ failure assessment (SOFA) scores were calculated using standard criteria.17

  • Delivery of specific treatments and interventions, including their timing, was extracted from the EMR.

  • The attending physicians' notes were used as the final source to assign diagnoses, such as presence of acute lung injury and site of infection, and to record interventions.

 
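To make the SOFA calculation above concrete, two of the six organ subscores can be sketched from the standard criteria (coagulation from platelet count, renal from serum creatinine). This is an incomplete illustration, not the full instrument: the complete score sums six organ systems, and the renal subscore can also be set by urine-output criteria, which are omitted here.

```python
def sofa_coagulation(platelets_k_per_ul: float) -> int:
    """Coagulation subscore from platelet count (x10^3/uL), per the
    standard SOFA thresholds."""
    if platelets_k_per_ul < 20:
        return 4
    if platelets_k_per_ul < 50:
        return 3
    if platelets_k_per_ul < 100:
        return 2
    if platelets_k_per_ul < 150:
        return 1
    return 0

def sofa_renal(creatinine_mg_dl: float) -> int:
    """Renal subscore from serum creatinine (mg/dL); the urine-output
    criteria that can also set this subscore are omitted in this sketch."""
    if creatinine_mg_dl >= 5.0:
        return 4
    if creatinine_mg_dl >= 3.5:
        return 3
    if creatinine_mg_dl >= 2.0:
        return 2
    if creatinine_mg_dl >= 1.2:
        return 1
    return 0
```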

Data Analysis

Analyses focused primarily on assessing whether patients were treated differently between the MICU and CICU. The primary exposure variables were the process‐of‐care measures. We specifically used measurement of central venous saturation, checking of lactate level, and administration of antibiotics within 60 minutes in patients with severe sepsis as our primary process‐of‐care measures.13 Continuous variables were reported as mean ± standard deviation, and Student's t tests were used to compare the 2 groups. Categorical data were expressed as frequency distributions, and chi‐square tests were used to identify differences between the 2 groups. All tests were 2‐tailed with statistical significance set at 0.05. Statistical analysis was performed using SPSS version 19.0 (IBM, Armonk, NY).

To overcome data constraints, we created a dichotomous variable for each of the 3 primary processes‐of‐care (indicating receipt of process or not) and then combined them into 1 dichotomous variable indicating whether or not the patients with severe sepsis received all 3 primary processes‐of‐care. The combined variable was the key independent variable in the model.
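The construction of the combined dichotomous variable can be sketched with pandas. The column names and the toy values here are illustrative, not the study data; the logic simply flags patients for whom all 3 process indicators are positive.

```python
import pandas as pd

# Hypothetical per-patient indicators for the 3 primary processes-of-care
# (1 = received, 0 = not received); column names are illustrative.
df = pd.DataFrame({
    "scvo2_checked":    [1, 1, 0, 1],
    "lactate_checked":  [1, 1, 1, 1],
    "abx_within_60min": [1, 0, 1, 1],
})

# Dichotomous combined variable: 1 only if all 3 processes were received.
process_cols = ["scvo2_checked", "lactate_checked", "abx_within_60min"]
df["all_three"] = df[process_cols].all(axis=1).astype(int)
```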

We performed logistic regression analysis on patients with severe sepsis. The equation Logit[P(ICU Type = CICU)] = α + β1(Combined) + β2(Age) describes the framework of the model, with ICU type as the dependent variable and the combined variable (receipt of all primary measures) as the independent variable, controlling for age. Logistic regression was performed using JMP (SAS Institute, Inc, Cary, NC).

We additionally performed a secondary analysis to explore possible predictors of mortality using a logistic regression model, with the event of death as the dependent variable, and age, APACHE II scores, combined processes‐of‐care, and ICU type included as independent variables.

RESULTS

There were 100 patients admitted to the MICU and 41 patients admitted to the CICU during the study period (Table 1). The majority of the patients were admitted to the ICUs directly from the emergency department (ED) (n = 129), with a small number of patients who were transferred from the Medicine floors (n = 12).

Baseline Patient Characteristics for the 141 Patients Admitted to Intensive Care Units With Sepsis During the Study Period

Characteristic | MICU (N = 100) | CICU (N = 41) | P Value
Age in years, mean ± SD | 67 ± 14.8 | 72 ± 15.1 | 0.11
Female, n (%) | 57 (57) | 27 (66) | 0.33
Patients with chronic organ insufficiency, n (%) | 59 (59) | 22 (54) | 0.56
Patients with severe sepsis, n (%) | 88 (88) | 21 (51) | <0.001
Patients needing mechanical ventilation, n (%) | 43 (43) | 14 (34) | 0.33
APACHE II score, mean ± SD | 25.53 ± 9.11 | 24.37 ± 9.53 | 0.50
SOFA score on day 1, mean ± SD | 7.09 ± 3.55 | 6.71 ± 4.57 | 0.60
Patients with acute lung injury on presentation, n (%) | 8 (8) | 2 (5) | 0.50

Abbreviations: APACHE II, acute physiology and chronic health evaluation; CICU, cardiac intensive care unit; MICU, medical intensive care unit; SOFA, sepsis‐related organ failure assessment.

There were no significant differences between the 2 study groups in terms of age, sex, primary site of infection, mean APACHE II score, SOFA scores on day 1, chronic organ insufficiency, immune suppression, or need for mechanical ventilation (Table 1). The most common site of infection was lung. There were significantly more patients with severe sepsis in the MICU (88% vs 51%, P <0.001).
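The one significant baseline difference, severe sepsis at admission (88/100 in the MICU vs 21/41 in the CICU), can be reproduced with a chi-square test on the 2x2 table. This is an illustrative recomputation from the published counts, not the original SPSS analysis.

```python
from scipy.stats import chi2_contingency

# 2x2 table: rows = MICU, CICU; columns = severe sepsis yes, no
table = [[88, 100 - 88],
         [21, 41 - 21]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p:.2g}")  # P < 0.001, as reported
```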

Sepsis Process‐of‐Care Measures

There were no significant differences in the proportion of severe sepsis patients who had central venous saturation checked (MICU: 46% vs CICU: 41%, P = 0.67), lactate level checked (95% vs 100%, P = 0.37), or received antibiotics within 60 minutes of presentation (75% vs 69%, P = 0.59) (Table 2). Multiple other processes and treatments were delivered similarly, as shown in Table 2.

ICU Treatments and Processes‐of‐Care for Patients With Sepsis During the Study Period

Primary Process‐of‐Care Measures (Severe Sepsis Patients) | MICU (N = 88) | CICU (N = 21) | P Value
Patients with central venous oxygen saturation checked, n (%)* | 31 (46) | 7 (41) | 0.67
Patients with lactate level checked, n (%)* | 58 (95) | 16 (100) | 0.37
Received antibiotics within 60 min, n (%)* | 46 (75) | 11 (69) | 0.59
Patients who had all 3 above processes and treatments, n (%) | 19 (22) | 4 (19) | 0.79
Received vasopressor, n (%) | 25 (28) | 8 (38) | 0.55

ICU Treatments and Processes (All Sepsis Patients) | (N = 100) | (N = 41) |
Fluid balance 24 h after admission in liters, mean ± SD | 1.96 ± 2.42 | 1.42 ± 2.63 | 0.24
Patients who received stress dose steroids, n (%) | 11 (11) | 4 (10) | 0.83
Patients who received drotrecogin alfa, n (%) | 0 (0) | 0 (0) |
Morning glucose 24 h after admission in mg/dL, mean ± SD | 161 ± 111 | 144 ± 80 | 0.38
Received DVT prophylaxis within 24 h of admission, n (%) | 74 (74) | 20 (49) | 0.004
Received GI prophylaxis within 24 h of admission, n (%) | 68 (68) | 18 (44) | 0.012
Received RBC transfusion within 24 h of admission, n (%) | 8 (8) | 7 (17) | 0.11
Received renal replacement therapy, n (%) | 13 (13) | 3 (7) | 0.33
Received a spontaneous breathing trial within 24 h of admission, n (%)* | 4 (11) | 4 (33) | 0.07

Abbreviations: CICU, cardiac intensive care unit; DVT, deep vein thrombosis; GI, gastrointestinal; ICU, intensive care unit; MICU, medical intensive care unit; RBC, red blood cell; SD, standard deviation. *Missing data cause percentages to differ from what might be expected were data available for all patients.

Logistic regression analysis examining receipt of all 3 primary processes‐of‐care while controlling for age revealed no significant association with ICU type (P = 0.85). The secondary analysis regression models revealed that only the APACHE II score (odds ratio [OR] = 1.21; confidence interval [CI], 1.12–1.31) was significantly associated with higher odds of mortality. ICU type (MICU vs CICU) (OR = 1.85; CI, 0.42–8.20), age (OR = 1.01; CI, 0.97–1.06), and combined processes‐of‐care (OR = 0.26; CI, 0.07–1.01) were not significantly associated with odds of mortality.

A review of microbiologic sensitivities revealed a trend toward the cultured microorganism(s) being more likely to be resistant to the initial antibiotics administered in the MICU than in the CICU (15% vs 5%, respectively, P = 0.09).

Mechanical Ventilation Parameters

The majority of ventilated patients were admitted to each ICU in assist control (AC) mode. There were no significant differences across the 2 units in mean tidal volume (TV) (P = 0.3), mean plateau pressure (P = 0.12), mean fraction of inspired oxygen (FiO2) (P = 0.95), or mean positive end‐expiratory pressure (PEEP) (P = 0.98), either at the time of ICU admission or 24 hours later. Comparison of tidal volumes and plateau pressures over 7 days of ICU stay also revealed no significant differences between the 2 ICUs (P = 0.40 and 0.57, respectively, on day 7 of ICU admission). There was a trend toward fewer patients in the MICU receiving a spontaneous breathing trial within 24 hours of ICU admission (11% vs 33%, P = 0.07) (Table 2).

Patient Outcomes

There were no significant differences in ICU mortality (MICU 19% vs CICU 10%, P = 0.18), or hospital mortality (21% vs 15%, P = 0.38) across the units (Table 3). Mean ICU and hospital length of stay (LOS) and proportion of patients discharged home with unassisted breathing were similar (Table 3).

Patient Outcomes for the 141 Patients Admitted to the Intensive Care Units With Sepsis During the Study Period

Patient Outcomes | MICU (N = 100) | CICU (N = 41) | P Value
ICU mortality, n (%) | 19 (19) | 4 (10) | 0.18
Hospital mortality, n (%) | 21 (21) | 6 (15) | 0.38
Discharged home with unassisted breathing, n (%) | 33 (33) | 19 (46) | 0.14
ICU length of stay in days, mean ± SD | 4.78 ± 6.24 | 4.92 ± 6.32 | 0.97
Hospital length of stay in days, mean ± SD | 9.68 ± 9.22 | 9.73 ± 9.33 | 0.98

Abbreviations: CICU, cardiac intensive care unit; ICU, intensive care unit; MICU, medical intensive care unit; SD, standard deviation.

DISCUSSION

Because sepsis is more commonly treated in the medical ICU, and some data suggest that specialty ICUs may be better at providing desired care,18, 19 we expected that patients treated in the MICU would be more likely to receive guideline‐concordant care. The study refutes our a priori hypothesis and reveals that evidence‐based processes‐of‐care associated with improved outcomes for sepsis are implemented similarly at our institution in the primary and overflow ICUs. These findings are important, as ICU bed availability is a frequent problem and many hospitals overflow patients to non‐primary ICUs.9, 20

The observed equivalence in care may be a function of the relatively high number of patients with sepsis treated in the overflow unit, giving those teams enough experience to provide the desired care. An alternative explanation is that the residents in the CICU brought experience from having previously trained in the MICU. Although some of the care processes for sepsis patients are influenced by the CPOE (with embedded order sets and protocols), it is unlikely that CPOE fully accounts for the similarity in care, because many processes and therapies (such as use of steroids, amount of fluid delivered in the first 24 hours, packed red blood cell [PRBC] transfusion, and spontaneous breathing trials) are not embedded within order sets.

The significant differences noted in deep vein thrombosis (DVT) and gastrointestinal (GI) prophylaxis within 24 hours of ICU admission were unexpected. These preventive therapies are included in initial order sets in the CPOE, which prompt physicians to order them as standard‐of‐care. With respect to DVT prophylaxis, we suspect that some of the difference might be attributable to specific contraindications to its use, which could have been more common in one of the units. There were more patients in the MICU on mechanical ventilation (although not statistically significant) and with severe sepsis (statistically significant) at the time of admission, which might have contributed to the difference noted in use of GI prophylaxis. It is also plausible that these differences would have disappeared if reassessed beyond 24 hours into the ICU admission. We cannot rule out unit‐ and physician‐level differences that contributed to this. Likewise, there was an unexpected trend wherein more patients in the CICU had spontaneous breathing trials within 24 hours of admission. This might be explained by the higher number of patients with severe sepsis in the MICU (preempting any weaning attempts). These caveats aside, it is reassuring that, at our institution, admitting septic patients to the first available ICU bed does not adversely affect important processes‐of‐care.

One might ask whether this study's data should reassure other sites that board septic patients in non‐primary ICUs. Irrespective of the number of patients studied or the degree of statistical significance of the associations, an observational study design cannot prove that boarding septic patients in non‐primary ICUs is either safe or unsafe. However, we hope that readers reflect on, and take inventory of, systems issues that may differ between units, with an eye towards eliminating variation so that all units managing septic patients are primed to deliver guideline‐concordant care. Other hospitals that use CPOE with sepsis order sets, have protocols for sepsis care, and train nursing and respiratory therapy staff to meet high standards might be pleased to see that the patients in our study received comparable, high‐quality care across the 2 units. While our data suggest that boarding patients in overflow units may be safe, these findings would need to be replicated at other sites using prospective designs to prove safety.

Length of emergency room stay prior to admission is associated with higher mortality rates.21–23 At many hospitals, critical care beds are a scarce resource, and most hospitals have a policy for the triage of patients to critical care beds.24, 25 Lundberg and colleagues demonstrated that patients who developed septic shock on the medical wards experienced delays in receipt of intravenous fluids, inotropic agents, and transfer to a critical care setting.26 Thus, rather than waiting in the ED or on the medical service for an MICU bed to become available, it may be wise to admit a critically ill septic patient to the first available ICU bed, even an overflow ICU. In a recent study by Sidlow and Aggarwal, 1104 patients discharged from the coronary care unit (CCU) with a non‐cardiac primary diagnosis were compared to patients admitted to the MICU in the same hospital.27 The study found no differences between ICUs in patient mortality, 30‐day readmission rate, hospital LOS, ICU LOS, or the safety outcomes of ventilator‐associated pneumonia and catheter‐associated bloodstream infection. However, their study did not examine the processes‐of‐care delivered in the primary ICU versus the overflow unit, and did not validate the primary diagnoses of patients admitted to the ICU.

Several limitations of this study should be considered. First, it was conducted at a single center. Second, we used a retrospective design; however, a prospective study randomizing patients to 1 of the 2 units would likely never be possible. Third, the relatively small number of patients limited the power of the study to detect mortality differences between the units; however, this was a pilot study focused on processes‐of‐care rather than clinical outcomes. Fourth, it is possible that our keyword search did not capture every patient with sepsis, although our use of a previously validated screening process should have limited the number of missed cases.15, 16 Fifth, although the 2 ICUs have exclusive nursing staff and attending physicians, the housestaff and respiratory therapists rotate between the 2 ICUs and place orders in the common CPOE. The rotating housestaff may represent a source of confounding, but the large number (>30) of housestaff spread evenly over the study period minimizes the potential for any trainee to be responsible for a large proportion of observed practice. Sixth, ICU attendings are the physicians of record and could influence the results. Because no attending physician was on service for more than 4 weeks during the study period, and patients were spread evenly over this same time, concerns about clustering and the biases it may have created should be minimal but cannot be ruled out. Seventh, some interventions and processes, such as antibiotic administration and measurement of lactate, may have been initiated in the ED, thereby decreasing the potential for differences between the groups. Additionally, we cannot rule out the possibility that factors other than bed availability drove the admission process (we found that the relative proportion of patients admitted to the overflow ICU during hours of ambulance diversion was similar to that during non‐diversion hours).
It is possible that some selection bias by the hospitalists assigning patients to specific ICUs influenced triage decisions, although all triaging doctors go through the same training in active bed management.11 While more patients admitted to the MICU had severe sepsis, there were no differences between groups in APACHE II or SOFA scores. However, we cannot rule out other residual confounders. Finally, in a small number of cases (4/41, 10%), the CICU team consulted the MICU attending for assistance. This input had the potential to reduce disparities in care between the units.

Overflow of patients to non‐primary ICUs occurs in many hospitals. Our study demonstrates that sepsis treatment for overflow patients may be similar to that received in the primary ICU. While a large multicenter randomized trial could determine whether significant management and outcome differences exist between primary and overflow ICUs, feasibility concerns make it unlikely that such a study will ever be conducted.

Acknowledgements

Disclosure: Dr Wright is a Miller‐Coulson Family Scholar, and this work is supported by the Miller‐Coulson family through the Johns Hopkins Center for Innovative Medicine. Dr Sevransky was supported by a grant from the National Institute of General Medical Sciences, NIGMS K‐23‐1399. All other authors disclose no relevant financial conflicts of interest.

References
  1. Angus DC, Linde-Zwirble WT, Lidicker J, Clermont G, Carcillo J, Pinsky MR. Epidemiology of severe sepsis in the United States: analysis of incidence, outcome, and associated costs of care. Crit Care Med. 2001;29(7):1303-1310.
  2. Kumar G, Kumar N, Taneja A, et al; for the Milwaukee Initiative in Critical Care Outcomes Research (MICCOR) Group of Investigators. Nationwide trends of severe sepsis in the twenty first century (2000-2007). Chest. 2011;140(5):1223-1231.
  3. Dombrovskiy VY, Martin AA, Sunderram J, Paz HL. Rapid increase in hospitalization and mortality rates for severe sepsis in the United States: a trend analysis from 1993 to 2003. Crit Care Med. 2007;35(5):1244-1250.
  4. Dellinger RP, Levy MM, Carlet JM, et al. Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2008. Crit Care Med. 2008;36(1):296-327.
  5. Jones AE, Shapiro NI, Trzeciak S, et al. Lactate clearance vs central venous oxygen saturation as goals of early sepsis therapy: a randomized clinical trial. JAMA. 2010;303(8):739-746.
  6. Rivers E, Nguyen B, Havstad S, et al. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345(19):1368-1377.
  7. Nguyen HB, Corbett SW, Steele R, et al. Implementation of a bundle of quality indicators for the early management of severe sepsis and septic shock is associated with decreased mortality. Crit Care Med. 2007;35(4):1105-1112.
  8. Kumar A, Zarychanski R, Light B, et al. Early combination antibiotic therapy yields improved survival compared with monotherapy in septic shock: a propensity-matched analysis. Crit Care Med. 2010;38(9):1773-1785.
  9. Johannes MS. A new dimension of the PACU: the dilemma of the ICU overflow patient. J Post Anesth Nurs. 1994;9(5):297-300.
  10. Groeger JS, Strosberg MA, Halpern NA, et al. Descriptive analysis of critical care units in the United States. Crit Care Med. 1992;20(6):846-863.
  11. Howell E, Bessman E, Kravet S, Kolodner K, Marshall R, Wright S. Active bed management by hospitalists and emergency department throughput. Ann Intern Med. 2008;149(11):804-811.
  12. Bone RC, Balk RA, Cerra FB, et al. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee, American College of Chest Physicians/Society of Critical Care Medicine. Chest. 1992;101(6):1644-1655.
  13. Ferrer R, Artigas A, Levy MM, et al. Improvement in process of care and outcome after a multicenter severe sepsis educational program in Spain. JAMA. 2008;299(19):2294-2303.
  14. Castellanos-Ortega A, Suberviola B, Garcia-Astudillo LA, et al. Impact of the surviving sepsis campaign protocols on hospital length of stay and mortality in septic shock patients: results of a three-year follow-up quasi-experimental study. Crit Care Med. 2010;38(4):1036-1043.
  15. Needham DM, Dennison CR, Dowdy DW, et al. Study protocol: the improving care of acute lung injury patients (ICAP) study. Crit Care. 2006;10(1):R9.
  16. Ali N, Gutteridge D, Shahul S, Checkley W, Sevransky J, Martin G. Critical illness outcome study: an observational study of protocols and mortality in intensive care units. Open Access J Clin Trials. 2011;3(September):55-65.
  17. Vincent JL, Moreno R, Takala J, et al. The SOFA (sepsis-related organ failure assessment) score to describe organ dysfunction/failure: on behalf of the Working Group on Sepsis-Related Problems of the European Society of Intensive Care Medicine. Intensive Care Med. 1996;22(7):707-710.
  18. Pronovost PJ, Angus DC, Dorman T, Robinson KA, Dremsizov TT, Young TL. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA. 2002;288(17):2151-2162.
  19. Fuchs RJ, Berenholtz SM, Dorman T. Do intensivists in ICU improve outcome? Best Pract Res Clin Anaesthesiol. 2005;19(1):125-135.
  20. Lindsay M. Is the postanesthesia care unit becoming an intensive care unit? J Perianesth Nurs. 1999;14(2):73-77.
  21. Chalfin DB, Trzeciak S, Likourezos A, Baumann BM, Dellinger RP; for the DELAY-ED Study Group. Impact of delayed transfer of critically ill patients from the emergency department to the intensive care unit. Crit Care Med. 2007;35(6):1477-1483.
  22. Renaud B, Santin A, Coma E, et al. Association between timing of intensive care unit admission and outcomes for emergency department patients with community-acquired pneumonia. Crit Care Med. 2009;37(11):2867-2874.
  23. Shen YC, Hsia RY. Association between ambulance diversion and survival among patients with acute myocardial infarction. JAMA. 2011;305(23):2440-2447.
  24. Teres D. Civilian triage in the intensive care unit: the ritual of the last bed. Crit Care Med. 1993;21(4):598-606.
  25. Sinuff T, Kahnamoui K, Cook DJ, Luce JM, Levy MM; for the Values Ethics and Rationing in Critical Care Task Force. Rationing critical care beds: a systematic review. Crit Care Med. 2004;32(7):1588-1597.
  26. Lundberg JS, Perl TM, Wiblin T, et al. Septic shock: an analysis of outcomes for patients with onset on hospital wards versus intensive care units. Crit Care Med. 1998;26(6):1020-1024.
  27. Sidlow R, Aggarwal V. "The MICU is full": one hospital's experience with an overflow triage policy. Jt Comm J Qual Patient Saf. 2011;37(10):456-460.
Issue
Journal of Hospital Medicine - 7(8)
Page Number
600-605
Display Headline
Does sepsis treatment differ between primary and overflow intensive care units?
Article Source

Copyright © 2012 Society of Hospital Medicine

Correspondence Location
Johns Hopkins Bayview Medical Center, 5200 Eastern Ave, Baltimore, MD 21224

Learning Needs of Physician Assistants

Article Type
Changed
Mon, 05/22/2017 - 19:38
Display Headline
Learning needs of physician assistants working in hospital medicine

Physician assistants (PAs) have rapidly become an integral component of the United States health care delivery system, including in the field of Hospital Medicine, the fastest growing medical field in the United States.1, 2 Since the field's inception in 1997, the number of hospitalist providers in North America has increased 30-fold.3 Correspondingly, the number of PAs practicing in the field of hospital medicine has also grown considerably in recent years. According to the American Academy of Physician Assistants (AAPA) census reports, Hospital Medicine first appeared as a specialty choice in the 2006 census (response rate, 33% of all individuals eligible to practice as PAs), when it was selected as the primary specialty by 239 PAs (1.1% of respondents). In the 2008 report (response rate, 35%), that number grew to 421 (1.7%) PAs.2

PA training programs emphasize primary care and offer limited exposure to inpatient medicine. After PA students complete their first 12 months of didactic coursework in the basic sciences, they typically spend the next year on clinical rotations, largely rooted in outpatient care.2, 4 Upon graduation, PAs do not have to pursue postgraduate training before beginning to practice in their preferred specialty areas. Thus, a majority of PAs entering specialty areas are trained on the job, and hospital medicine is no exception.

In recent years, despite the increase in the number of PAs in Hospital Medicine, some medical centers have chosen to phase out the use of midlevel hospitalist providers (including PAs), purposefully deciding not to hire new midlevel providers.5 The rationale for this strategy is a perceived steep learning curve that requires considerable time to overcome before these providers feel comfortable across the breadth of clinical cases. Until they become experienced and confident in caring for a highly complex, heterogeneous patient population, they cannot operate autonomously and are not a cost-effective alternative to physicians. The complexities associated with practicing in this field were clarified in 2006, when the Society of Hospital Medicine identified 51 core competencies in hospital medicine.3, 6 Some hospitalist programs are willing to provide their PAs with on-the-job training, but many programs lack the educational expertise or the resources to do so. Structured and focused postgraduate training in hospital medicine seems like a reasonable way to prepare newly graduated PAs who are interested in pursuing hospitalist careers, but such opportunities are very limited.7

To date, there is no available information about the learning needs of PAs working in hospital medicine settings. We hypothesized that understanding the learning needs of PA hospitalists would inform the development of more effective and efficient training programs. We studied PAs with experience working in hospital medicine to (1) identify self‐perceived gaps in their skills and knowledge upon starting their hospitalist careers and (2) understand their views about optimal training for careers in hospital medicine.

METHODS

Study Design

We conducted a cross‐sectional survey of a convenience sample of self‐identified PAs working in adult Hospital Medicine. The survey was distributed using an electronic survey program.

Participants

The subjects for the survey were identified through the Facebook group PAs in Hospital Medicine, which had 133 members as of July 2010. This source was selected because it was the most comprehensive list of self‐identified hospitalist PAs. Additionally, the group allowed us to send individualized invitations to complete the survey along with subsequent reminder messages to nonresponders. Subjects were eligible to participate if they were PAs with experience working in hospital medicine settings taking care of adult internal medicine inpatients.

Survey Instrument

The survey instrument was developed based on the Core Competencies in Hospital Medicine with the goal of identifying PA hospitalists' knowledge and skill gaps that were present when they started their hospitalist career.

In one section, respondents were asked about content areas among the Core Competencies in Hospital Medicine for which they believed additional training, before starting their work as hospitalists, would have enhanced their effectiveness in practicing hospital medicine. Response options ranged from Strongly Agree to Strongly Disagree. Because some content areas seemed more relevant to physicians, our study team (including a hospitalist physician, a senior hospitalist PA, two curriculum development experts, one medical education research expert, and an experienced hospital medicine research assistant) selected, through rigorous discussion, the topics felt to be particularly germane to PA hospitalists. The relevance of this content to PA hospitalists was confirmed through pilot testing of the instrument. Another series of questions asked the PAs about their views on formal postgraduate training programs. The subjects were also queried about the frequency with which they performed various procedures (using the following scale: Never; Rarely [1-2/year]; Regularly [1-2/month]; Often [1-2/week]) and whether they felt it was necessary for PAs to have the procedural skills listed as part of the Core Competencies in Hospital Medicine (using the following scale: Not necessary; Preferable; Essential). Finally, the survey included a question about the PAs' preferred learning methods, asking how helpful they found various approaches (using the following scale: Not at all; Little; Some; A lot; Tremendously). Demographic information was also collected. The instrument was pilot tested for clarity with the 9 PA hospitalists affiliated with our hospitalist service and was iteratively revised based on their feedback.

Data Collection and Analysis

Between September and December 2010, the survey invitations were sent as Facebook messages to the 133 members of the Facebook group PAs in Hospital Medicine. Sixteen members could not be contacted because their account setup did not allow us to send messages, and 14 were excluded because they were non‐PA members. In order to maximize participation, up to 4 reminder messages were sent to the 103 targeted PAs. The survey results were analyzed using Stata 11. Descriptive statistics were used to characterize the responses.
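As a rough illustration of the descriptive analysis (the study itself used Stata 11; the function names and the sample data below are hypothetical, not the study dataset), the response rate and mean (SD) summaries reported in this article can be computed as follows:

```python
# Sketch of the descriptive statistics used to characterize survey responses.
# Illustrative only: the study used Stata 11, and these values are made up.
from statistics import mean, stdev

def response_rate(responded: int, targeted: int) -> int:
    """Percentage of targeted subjects who completed the survey."""
    return round(100 * responded / targeted)

# 69 of the 103 reachable Facebook-group PAs responded.
print(response_rate(69, 103))  # -> 67

# Summarize a numeric item as mean (SD), e.g. years worked as a hospitalist.
years = [1, 2, 3, 4, 5, 6, 9]  # hypothetical responses
print(f"{mean(years):.1f} ({stdev(years):.1f})")  # -> 4.3 (2.7)
```

The same mean (SD) summary applies to the Likert-scored items in the Results, treating each response option as its numeric scale value.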

This study protocol was approved by the institution's review board.

RESULTS

Sixty-nine PAs responded (response rate, 67%). Table 1 provides demographic characteristics of the respondents. The majority of respondents were 26 to 35 years old, and they had worked as hospitalists for a mean of 4.3 years.

Characteristics of the 62 Physician Assistant Respondents Who Elected to Share Demographic and Personal Information
Characteristic*: Value
  • Abbreviations: ICU, intensive care unit; PA, physician assistant; SD, standard deviation.
  • Seven PAs did not provide any personal or demographic information.
  • Because of missing data, numbers may not correspond to the exact percentages.

Age, years, n (%)
  <26: 1 (2)
  26-30: 16 (29)
  31-35: 14 (25)
  36-40: 10 (18)
  41-45: 5 (9)
  >45: 10 (18)
Women, n (%): 35 (63)
Year of graduation from PA school, mode (SD): 2002 (7)
No. of years working/worked as hospitalist, mean (SD): 4.3 (3.4)
Completed any postgraduate training program, n (%): 0 (0)
Hospitalist was the first PA job, n (%): 30 (49)
Salary, US$, n (%)
  50,001-70,000: 1 (2)
  70,001-90,000: 32 (57)
  >90,000: 23 (41)
Location of hospital, n (%)
  Urban: 35 (57)
  Suburban: 21 (34)
  Rural: 5 (8)
Hospital characteristics, n (%)
  Academic medical center: 25 (41)
  Community teaching hospital: 20 (33)
  Community nonteaching hospital: 16 (26)
Responsibilities in addition to taking care of inpatients on medicine floor, n (%)
  Care for patients in ICU: 22 (35)
  Perform inpatient consultations: 31 (50)
  See outpatients: 11 (18)

Clinical Conditions

Table 2 shows the respondents' experience with 19 core competency clinical conditions before beginning their careers as hospitalist PAs. They reported the most experience managing diabetes and urinary tract infections, and the least experience managing hospital-acquired pneumonia and sepsis syndrome.

Physician Assistant Experiences with 19 Core Clinical Conditions Before Starting Career in Hospital Medicine
Clinical Condition: Mean (SD)*
  • Abbreviation: SD, standard deviation.
  • Likert scale: 1, no experience, I knew nothing about this condition; 2, no experience, I had heard/read about this condition; 3, I had experience caring for 1 patient (simulated or real) with this condition; 4, I had experience caring for 2-5 patients with this condition; 5, I had experience caring for many (>5) patients with this condition.

Urinary tract infection: 4.5 (0.8)
Diabetes mellitus: 4.5 (0.8)
Asthma: 4.4 (0.9)
Community-acquired pneumonia: 4.3 (0.9)
Chronic obstructive pulmonary disease: 4.3 (1.0)
Cellulitis: 4.2 (0.9)
Congestive heart failure: 4.1 (1.0)
Cardiac arrhythmia: 3.9 (1.1)
Delirium and dementia: 3.8 (1.1)
Acute coronary syndrome: 3.8 (1.2)
Acute renal failure: 3.8 (1.1)
Gastrointestinal bleed: 3.7 (1.1)
Venous thromboembolism: 3.7 (1.2)
Pain management: 3.7 (1.2)
Perioperative medicine: 3.6 (1.4)
Stroke: 3.5 (1.2)
Alcohol and drug withdrawal: 3.4 (1.1)
Sepsis syndrome: 3.3 (1.1)
Hospital-acquired pneumonia: 3.2 (1.1)

Procedures

Most PA hospitalists (67%) regularly interpret electrocardiograms and chest X-rays (more than 1-2/week). However, nearly all PA hospitalists never or rarely (less than 1-2/year) perform any invasive procedures, including arthrocentesis (98%), lumbar puncture (100%), paracentesis (91%), thoracentesis (98%), central line placement (91%), peripherally inserted central catheter placement (91%), and peripheral intravenous insertion (91%). Despite performing these procedures infrequently, more than 50% of respondents indicated that it is either preferable or essential to be able to perform them.

Content Knowledge

The PA hospitalists indicated which content areas, had they learned the material before starting their hospitalist careers, might have allowed them to be more successful (Table 3). The top 4 topics that PA hospitalists believed would have helped them most in caring for inpatients were palliative care (85% agreed or strongly agreed), nutrition for hospitalized patients (84%), performing consultations in the hospital (64%), and prevention of health care-associated infections (62%).

Content Areas that 62 Respondent PAs Believed Would Have Enhanced Their Effectiveness in Practicing Hospital Medicine Had They Had Additional Training Before Starting Their Work as Hospitalists
Health Care System Topics: PAs Who Agreed or Strongly Agreed, n (%)
Palliative care: 47 (85)
Nutrition for hospitalized patients: 46 (84)
Performing consultations in hospital: 35 (64)
Prevention of health care-associated infections: 34 (62)
Diagnostic decision-making processes: 32 (58)
Patient handoff and transitions of care: 31 (56)
Evidence-based medicine: 28 (51)
Communication with patients and families: 27 (49)
Drug safety and drug interactions: 27 (49)
Team approach and multidisciplinary care: 26 (48)
Patient safety and quality improvement processes: 25 (45)
Care of elderly patients: 24 (44)
Medical ethics: 22 (40)
Patient education: 20 (36)
Care of uninsured or underinsured patients: 18 (33)

Professional Growth as Hospitalist Providers

PAs judged working with physician preceptors (mean ± SD, 4.5 ± 0.6) and discussing patients with consultants (mean ± SD, 4.3 ± 0.8) to be most helpful for their professional growth, whereas receiving feedback/audits about their performance (mean ± SD, 3.5 ± 1), attending conferences/lectures (mean ± SD, 3.6 ± 0.7), and reading journals/textbooks (mean ± SD, 3.6 ± 0.8) were rated as less useful. Respondents believed that the mean number of months required for new hospitalist PAs to become fully competent team members was 11 months (SD, 8.6). Forty-three percent of respondents shared the perspective that some clinical experience in an inpatient setting was an essential prerequisite for entry into a hospitalist position. Although more than half (58%) felt that completion of a postgraduate training program in hospital medicine was not necessary as a prerequisite, almost all (91%) explained that they would have been interested in such a program even if it meant receiving a lower stipend than a hospitalist PA salary during the first year on the job (Table 4).
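The Likert-based mean and SD summaries in this section follow ordinary descriptive statistics; a minimal sketch with made-up ratings on the 1 to 5 helpfulness scale (illustrative values only, not the study's responses):

```python
# Hypothetical helpfulness ratings ("Not at all" = 1 ... "Tremendously" = 5);
# illustrative only, not the study data.
from statistics import mean, stdev

ratings = [5, 4, 5, 4, 4, 5, 4, 5]
print(f"{mean(ratings):.1f} +/- {stdev(ratings):.1f}")  # -> 4.5 +/- 0.5
```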

Self‐Reported Interest from 55 Respondents in Postgraduate Hospitalist Training Depending on Varying Levels of Incentives and Disincentives
Interest in Training: n (%)
Interested and willing to pay tuition: 1 (2)
Interested even if there was no stipend, as long as I didn't have to pay any additional tuition: 3 (5)
Interested ONLY if a stipend of at least 25% of a hospitalist PA salary was offered: 4 (7)
Interested ONLY if a stipend of at least 50% of a hospitalist PA salary was offered: 21 (38)
Interested ONLY if a stipend of at least 75% of a hospitalist PA salary was offered: 21 (38)
Interested ONLY if 100% of a hospitalist PA salary was offered: 4 (7)
Not interested under any circumstances: 1 (2)

DISCUSSION

Our survey addresses a wide range of topics related to PA hospitalists' learning needs, including their experience with the Core Competencies in Hospital Medicine and their views on the benefits of PA training following graduation. Although self-efficacy was not assessed, our study revealed that PAs who choose hospitalist careers have limited prior clinical experience treating many medical conditions that are managed in inpatient settings, such as sepsis syndrome. This inexperience with commonly seen clinical conditions, such as sepsis, wherein following guidelines can both reduce costs and improve outcomes, is problematic. More experience and training with such conditions would almost certainly reduce variability, improve skills, and augment confidence. The observed variation in experience caring for conditions that often prompt hospital admission among PAs starting their hospitalist careers emphasizes the need to be learner-centered when training PAs, so as to provide tailored guidance and oversight.

Only a few other empirical research articles have focused on PA hospitalists. One article described a postgraduate training program for PAs in hospital medicine that was launched in 2008. The curriculum was developed based on the Core Competencies in Hospital Medicine, and the authors explained that after 12 months of training, their first graduate functioned at the level of a PA with 4 years of experience.7 Several articles describe experiences using midlevel providers (including PAs) in general surgery, primary care medicine, cardiology, emergency medicine, critical care, pediatrics, and hospital medicine settings.5, 8-20 Many of these articles reported favorable results, showing that using midlevel providers was either superior or just as effective, in terms of cost and quality measures, as physician-only models. Many of these papers alluded to the ways in which PAs have enabled graduate medical education training programs to comply with residents' duty-hour restrictions. A recent analysis comparing outcomes of inpatient care provided by a hospitalist-PA model versus a traditional resident-based model revealed a slightly longer length of stay on the PA team but similar charges, readmission rates, and mortality.19 Yet another paper revealed that patients admitted to a residents' service, compared with a nonteaching hospitalist service that uses PAs and nurse practitioners, were different, having higher comorbidity burdens and higher acuity diagnoses.20 The authors suggested that this variance might be explained by differences in the training, abilities, and goals of the groups. No research article has sought to capture the perspectives of practicing hospitalist PAs.

Our study revealed that although half of respondents became hospitalists immediately after graduating from PA school, a majority agreed that additional clinical training in inpatient settings would have been welcomed and helpful. This study's results reveal that although there is a fair amount of perceived interest in postgraduate training programs in hospital medicine, there are very few training opportunities for PAs in hospital medicine.7, 21 The American Academy of Physician Assistants, the Society of Hospital Medicine, and the American Academy of Nurse Practitioners cosponsor Adult Hospital Medicine Boot Camp for PAs and nurse practitioners annually to facilitate knowledge acquisition, but this course is truly an orientation rather than a comprehensive training program.22 Our findings suggest that more rigorous and thorough training in hospital medicine would be valued and appreciated by PA hospitalists.

Several limitations of this study should be considered. First, our survey respondents may not represent the entire spectrum of practicing PA hospitalists. However, the demographic data of 421 PAs who indicated their specialty as hospital medicine in the 2008 National Physician Assistants Census Report were not dissimilar from our informants; 65% were women, and their mean number of years in hospital medicine was 3.9 years.2 Second, our study sample was small. It was difficult to identify a national sample of hospitalist PAs, and we had to resort to a creative use of social media to find a national sample. Third, the study relied exclusively on self‐report, and since we asked about their perceived learning needs when they started working as hospitalists, recall bias cannot be excluded. However, the questions addressing attitudes and beliefs can only be ascertained from the informants themselves. That said, the input from hospitalist physicians about training needs for the PAs who they are supervising would have strengthened the reliability of the data, but this was not possible given the sampling strategy that we elected to use. Finally, our survey instrument was developed based on the Core Competencies in Hospital Medicine, which is a blueprint to develop standardized curricula for teaching hospital medicine in medical school, postgraduate training programs (ie, residency, fellowship), and continuing medical education programs. It is not clear whether the same competencies should be expected of PA hospitalists who may have different job descriptions from physician hospitalists.

In conclusion, we present the first national data on self‐perceived learning needs of PAs working in hospital medicine settings. This study collates the perceptions of PAs working in hospital medicine and highlights the fact that training in PA school does not adequately prepare them to care for hospitalized patients. Hospitalist groups may use this study's findings to coach and instruct newly hired or inexperienced hospitalist PAs, particularly until postgraduate training opportunities become more prevalent. PA schools may consider the results of this study for modifying their curricula in hopes of emphasizing the clinical content that may be most relevant for a proportion of their graduates.

Acknowledgements

The authors would like to thank Drs. David Kern and Belinda Chen at Johns Hopkins Bayview Medical Center for their assistance in developing the survey instrument.

Financial support: This study was supported by the Linda Brandt Research Award program of the Association of Postgraduate PA Programs. Dr. Wright is a Miller‐Coulson Family Scholar and was supported through the Johns Hopkins Center for Innovative Medicine.

Disclosures: Dr. Torok and Ms. Lackner received a Linda Brandt Research Award from the Association of Postgraduate PA Programs for support of this study. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine.

References
  1. United States Department of Labor, Bureau of Labor Statistics. Available at: http://www.bls.gov. Accessed February 16, 2011.
  2. American Academy of Physician Assistants. Available at: http://www.aapa.org. Accessed April 20, 2011.
  3. Society of Hospital Medicine. Available at: http://www.hospitalmedicine.org. Accessed January 24, 2011.
  4. Accreditation Review Commission on Education for the Physician Assistants Accreditation Standards. Available at: http://www.arc‐pa.org/acc_standards. Accessed February 16, 2011.
  5. Parekh VI, Roy CL. Non‐physician providers in hospital medicine: not so fast. J Hosp Med. 2010;5(2):103-106.
  6. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1:48-56.
  7. Will KK, Budavari AL, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5:94-98.
  8. Resnick AS, Todd BA, Mullen JL, Morris JB. How do surgical residents and non‐physician practitioners play together in the sandbox? Curr Surg. 2006;63:155-164.
  9. Victorino GP, Organ CH. Physician assistant influence on surgery residents. Arch Surg. 2003;138:971-976.
  10. Buch KE, Genovese MY, Conigliaro JL, et al. Non‐physician practitioners' overall enhancement to a surgical resident's experience. J Surg Educ. 2008;65:50-53.
  11. Roblin DW, Howard DH, Becker ER, Kathleen Adams E, Roberts MH. Use of midlevel practitioners to achieve labor cost savings in the primary care practice of an MCO. Health Serv Res. 2004;39:607-626.
  12. Grzybicki DM, Sullivan PJ, Oppy JM, Bethke AM, Raab SS. The economic benefit for family/general medicine practices employing physician assistants. Am J Manag Care. 2002;8:613-620.
  13. Kaissi A, Kralewski J, Dowd B. Financial and organizational factors affecting the employment of nurse practitioners and physician assistants in medical group practices. J Ambul Care Manage. 2003;26:209-216.
  14. Nishimura RA, Linderbaum JA, Naessens JM, Spurrier B, Koch MB, Gaines KA. A nonresident cardiovascular inpatient service improves residents' experiences in an academic medical center: a new model to meet the challenges of the new millennium. Acad Med. 2004;79:426-431.
  15. Kleinpell RM, Ely EW, Grabenkort R. Nurse practitioners and physician assistants in the intensive care unit: an evidence‐based review. Crit Care Med. 2008;36:2888-2897.
  16. Carter AJ, Chochinov AH. A systematic review of the impact of nurse practitioners on cost, quality of care, satisfaction and wait times in the emergency department. CJEM. 2007;9:286-295.
  17. Mathur M, Rampersad A, Howard K, Goldman GM. Physician assistants as physician extenders in the pediatric intensive care unit setting—a 5‐year experience. Pediatr Crit Care Med. 2005;6:14-19.
  18. Abrass CK, Ballweg R, Gilshannon M, Coombs JB. A process for reducing workload and enhancing residents' education at an academic medical center. Acad Med. 2001;76:798-805.
  19. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist‐physician assistant model vs a traditional resident‐based model. J Hosp Med. 2011;6:112-130.
  20. O'Connor AB, Lang VJ, Lurie SJ, et al. The effect of nonteaching services on the distribution of inpatient cases for internal medicine residents. Acad Med. 2009;84:220-225.
  21. Association of Postgraduate PA Programs. Available at: http://appap.org/Home/tabid/38/Default.aspx. Accessed February 16, 2011.
  22. Adult Hospital Medicine Boot Camp for PAs and NPs. Available at: http://www.aapa.org/component/content/article/23—general‐/673‐adult‐hospital‐medicine‐boot‐camp‐for‐pas‐and‐nps. Accessed February 16, 2011.
Journal of Hospital Medicine. 7(3):190-194.

Physician assistants (PAs) have rapidly become an integral component of the United States health care delivery system, including in hospital medicine, the fastest growing medical field in the country.1, 2 Since the field's inception in 1997, the number of hospitalist providers in North America has increased 30‐fold.3 In parallel, the number of PAs practicing in hospital medicine has grown substantially in recent years. According to the American Academy of Physician Assistants (AAPA) census reports, hospital medicine first appeared as a specialty choice in the 2006 census (response rate, 33% of all individuals eligible to practice as PAs), when it was selected as the primary specialty by 239 PAs (1.1% of respondents). In the 2008 report (response rate, 35%), that number grew to 421 PAs (1.7%).2

PA training programs emphasize primary care and offer limited exposure to inpatient medicine. After PA students complete their first 12 months of didactic coursework in the basic sciences, they typically spend the next year on clinical rotations, largely rooted in outpatient care.2, 4 Upon graduation, PAs are not required to pursue postgraduate training before practicing in their preferred specialty areas; thus, a majority of PAs entering specialty areas are trained on the job. Hospital medicine is no exception.

In recent years, despite the increase in the number of PAs in hospital medicine, some medical centers have chosen to phase out midlevel hospitalist providers (including PAs), purposefully deciding not to hire new midlevel providers.5 The rationale for this strategy is that a steep learning curve must be overcome before these providers feel comfortable across the breadth of clinical cases; until they become experienced and confident in caring for a highly complex, heterogeneous patient population, they cannot operate autonomously and are not a cost‐effective alternative to physicians. The complexities of practicing in this field were clarified in 2006, when the Society of Hospital Medicine identified 51 core competencies in hospital medicine.3, 6 Some hospitalist programs are willing to provide their PAs with on‐the‐job training, but many lack the educational expertise or resources to do so. Structured, focused postgraduate training in hospital medicine seems a reasonable way to prepare newly graduated PAs who are interested in hospitalist careers, but such opportunities are very limited.7

To date, there is no available information about the learning needs of PAs working in hospital medicine settings. We hypothesized that understanding the learning needs of PA hospitalists would inform the development of more effective and efficient training programs. We studied PAs with experience working in hospital medicine to (1) identify self‐perceived gaps in their skills and knowledge upon starting their hospitalist careers and (2) understand their views about optimal training for careers in hospital medicine.

METHODS

Study Design

We conducted a cross‐sectional survey of a convenience sample of self‐identified PAs working in adult Hospital Medicine. The survey was distributed using an electronic survey program.

Participants

The subjects for the survey were identified through the Facebook group "PAs in Hospital Medicine," which had 133 members as of July 2010. This source was selected because it was the most comprehensive available list of self‐identified hospitalist PAs. Additionally, the group allowed us to send individualized invitations to complete the survey, along with subsequent reminder messages to nonresponders. Subjects were eligible to participate if they were PAs with experience working in hospital medicine settings caring for adult internal medicine inpatients.

Survey Instrument

The survey instrument was developed based on the Core Competencies in Hospital Medicine with the goal of identifying PA hospitalists' knowledge and skill gaps that were present when they started their hospitalist career.

In one section, respondents were asked about content areas among the Core Competencies in Hospital Medicine that they believed would have enhanced their effectiveness in practicing hospital medicine had they had additional training before starting their work as hospitalists. Response options ranged from "Strongly Agree" to "Strongly Disagree." Because some content areas seemed more relevant to physicians, our study team (a hospitalist physician, a senior hospitalist PA, two curriculum development experts, a medical education research expert, and an experienced hospital medicine research assistant) selected, through rigorous discussion, the topics felt to be particularly germane to PA hospitalists. The relevance of this content to PA hospitalists was confirmed through pilot testing of the instrument. Another series of questions asked the PAs about their views on formal postgraduate training programs. The subjects were also queried about the frequency with which they performed various procedures (scale: "Never," "Rarely" [1-2/year], "Regularly" [1-2/month], "Often" [1-2/week]) and whether they felt it was necessary for PAs to have the procedure skills listed as part of the Core Competencies in Hospital Medicine (scale: "Not necessary," "Preferable," "Essential"). Finally, the survey included a question about the PAs' preferred learning methods, asking how helpful various approaches were (scale: "Not at all," "Little," "Some," "A lot," "Tremendously"). Demographic information was also collected. The instrument was pilot‐tested for clarity with the 9 PA hospitalists affiliated with our hospitalist service and iteratively revised based on their feedback.

Data Collection and Analysis

Between September and December 2010, survey invitations were sent as Facebook messages to the 133 members of the Facebook group "PAs in Hospital Medicine." Sixteen members could not be contacted because their account settings did not allow us to send messages, and 14 were excluded because they were not PAs. To maximize participation, up to 4 reminder messages were sent to the 103 targeted PAs. Survey results were analyzed using Stata 11; descriptive statistics were used to characterize the responses.
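The descriptive summaries reported in this study (response rate, Likert means with standard deviations, and percent agreement) reduce to a few simple computations. The sketch below is an illustrative reconstruction in Python, not the authors' Stata 11 code; the function names and sample values are hypothetical.

```python
# Illustrative sketch of the descriptive statistics used in this study.
# The authors analyzed their data in Stata 11; this Python version only
# mirrors the kinds of summaries reported (hypothetical names/values).
from statistics import mean, stdev

def response_rate(responded: int, targeted: int) -> int:
    """Response rate as a whole percentage, e.g. 69 of 103 targeted PAs."""
    return round(100 * responded / targeted)

def summarize_likert(ratings: list[int]) -> tuple[float, float]:
    """Mean and sample SD of 1-5 Likert ratings, as in Tables 2 and 4."""
    return round(mean(ratings), 1), round(stdev(ratings), 1)

def agree_pct(responses: list[str]) -> int:
    """Percent who chose 'Agree' or 'Strongly Agree' (Table 3 style)."""
    n = sum(r in ("Agree", "Strongly Agree") for r in responses)
    return round(100 * n / len(responses))

if __name__ == "__main__":
    print(response_rate(69, 103))                 # the reported 67% response rate
    print(summarize_likert([4, 5, 4, 3, 5]))      # mean (SD) for a sample item
    print(agree_pct(["Agree", "Disagree", "Strongly Agree", "Neutral"]))
```

With 69 respondents of 103 targeted, `response_rate(69, 103)` reproduces the 67% figure reported under Results.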

This study protocol was approved by the institutional review board.

RESULTS

Sixty‐nine PAs responded (response rate, 67%). Table 1 provides demographic characteristics of the respondents. The majority of respondents were 26-35 years old and had worked as hospitalists for a mean of 4.3 years.

Characteristics of the 62 Physician Assistant Respondents Who Elected to Share Demographic and Personal Information

Abbreviations: ICU, intensive care unit; PA, physician assistant; SD, standard deviation. Seven PAs did not provide any personal or demographic information. Because of missing data, numbers may not correspond to the exact percentages.

Age, years, n (%)
  <26: 1 (2)
  26-30: 16 (29)
  31-35: 14 (25)
  36-40: 10 (18)
  41-45: 5 (9)
  >45: 10 (18)
Women, n (%): 35 (63)
Year of graduation from PA school, mode (SD): 2002 (7)
No. of years working/worked as hospitalist, mean (SD): 4.3 (3.4)
Completed any postgraduate training program, n (%): 0 (0)
Hospitalist was the first PA job, n (%): 30 (49)
Salary, US$, n (%)
  50,001-70,000: 1 (2)
  70,001-90,000: 32 (57)
  >90,000: 23 (41)
Location of hospital, n (%)
  Urban: 35 (57)
  Suburban: 21 (34)
  Rural: 5 (8)
Hospital characteristics, n (%)
  Academic medical center: 25 (41)
  Community teaching hospital: 20 (33)
  Community nonteaching hospital: 16 (26)
Responsibilities in addition to taking care of inpatients on medicine floor, n (%)
  Care for patients in ICU: 22 (35)
  Perform inpatient consultations: 31 (50)
  See outpatients: 11 (18)

Clinical Conditions

Table 2 shows the respondents' experience with 19 core competency clinical conditions before beginning their careers as hospitalist PAs. They reported having the most experience in managing diabetes and urinary tract infections, and the least experience in managing hospital‐acquired pneumonia and sepsis syndrome.

Physician Assistant Experiences with 19 Core Clinical Conditions Before Starting Career in Hospital Medicine

Abbreviation: SD, standard deviation. Likert scale: 1, no experience, I knew nothing about this condition; 2, no experience, I had heard/read about this condition; 3, I had experience caring for 1 patient (simulated or real) with this condition; 4, I had experience caring for 2-5 patients with this condition; 5, I had experience caring for many (>5) patients with this condition.

Clinical Condition: Mean (SD)
Urinary tract infection: 4.5 (0.8)
Diabetes mellitus: 4.5 (0.8)
Asthma: 4.4 (0.9)
Community‐acquired pneumonia: 4.3 (0.9)
Chronic obstructive pulmonary disease: 4.3 (1.0)
Cellulitis: 4.2 (0.9)
Congestive heart failure: 4.1 (1.0)
Cardiac arrhythmia: 3.9 (1.1)
Delirium and dementia: 3.8 (1.1)
Acute coronary syndrome: 3.8 (1.2)
Acute renal failure: 3.8 (1.1)
Gastrointestinal bleed: 3.7 (1.1)
Venous thromboembolism: 3.7 (1.2)
Pain management: 3.7 (1.2)
Perioperative medicine: 3.6 (1.4)
Stroke: 3.5 (1.2)
Alcohol and drug withdrawal: 3.4 (1.1)
Sepsis syndrome: 3.3 (1.1)
Hospital‐acquired pneumonia: 3.2 (1.1)

Procedures

Most PA hospitalists (67%) regularly interpret electrocardiograms and chest X‐rays (more than 1-2/week). However, nearly all never or rarely (less than 1-2/year) perform invasive procedures, including arthrocentesis (98%), lumbar puncture (100%), paracentesis (91%), thoracentesis (98%), central line placement (91%), peripherally inserted central catheter placement (91%), and peripheral intravenous insertion (91%). Despite performing these procedures infrequently, more than 50% of respondents indicated that it is either preferable or essential for hospitalist PAs to be able to perform them.

Content Knowledge

The PA hospitalists indicated which content areas, had they learned the material before starting their hospitalist careers, might have allowed them to be more successful (Table 3). The 4 topics that PA hospitalists believed would have helped them most in caring for inpatients were palliative care (85% agreed or strongly agreed), nutrition for hospitalized patients (84%), performing consultations in the hospital (64%), and prevention of health care-associated infection (61%).

Content Areas That 62 Respondent PAs Believed Would Have Enhanced Their Effectiveness in Practicing Hospital Medicine Had They Had Additional Training Before Starting Their Work as Hospitalists

Health Care System Topic: PAs Who Agreed or Strongly Agreed, n (%)
Palliative care: 47 (85)
Nutrition for hospitalized patients: 46 (84)
Performing consultations in hospital: 35 (64)
Prevention of health care-associated infections: 34 (62)
Diagnostic decision‐making processes: 32 (58)
Patient handoff and transitions of care: 31 (56)
Evidence‐based medicine: 28 (51)
Communication with patients and families: 27 (49)
Drug safety and drug interactions: 27 (49)
Team approach and multidisciplinary care: 26 (48)
Patient safety and quality improvement processes: 25 (45)
Care of elderly patients: 24 (44)
Medical ethics: 22 (40)
Patient education: 20 (36)
Care of uninsured or underinsured patients: 18 (33)

Professional Growth as Hospitalist Providers

PAs judged working with physician preceptors (mean ± SD, 4.5 ± 0.6) and discussing patients with consultants (mean ± SD, 4.3 ± 0.8) to be most helpful for their professional growth, whereas receiving feedback/audits about their performance (mean ± SD, 3.5 ± 1.0), attending conferences/lectures (mean ± SD, 3.6 ± 0.7), and reading journals/textbooks (mean ± SD, 3.6 ± 0.8) were rated as less useful. Respondents believed that the mean number of months required for new hospitalist PAs to become fully competent team members was 11 (SD, 8.6). Forty‐three percent of respondents shared the perspective that some clinical experience in an inpatient setting is an essential prerequisite for entry into a hospitalist position. Although more than half (58%) felt that completion of a postgraduate training program in hospital medicine was not a necessary prerequisite, almost all (91%) indicated that they would have been interested in such a program, even if it meant receiving a stipend lower than a hospitalist PA salary during the first year on the job (Table 4).

Self‐Reported Interest from 55 Respondents in Postgraduate Hospitalist Training Depending on Varying Levels of Incentives and Disincentives

Interest in Training: n (%)
Interested and willing to pay tuition: 1 (2)
Interested even if there was no stipend, as long as I didn't have to pay any additional tuition: 3 (5)
Interested ONLY if a stipend of at least 25% of a hospitalist PA salary was offered: 4 (7)
Interested ONLY if a stipend of at least 50% of a hospitalist PA salary was offered: 21 (38)
Interested ONLY if a stipend of at least 75% of a hospitalist PA salary was offered: 21 (38)
Interested ONLY if 100% of a hospitalist PA salary was offered: 4 (7)
Not interested under any circumstances: 1 (2)

DISCUSSION

Our survey addressed a wide range of topics related to PA hospitalists' learning needs, including their experience with the Core Competencies in Hospital Medicine and their views on the benefits of postgraduate PA training. Although self‐efficacy was not assessed, our study revealed that PAs who choose hospitalist careers have limited prior clinical experience treating many of the medical conditions managed in inpatient settings. This inexperience with commonly seen clinical conditions, such as sepsis, for which following guidelines can both reduce costs and improve outcomes, is problematic. More experience and training with such conditions would almost certainly reduce variability, improve skills, and augment confidence. These observed variations in experience caring for conditions that often prompt hospital admission emphasize the need to be learner‐centered when training PAs, so as to provide tailored guidance and oversight.

Only a few other empirical research articles have focused on PA hospitalists. One described a postgraduate training program for PAs in hospital medicine launched in 2008; the curriculum was developed based on the Core Competencies in Hospital Medicine, and the authors reported that after 12 months of training, their first graduate functioned at the level of a PA with 4 years of experience.7 Several articles describe experiences using midlevel providers (including PAs) in general surgery, primary care, cardiology, emergency medicine, critical care, pediatrics, and hospital medicine settings.5, 8-20 Many reported favorable results, showing that midlevel provider models were either superior to or just as effective as physician‐only models on cost and quality measures, and many alluded to the ways in which PAs have enabled graduate medical education programs to comply with residents' duty‐hour restrictions. A recent analysis comparing inpatient care under a hospitalist‐PA model versus a traditional resident‐based model revealed a slightly longer length of stay on the PA team but similar charges, readmission rates, and mortality.19 Another paper revealed that patients admitted to a residents' service, compared with a nonteaching hospitalist service using PAs and nurse practitioners, had higher comorbidity burdens and higher‐acuity diagnoses.20 The authors suggested that this variance might be explained by differences in the training, abilities, and goals of the groups. No research article has sought to capture the perspectives of practicing hospitalist PAs.

Our study revealed that although half of respondents became hospitalists immediately after graduating from PA school, a majority agreed that additional clinical training in inpatient settings would have been welcome and helpful. The results also show that although there is a fair amount of interest in postgraduate training programs in hospital medicine, very few such opportunities exist for PAs.7, 21 The American Academy of Physician Assistants, the Society of Hospital Medicine, and the American Academy of Nurse Practitioners annually cosponsor the Adult Hospital Medicine Boot Camp for PAs and nurse practitioners to facilitate knowledge acquisition, but this course is an orientation rather than a comprehensive training program.22 Our findings suggest that more rigorous and thorough training in hospital medicine would be valued and appreciated by PA hospitalists.

Several limitations of this study should be considered. First, our survey respondents may not represent the entire spectrum of practicing PA hospitalists; however, the demographics of the 421 PAs who indicated hospital medicine as their specialty in the 2008 National Physician Assistants Census Report were not dissimilar from those of our informants: 65% were women, and their mean number of years in hospital medicine was 3.9.2 Second, our study sample was small; it was difficult to identify a national sample of hospitalist PAs, and we resorted to a creative use of social media to find one. Third, the study relied exclusively on self‐report, and because we asked about learning needs perceived when respondents started working as hospitalists, recall bias cannot be excluded. However, questions addressing attitudes and beliefs can only be answered by the informants themselves. That said, input from the hospitalist physicians supervising these PAs about their training needs would have strengthened the reliability of the data, but this was not possible given our sampling strategy. Finally, our survey instrument was based on the Core Competencies in Hospital Medicine, a blueprint for developing standardized curricula for teaching hospital medicine in medical school, postgraduate training programs (ie, residency, fellowship), and continuing medical education. It is not clear whether the same competencies should be expected of PA hospitalists, whose job descriptions may differ from those of physician hospitalists.

In conclusion, we present the first national data on the self‐perceived learning needs of PAs working in hospital medicine settings. This study collates the perceptions of PAs working in hospital medicine and highlights that training in PA school does not adequately prepare graduates to care for hospitalized patients. Hospitalist groups may use these findings to coach and instruct newly hired or inexperienced hospitalist PAs, particularly until postgraduate training opportunities become more prevalent. PA schools may also consider these results when modifying their curricula, to emphasize the clinical content most relevant for a proportion of their graduates.

Acknowledgements

The authors would like to thank Drs. David Kern and Belinda Chen at Johns Hopkins Bayview Medical Center for their assistance in developing the survey instrument.

Financial support: This study was supported by the Linda Brandt Research Award program of the Association of Postgraduate PA Programs. Dr. Wright is a Miller‐Coulson Family Scholar and was supported through the Johns Hopkins Center for Innovative Medicine.

Disclosures: Dr. Torok and Ms. Lackner received a Linda Brandt Research Award from the Association of Postgraduate PA Programs for support of this study. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine.

Physician assistants (PA) have rapidly become an integral component in the United States health care delivery system, including in the field of Hospital Medicine, the fastest growing medical field in the United States.1, 2 Since its induction in 1997, hospitalist providers in North America have increased by 30‐fold.3 Correlating with this, the number of PAs practicing in the field of hospital medicine has also increased greatly in recent years. According to the American Academy of Physician Assistants (AAPA) census reports, Hospital Medicine first appeared as one of the specialty choices in the 2006 census (response rate, 33% of all individuals eligible to practice as PAs) when it was selected as the primary specialty by 239 PAs (1.1% of respondents). In the 2008 report (response rate, 35%), the number grew to 421 (1.7%) PAs.2

PA training programs emphasize primary care and offer limited exposure to inpatient medicine. After PA students complete their first 12 months of training in didactic coursework that teach the basic sciences, they typically spend the next year on clinical rotations, largely rooted in outpatient care.2, 4 Upon graduation, PAs do not have to pursue postgraduate training before beginning to practice in their preferred specialty areas. Thus, a majority of PAs going into specialty areas are trained on the job. This is not an exception in the field of hospital medicine.

In recent years, despite an increase in the number of PAs in Hospital Medicine, some medical centers have chosen to phase out the use of midlevel hospitalist providers (including PAs) with the purposeful decision to not hire new midlevel providers.5 The rationale for this strategy is that there is thought to be a steep learning curve that requires much time to overcome before these providers feel comfortable across the breadth of clinical cases. Before they become experienced and confident in caring for a highly complex heterogeneous patient population, they cannot operate autonomously and are not a cost‐effective alternative to physicians. The complexities associated with practicing in this field were clarified in 2006 when the Society of Hospital Medicine identified 51 core competencies in hospital medicine.3, 6 Some hospitalist programs are willing to provide their PAs with on‐the‐job training, but many programs do not have the educational expertise or the resources to make this happen. Structured and focused postgraduate training in hospital medicine seems like a reasonable solution to prepare newly graduating PAs that are interested in pursuing hospitalist careers, but such opportunities are very limited.7

To date, there is no available information about the learning needs of PAs working in hospital medicine settings. We hypothesized that understanding the learning needs of PA hospitalists would inform the development of more effective and efficient training programs. We studied PAs with experience working in hospital medicine to (1) identify self‐perceived gaps in their skills and knowledge upon starting their hospitalist careers and (2) understand their views about optimal training for careers in hospital medicine.

METHODS

Study Design

We conducted a cross‐sectional survey of a convenience sample of self‐identified PAs working in adult Hospital Medicine. The survey was distributed using an electronic survey program.

Participants

The subjects for the survey were identified through the Facebook group PAs in Hospital Medicine, which had 133 members as of July 2010. This source was selected because it was the most comprehensive list of self‐identified hospitalist PAs. Additionally, the group allowed us to send individualized invitations to complete the survey along with subsequent reminder messages to nonresponders. Subjects were eligible to participate if they were PAs with experience working in hospital medicine settings taking care of adult internal medicine inpatients.

Survey Instrument

The survey instrument was developed based on the Core Competencies in Hospital Medicine with the goal of identifying PA hospitalists' knowledge and skill gaps that were present when they started their hospitalist career.

In one section, respondents were asked about content areas among the Core Competencies in Hospital Medicine that they believed would have enhanced their effectiveness in practicing hospital medicine had they had additional training before starting their work as hospitalists. Response options ranged from Strongly Agree to Strongly Disagree. Because there were content areas that seemed more relevant to physicians, through rigorous discussions, our study team (including a hospitalist physician, senior hospitalist PA, two curriculum development experts, one medical education research expert, and an experienced hospital medicine research assistant) selected topics that were felt to be particularly germane to PA hospitalists. The relevance of this content to PA hospitalists was confirmed through pilot testing of the instrument. Another series of questions asked the PAs about their views on formal postgraduate training programs. The subjects were also queried about the frequency with which they performed various procedures (using the following scale: Never, Rarely [1‐2/year], Regularly [1‐2/month], Often [1‐2/week]) and whether they felt it was necessary for PAs to have procedure skills listed as part of the Core Competencies in Hospital Medicine (using the following scale: Not necessary, Preferable, Essential). Finally, the survey included a question about the PAs' preferred learning methods by asking the degree of helpfulness on various approaches (using the following scale: Not at all, Little, Some, A lot, Tremendously). Demographic information was also collected. The instrument was pilot‐tested for clarity on the 9 PA hospitalists who were affiliated with our hospitalist service, and the instrument was iteratively revised based on their feedback.

Data Collection and Analysis

Between September and December 2010, the survey invitations were sent as Facebook messages to the 133 members of the Facebook group PAs in Hospital Medicine. Sixteen members could not be contacted because their account setup did not allow us to send messages, and 14 were excluded because they were non‐PA members. In order to maximize participation, up to 4 reminder messages were sent to the 103 targeted PAs. The survey results were analyzed using Stata 11. Descriptive statistics were used to characterize the responses.

This study protocol was approved by the institution's review board.

RESULTS

Sixty‐nine PAs responded (response rate, 67%). Table 1 provides demographic characteristics of the respondents. The majority of respondents were 2635 years old and had worked as hospitalists for a mean of 4.3 years.

Characteristics of the 62 Physician Assistant Respondents Who Elected to Share Demographic and Personal Information
Characteristics*Value
  • Abbreviations: ICU, intensive care unit; PA, physician assistant; SD, standard deviation.

  • Seven PAs did not provide any personal or demographic information.

  • Because of missing data, numbers may not correspond to the exact percentages.

Age, years, n (%) 
<261 (2)
263016 (29)
313514 (25)
364010 (18)
41455 (9)
>4510 (18)
Women, n (%)35 (63)
Year of graduation from PA school, mode (SD)2002 (7)
No. of years working/worked as hospitalist, mean (SD)4.3 (3.4)
Completed any postgraduate training program, n (%)0 (0)
Hospitalist was the first PA job, n (%)30 (49)
Salary, US$, n (%) 
50,00170,0001 (2)
70,00190,00032 (57)
>90,00023 (41)
Location of hospital, n (%) 
Urban35 (57)
Suburban21 (34)
Rural5 (8)
Hospital characteristics, n (%) 
Academic medical center25 (41)
Community teaching hospital20 (33)
Community nonteaching hospital16 (26)
Responsibilities in addition to taking care of inpatients on medicine floor, n (%) 
Care for patients in ICU22 (35)
Perform inpatient consultations31 (50)
See outpatients11 (18)

Clinical Conditions

Table 2 shows the respondents' experience with 19 core competency clinical conditions before beginning their careers as hospitalist PAs. They reported having most experience in managing diabetes and urinary tract infections, and least experience in managing healthcare associated pneumonias and sepsis syndrome.

Physician Assistant Experiences with 19 Core Clinical Conditions Before Starting Career in Hospital Medicine
Clinical ConditionMean (SD)*
  • Abbreviation: SD, standard deviation.

  • Likert scale: 1, no experience, I knew nothing about this condition; 2, no experience, I had heard/read about this condition; 3, I had experience caring for 1 patient (simulated or real) with this condition; 4, I had experience caring for 25 patients with this condition; 5, I had experience caring for many (>5) patients with this condition.

Urinary tract infection | 4.5 (0.8)
Diabetes mellitus | 4.5 (0.8)
Asthma | 4.4 (0.9)
Community‐acquired pneumonia | 4.3 (0.9)
Chronic obstructive pulmonary disease | 4.3 (1.0)
Cellulitis | 4.2 (0.9)
Congestive heart failure | 4.1 (1.0)
Cardiac arrhythmia | 3.9 (1.1)
Delirium and dementia | 3.8 (1.1)
Acute coronary syndrome | 3.8 (1.2)
Acute renal failure | 3.8 (1.1)
Gastrointestinal bleed | 3.7 (1.1)
Venous thromboembolism | 3.7 (1.2)
Pain management | 3.7 (1.2)
Perioperative medicine | 3.6 (1.4)
Stroke | 3.5 (1.2)
Alcohol and drug withdrawal | 3.4 (1.1)
Sepsis syndrome | 3.3 (1.1)
Hospital‐acquired pneumonia | 3.2 (1.1)
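The mean (SD) figures in Table 2 are ordinary summary statistics over the 1–5 Likert responses. A minimal sketch of that computation, using invented responses for two conditions (illustrative values only, not the study's raw survey data):

```python
from statistics import mean, stdev

# Hypothetical 1-5 Likert responses for two Table 2 conditions.
# These lists are made up for illustration; they are not the study's data.
responses = {
    "Urinary tract infection": [5, 5, 4, 5, 3, 5, 4, 5],
    "Sepsis syndrome": [3, 2, 4, 3, 3, 4, 2, 3],
}

# Report each condition as "mean (SD)", matching the Table 2 format.
for condition, scores in responses.items():
    print(f"{condition}: {mean(scores):.1f} ({stdev(scores):.1f})")
```

With these invented inputs the first line prints "Urinary tract infection: 4.5 (0.8)", the same shape as the table entries.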

Procedures

Most PA hospitalists (67%) interpret electrocardiograms and chest X‐rays regularly (more than 1‐2/week). However, nearly all PA hospitalists never or rarely (less than 1‐2/year) perform any invasive procedures, including arthrocentesis (98%), lumbar puncture (100%), paracentesis (91%), thoracentesis (98%), central line placement (91%), peripherally inserted central catheter placement (91%), and peripheral intravenous insertion (91%). Despite performing these procedures infrequently, more than 50% of respondents indicated that it is either preferable or essential to be able to perform them.

Content Knowledge

The PA hospitalists indicated which content areas, had they learned the material before starting their hospitalist careers, might have allowed them to be more successful (Table 3). The 4 topics that PA hospitalists believed would have helped them most in caring for inpatients were palliative care (85% agreed or strongly agreed), nutrition for hospitalized patients (84%), performing consultations in the hospital (64%), and prevention of health care‐associated infections (62%).

Content Areas that 62 Respondent PAs Believed Would Have Enhanced Their Effectiveness in Practicing Hospital Medicine Had They Had Additional Training Before Starting Their Work as Hospitalists
Health Care System Topics | PAs Who Agreed or Strongly Agreed, n (%)
Palliative care | 47 (85)
Nutrition for hospitalized patients | 46 (84)
Performing consultations in hospital | 35 (64)
Prevention of health care‐associated infections | 34 (62)
Diagnostic decision‐making processes | 32 (58)
Patient handoff and transitions of care | 31 (56)
Evidence‐based medicine | 28 (51)
Communication with patients and families | 27 (49)
Drug safety and drug interactions | 27 (49)
Team approach and multidisciplinary care | 26 (48)
Patient safety and quality improvement processes | 25 (45)
Care of elderly patients | 24 (44)
Medical ethics | 22 (40)
Patient education | 20 (36)
Care of uninsured or underinsured patients | 18 (33)

Professional Growth as Hospitalist Providers

PAs judged working with physician preceptors (mean ± SD, 4.5 ± 0.6) and discussing patients with consultants (4.3 ± 0.8) to be most helpful for their professional growth, whereas receiving feedback/audits about their performance (3.5 ± 1.0), attending conferences/lectures (3.6 ± 0.7), and reading journals/textbooks (3.6 ± 0.8) were rated as less useful. Respondents believed that new hospitalist PAs needed a mean of 11 months (SD, 8.6) to become fully competent team members. Forty‐three percent of respondents shared the perspective that some clinical experience in an inpatient setting was an essential prerequisite for entry into a hospitalist position. Although more than half (58%) felt that completion of a postgraduate training program in hospital medicine was not a necessary prerequisite, almost all (91%) indicated that they would have been interested in such a program even if it meant receiving a lower stipend than a hospitalist PA salary during the first year on the job (Table 4).

Self‐Reported Interest from 55 Respondents in Postgraduate Hospitalist Training Depending on Varying Levels of Incentives and Disincentives
Interest in Training | n (%)
Interested and willing to pay tuition | 1 (2)
Interested even if there was no stipend, as long as I didn't have to pay any additional tuition | 3 (5)
Interested ONLY if a stipend of at least 25% of a hospitalist PA salary was offered | 4 (7)
Interested ONLY if a stipend of at least 50% of a hospitalist PA salary was offered | 21 (38)
Interested ONLY if a stipend of at least 75% of a hospitalist PA salary was offered | 21 (38)
Interested ONLY if 100% of a hospitalist PA salary was offered | 4 (7)
Not interested under any circumstances | 1 (2)

DISCUSSION

Our survey addressed a wide range of topics related to PA hospitalists' learning needs, including their experience with the Core Competencies in Hospital Medicine and their views on the benefits of postgraduate PA training. Although self‐efficacy was not assessed, our study revealed that PAs who choose hospitalist careers have limited prior clinical experience with many medical conditions managed in inpatient settings, such as sepsis syndrome. This inexperience with commonly seen clinical conditions, such as sepsis, for which following guidelines can both reduce costs and improve outcomes, is problematic. More experience and training with such conditions would almost certainly reduce variability, improve skills, and augment confidence. The observed variation in experience caring for conditions that often prompt admission to the hospital among PAs starting their hospitalist careers emphasizes the need to be learner‐centered when training PAs, so as to provide tailored guidance and oversight.

Only a few other empiric research articles have focused on PA hospitalists. One described a postgraduate training program for PAs in hospital medicine that was launched in 2008. The curriculum was developed based on the Core Competencies in Hospital Medicine, and the authors explained that after 12 months of training, their first graduate functioned at the level of a PA with 4 years of experience.7 Several articles describe experiences using midlevel providers (including PAs) in general surgery, primary care medicine, cardiology, emergency medicine, critical care, pediatrics, and hospital medicine settings.5, 8–20 Many of these articles reported favorable results, showing that midlevel provider models were either superior to or just as effective as physician‐only models in terms of cost and quality measures. Many of these papers also alluded to the ways in which PAs have enabled graduate medical education training programs to comply with residents' duty‐hour restrictions. A recent analysis comparing inpatient care provided by a hospitalist‐PA model with a traditional resident‐based model revealed a slightly longer length of stay on the PA team but similar charges, readmission rates, and mortality.19 Another paper revealed that patients admitted to a residents' service, compared with a nonteaching hospitalist service that uses PAs and nurse practitioners, had higher comorbidity burdens and higher‐acuity diagnoses.20 The authors suggested that this variance might be explained by differences in the training, abilities, and goals of the groups. No published research article has sought to capture the perspectives of practicing hospitalist PAs.

Our study revealed that although half of respondents became hospitalists immediately after graduating from PA school, a majority agreed that additional clinical training in inpatient settings would have been welcomed and helpful. The results also reveal that although there is considerable perceived interest in postgraduate training programs in hospital medicine, very few such training opportunities exist for PAs.7, 21 The American Academy of Physician Assistants, the Society of Hospital Medicine, and the American Academy of Nurse Practitioners cosponsor an annual Adult Hospital Medicine Boot Camp for PAs and nurse practitioners to facilitate knowledge acquisition, but this course is an orientation rather than a comprehensive training program.22 Our findings suggest that more rigorous and thorough training in hospital medicine would be valued and appreciated by PA hospitalists.

Several limitations of this study should be considered. First, our survey respondents may not represent the entire spectrum of practicing PA hospitalists. However, the demographics of the 421 PAs who indicated their specialty as hospital medicine in the 2008 National Physician Assistants Census Report were not dissimilar from those of our informants; 65% were women, and their mean number of years in hospital medicine was 3.9.2 Second, our study sample was small. It was difficult to identify a national sample of hospitalist PAs, and we resorted to a creative use of social media to assemble one. Third, the study relied exclusively on self‐report, and because we asked about perceived learning needs at the time respondents started working as hospitalists, recall bias cannot be excluded. However, questions addressing attitudes and beliefs can only be answered by the informants themselves. That said, input from the hospitalist physicians who supervise these PAs about their training needs would have strengthened the reliability of the data, but this was not possible given the sampling strategy that we elected to use. Finally, our survey instrument was developed based on the Core Competencies in Hospital Medicine, a blueprint for developing standardized curricula for teaching hospital medicine in medical school, postgraduate training programs (ie, residency, fellowship), and continuing medical education programs. It is not clear whether the same competencies should be expected of PA hospitalists, who may have different job descriptions from physician hospitalists.

In conclusion, we present the first national data on the self‐perceived learning needs of PAs working in hospital medicine. This study collates their perceptions and highlights that PA school training alone does not adequately prepare PAs to care for hospitalized patients. Hospitalist groups may use these findings to coach and instruct newly hired or inexperienced hospitalist PAs, particularly until postgraduate training opportunities become more prevalent. PA schools may also consider these results when modifying their curricula, to emphasize the clinical content most relevant to the proportion of their graduates who will enter hospital medicine.

Acknowledgements

The authors would like to thank Drs. David Kern and Belinda Chen at Johns Hopkins Bayview Medical Center for their assistance in developing the survey instrument.

Financial support: This study was supported by the Linda Brandt Research Award program of the Association of Postgraduate PA Programs. Dr. Wright is a Miller‐Coulson Family Scholar and was supported through the Johns Hopkins Center for Innovative Medicine.

Disclosures: Dr. Torok and Ms. Lackner received a Linda Brandt Research Award from the Association of Postgraduate PA Programs for support of this study. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine.

References
  1. United States Department of Labor, Bureau of Labor Statistics. Available at: http://www.bls.gov. Accessed February 16, 2011.
  2. American Academy of Physician Assistants. Available at: http://www.aapa.org. Accessed April 20, 2011.
  3. Society of Hospital Medicine. Available at: http://www.hospitalmedicine.org. Accessed January 24, 2011.
  4. Accreditation Review Commission on Education for the Physician Assistants Accreditation Standards. Available at: http://www.arc‐pa.org/acc_standards. Accessed February 16, 2011.
  5. Parekh VI, Roy CL. Non‐physician providers in hospital medicine: not so fast. J Hosp Med. 2010;5(2):103–106.
  6. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1:48–56.
  7. Will KK, Budavari AL, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5:94–98.
  8. Resnick AS, Todd BA, Mullen JL, Morris JB. How do surgical residents and non‐physician practitioners play together in the sandbox? Curr Surg. 2006;63:155–164.
  9. Victorino GP, Organ CH. Physician assistant influence on surgery residents. Arch Surg. 2003;138:971–976.
  10. Buch KE, Genovese MY, Conigliaro JL, et al. Non‐physician practitioners' overall enhancement to a surgical resident's experience. J Surg Educ. 2008;65:50–53.
  11. Roblin DW, Howard DH, Becker ER, Kathleen Adams E, Roberts MH. Use of midlevel practitioners to achieve labor cost savings in the primary care practice of an MCO. Health Serv Res. 2004;39:607–626.
  12. Grzybicki DM, Sullivan PJ, Oppy JM, Bethke AM, Raab SS. The economic benefit for family/general medicine practices employing physician assistants. Am J Manag Care. 2002;8:613–620.
  13. Kaissi A, Kralewski J, Dowd B. Financial and organizational factors affecting the employment of nurse practitioners and physician assistants in medical group practices. J Ambul Care Manage. 2003;26:209–216.
  14. Nishimura RA, Linderbaum JA, Naessens JM, Spurrier B, Koch MB, Gaines KA. A nonresident cardiovascular inpatient service improves residents' experiences in an academic medical center: a new model to meet the challenges of the new millennium. Acad Med. 2004;79:426–431.
  15. Kleinpell RM, Ely EW, Grabenkort R. Nurse practitioners and physician assistants in the intensive care unit: an evidence‐based review. Crit Care Med. 2008;36:2888–2897.
  16. Carter AJ, Chochinov AH. A systematic review of the impact of nurse practitioners on cost, quality of care, satisfaction and wait times in the emergency department. CJEM. 2007;9:286–295.
  17. Mathur M, Rampersad A, Howard K, Goldman GM. Physician assistants as physician extenders in the pediatric intensive care unit setting—a 5‐year experience. Pediatr Crit Care Med. 2005;6:14–19.
  18. Abrass CK, Ballweg R, Gilshannon M, Coombs JB. A process for reducing workload and enhancing residents' education at an academic medical center. Acad Med. 2001;76:798–805.
  19. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist‐physician assistant model vs a traditional resident‐based model. J Hosp Med. 2011;6:112–130.
  20. O'Connor AB, Lang VJ, Lurie SJ, et al. The effect of nonteaching services on the distribution of inpatient cases for internal medicine residents. Acad Med. 2009;84:220–225.
  21. Association of Postgraduate PA Programs. Available at: http://appap.org/Home/tabid/38/Default.aspx. Accessed February 16, 2011.
  22. Adult Hospital Medicine Boot Camp for PAs and NPs. Available at: http://www.aapa.org/component/content/article/23—general‐/673‐adult‐hospital‐medicine‐boot‐camp‐for‐pas‐and‐nps. Accessed February 16, 2011.
Issue
Journal of Hospital Medicine - 7(3)
Page Number
190-194
Display Headline
Learning needs of physician assistants working in hospital medicine
Article Source
Copyright © 2011 Society of Hospital Medicine
Correspondence Location
Johns Hopkins University School of Medicine, Johns Hopkins Bayview Medical Center, 5200 Eastern Avenue, MFL Building West Tower 6F CIMS Suite, Baltimore, MD 21224

RIP Conference Provides Peer Mentoring

Display Headline
Research in progress conference for hospitalists provides valuable peer mentoring

The research‐in‐progress (RIP) conference is commonplace in academia, but there are no studies that objectively characterize its value. Bringing faculty together away from revenue‐generating activities carries a significant cost. As such, measuring the success of such gatherings is necessary.

Mentors are an invaluable influence on the careers of junior faculty members, helping them to produce high‐quality research.1–3 Unfortunately, some divisions lack the mentorship needed to support the academic development of less experienced faculty.1 Peer mentorship may be a solution. RIP sessions represent an opportunity to intentionally formalize peer mentoring. Further, these sessions can facilitate collaborations as individuals become aware of colleagues' interests. The goal of this study was to assess the value of the research‐in‐progress conference initiated within the hospitalist division at our institution.

Methods

Study Design

This cohort study was conducted to assess the value of the RIP conference among hospitalists in our division and to track the academic outcomes of the presented projects.

Setting and Participants

The study took place at Johns Hopkins Bayview Medical Center (JHBMC), a 335‐bed university‐affiliated medical center in Baltimore, Maryland. The hospitalist division consists of faculty physicians, nurse practitioners, and physician assistants (20.06 FTE physicians and 7.41 FTE midlevel providers). Twelve (54%) of our faculty members are female, and the mean age of providers is 35.7 years. The providers have been practicing hospital medicine for 3.0 years on average; 2 (9%) are clinical associates, 16 (73%) are instructors, and 3 (14%) are assistant professors.

All faculty members presenting at the RIP session were members of the division. A senior faculty member (a professor in the Division of General Internal Medicine) helps to coordinate the conference. The group's research assistant was present at the sessions and was charged with data collection and collation.

The Johns Hopkins University institutional review board approved the study.

The Research in Progress Conference

During the 2009 academic year, our division held 15 RIP sessions. At each session, 1 faculty member presented a research proposal. The goal of each session was to provide a forum where faculty members could share their research ideas (specific aims, hypotheses, planned design, outcome measures, analytic plans, and preliminary results [if applicable]) in order to receive feedback. The senior faculty member met with the presenter prior to each session in order to: (1) ensure that half the RIP time was reserved for discussion and (2) review the presenter's goals so these would be made explicit to peers. The coordinator of the RIP conference facilitated the discussion, solicited input from all attendees, and encouraged constructive criticism.

Evaluation, Data Collection, and Analysis

At the end of each session, attendees (exclusively members of the hospitalist division) were asked to complete an anonymous survey. The 1‐page instrument was designed (1) with input from curriculum development experts4 and (2) after a review of the literature about RIP conferences; these steps conferred content validity on the instrument, which assessed perceptions about the session's quality and what was learned. Five‐point Likert scales were used to characterize the conference's success in several areas, including whether it was intellectually/professionally stimulating and kept attendees apprised of their colleagues' interests. The survey also assessed the participatory nature of the conference (balance of presentation vs discussion), its climate (extremely critical vs extremely supportive), and how the conference assisted the presenter. The presenters completed a distinct survey about how helpful the conference was in improving/enhancing their projects. A final open‐ended section invited additional comments. The instrument was piloted and iteratively revised before its use in this study.

For the projects presented, we assessed the percentage that resulted in a peer‐reviewed publication or a presentation at a national meeting.
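The percentages reported in the Results for these Likert items are top-two-box tallies (the share of respondents choosing the two highest anchors). A minimal sketch of that tally, with invented ratings rather than the study's evaluation data:

```python
from collections import Counter

# Hypothetical 5-point session ratings (1 = not at all ... 5 = extremely).
# Invented values for illustration; not the study's evaluation data.
ratings = [5, 4, 5, 3, 4, 5, 4, 2, 5, 4]

counts = Counter(ratings)
n = len(ratings)

# Percentage rating the session 4 ("a lot") or 5 ("extremely"),
# the top-two-box summary used when reporting the results.
top_two_pct = 100 * (counts[4] + counts[5]) / n
print(f"{top_two_pct:.0f}% rated the session a lot or extremely")
```

With these invented inputs, 8 of 10 ratings are 4 or 5, so the sketch prints "80% rated the session a lot or extremely".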

Results

The mean number of attendees at the RIP sessions was 9.6 persons. A total of 143 evaluations were completed. All 15 presenters (100%) completed their assessments. The research ideas presented spanned a breadth of topics in clinical research, quality improvement, policy, and professional development (Table 1).

Details About RIP Sessions Held During 2009 Academic Year
Session | Date | Presenter | Topic | Evaluations Completed
1 | 7/2008 | Dr. CS | Hospital medicine in Canada versus the United States | 7
2 | 7/2008 | Dr. RT | Procedures by hospitalists | 9
3 | 8/2008 | Dr. MA | Clostridium difficile treatment in the hospital | 11
4 | 8/2008 | Dr. EH | Active bed management | 6
5 | 9/2008 | Dr. AS | Medication reconciliation for geriatric inpatients | 10
6 | 9/2008 | Dr. DT | Time‐motion study of hospitalists | 10
7 | 10/2008 | Dr. KV | e‐Triage pilot | 16
8 | 11/2008 | Dr. EH | Assessing clinical performance of hospitalists | 7
9 | 12/2008 | Dr. SC | Trends and implications of hospitalists' morale | 8
10 | 1/2009 | Dr. TB | Lessons learned: tracking urinary catheter use at Bayview | 11
11 | 2/2009 | Dr. FK | Utilizing audit and feedback to improve performance in tobacco dependence counseling | 12
12 | 3/2009 | Dr. MK | Survivorship care plans | 7
13 | 4/2009 | Dr. DK | Outpatient provider preference for discharge summary format/style/length | 7
14 | 5/2009 | Dr. RW | Comparing preoperative consults done by hospitalists and cardiologists | 11
15 | 6/2009 | Dr. AK | Development of Web‐based messaging tool for providers | 12

Presenter Perspective

All 15 presenters (100%) felt a lot or tremendously supported during their sessions. Thirteen physicians (86%) believed that the sessions were a lot or tremendously helpful in advancing their projects. The presenters believed that the guidance and discussions related to their research ideas, aims, hypotheses, and plans were most helpful for advancing their projects (Table 2).

Perspectives from the 15 Presenters About Research‐in‐Progress Session
 | Not at All, n (%) | A Little, n (%) | Some, n (%) | A Lot, n (%) | Tremendously, n (%)
General questions:
Intellectually/professionally stimulating | 0 (0) | 0 (0) | 0 (0) | 5 (33) | 10 (66)
Feeling supported by your colleagues in your scholarly pursuits | 0 (0) | 0 (0) | 0 (0) | 4 (27) | 11 (73)
Session helpful in the following areas:
Advancing your project | 0 (0) | 0 (0) | 2 (13) | 5 (33) | 8 (53)
Generated new hypotheses | 1 (6) | 3 (20) | 5 (33) | 5 (33) | 1 (6)
Clarification of research questions | 0 (0) | 2 (13) | 4 (27) | 7 (47) | 2 (13)
Ideas for alternate methods | 1 (6) | 1 (6) | 2 (13) | 7 (47) | 4 (27)
New outcomes suggested | 1 (6) | 2 (13) | 2 (13) | 5 (33) | 5 (33)
Strategies to improve or enhance data collection | 0 (0) | 2 (13) | 0 (0) | 8 (53) | 5 (33)
Suggestions for alternate analyses or analytical strategies | 1 (1) | 1 (6) | 4 (27) | 5 (33) | 4 (27)
Input into what is most novel/interesting about this work | 0 (0) | 2 (13) | 3 (20) | 6 (40) | 4 (27)
Guidance about the implications of the work | 1 (6) | 2 (13) | 1 (6) | 7 (47) | 4 (27)
Ideas about next steps or future direction/studies | 0 (0) | 0 (0) | 3 (21) | 8 (57) | 3 (21)

Examples of the written comments are:

  • I was overwhelmed by how engaged people were in my project.

  • The process of preparing for the session and then the discussion both helped my thinking. Colleagues were very supportive.

  • I am so glad I heard these comments and received this feedback now, rather than from peer reviewers selected by a journal to review my study. It would have been a much more difficult situation to fix at that later time.

 

Attendee Perspective

The majority of attendees (123 of 143, 86%) found the sessions to be a lot or extremely stimulating, and almost all (96%) were a lot or extremely satisfied with how the RIP sessions kept them abreast of their colleagues' academic interests. In addition, 92% judged the session's climate to be a lot or extremely supportive, and 88% deemed the balance of presentation to discussion to be just right. Attendees believed that they were most helpful to the presenter in terms of conceiving ideas for alternative methods to be used to answer the research question and in providing strategies to improve data collection (Table 3).

Perspectives from the 143 Attendees Who Completed Evaluations About How the Research‐in‐Progress Session Was Helpful to the Presenter
Insight Offered | n (%)
Ideas for alternate methods | 92 (64%)
Strategies to improve data collection | 85 (59.4%)
New hypotheses generated | 84 (58.7%)
Ideas for next steps/future direction/studies | 83 (58%)
New outcomes suggested that should be considered | 69 (48%)
Clarification of the research questions | 61 (43%)
Input about what is most novel/interesting about the work | 60 (42%)
Guidance about the real implications of the work | 59 (41%)
Suggestions for alternate analyses or analytical strategies | 51 (36%)

The free text comments primarily addressed how the presenters' research ideas were helped by the session:

  • There were great ideas for improvement, including practical approaches for recruitment.

  • The session made me think of the daily routine things that we do that could be studied.

  • There were some great ideas to help Dr. A make the study more simple, doable, and practical. There were also some good ideas regarding potential sources of funding.

 

Academic Success

Of the 15 projects, 6 have been published in peer‐reviewed journals as first‐ or senior‐authored publications.5–10 Of these, 3 were presented at national meetings prior to publication. Four additional projects have been presented at a national society's annual meeting, all of which are being prepared for publication. Of the remaining 5 presentations, 4 were terminated because of the low likelihood of academic success. The remaining project is ongoing.

Comparatively, scholarly output in the prior year by the 24 physicians in the hospitalist group was 4 first‐ or senior‐authored publications in peer‐reviewed journals and 3 presentations at national meetings.

Discussion

In this article, we report our experience with the RIP conference. The sessions were perceived as intellectually stimulating and supportive, and the discussions proved helpful in advancing project ideas. Ample discussion time and good attendance were thought to be critical to the conference's success.

To our knowledge, this is the first article to gather feedback from attendees and presenters at a RIP conference and to track its academic outcomes. Several types of meetings have been established within faculty and trainee groups to support and encourage scholarly activities.11, 12 The benefits of peer collaboration and peer mentoring have been described in the literature.13, 14 For example, Edwards described the success of shortstop meetings, in which small groups of faculty members met every 4‐6 weeks to discuss research projects and exchange feedback.15 Santucci described peer‐mentored research development meetings that increased research productivity.12

Mentoring is critically important for academic success in medicine.16–19 When divisions have few senior mentors available, peer mentoring has proven to be an indispensable mechanism for supporting faculty members.20–22 The RIP conference provided a forum for peer mentoring and a partial solution to the scarcity of experienced research mentors in the division. The RIP sessions appear to have helped bring the majority of presented ideas to academic fruition. Perhaps even more important, the sessions made it possible to terminate studies judged to have low academic promise before the faculty had invested significant time.

Several limitations of our study should be considered. First, this study involved a research‐in‐progress conference coordinated for a group of hospitalist physicians at 1 institution, and the results may not be generalizable. Second, although attendance was good at each conference, some faculty members did not come to many sessions. It is possible that those not attending may have rated the sessions differently. Session evaluations were anonymous, and we do not know whether specific attendees rated all sessions highly, thereby resulting in some degree of clustering. Third, this study did not compare the effectiveness of the RIP conference with other peer‐mentorship models. Finally, our study was uncontrolled. Although it would not be possible to restrict specific faculty from presenting at or attending the RIP conference, we intend to more carefully collect attendance data to see whether there might be a dose‐response effect with respect to participation in this conference and academic success.

In conclusion, our RIP conference was perceived as valuable by our group and was associated with academic success. In our division, the RIP conference serves as a way to operationalize peer mentoring. Our findings may help other groups to refine either the focus or format of their RIP sessions and those wishing to initiate such a conference.

Journal of Hospital Medicine - 6(1), pages 43-46
Keywords: research skills, teamwork

The research‐in‐progress (RIP) conference is commonplace in academia, but there are no studies that objectively characterize its value. Bringing faculty together away from revenue‐generating activities carries a significant cost. As such, measuring the success of such gatherings is necessary.

Mentors are an invaluable influence on the careers of junior faculty members, helping them to produce high-quality research.1-3 Unfortunately, some divisions lack the mentorship needed to support the academic needs of less experienced faculty.1 Peer mentorship may be a solution. RIP sessions represent an opportunity to intentionally formalize peer mentoring. Further, these sessions can facilitate collaborations as individuals become aware of colleagues' interests. The goal of this study was to assess the value of the research-in-progress conference initiated within the hospitalist division at our institution.

Methods

Study Design

This cohort study was conducted to evaluate the perceived value of the RIP conference among hospitalists in our division and the academic outcomes of the presented projects.

Setting and Participants

The study took place at Johns Hopkins Bayview Medical Center (JHBMC), a 335‐bed university‐affiliated medical center in Baltimore, Maryland. The hospitalist division consists of faculty physicians, nurse practitioners, and physician assistants (20.06 FTE physicians and 7.41 FTE midlevel providers). Twelve (54%) of our faculty members are female, and the mean age of providers is 35.7 years. The providers have been practicing hospitalist medicine for 3.0 years on average; 2 (9%) are clinical associates, 16 (73%) are instructors, and 3 (14%) are assistant professors.

All faculty members presenting at the RIP session were members of the division. A senior faculty member (a professor in the Division of General Internal Medicine) helps to coordinate the conference. The group's research assistant was present at the sessions and was charged with data collection and collation.

The Johns Hopkins University institutional review board approved the study.

The Research in Progress Conference

During the 2009 academic year, our division held 15 RIP sessions. At each session, 1 faculty member presented a research proposal. The goal of each session was to provide a forum where faculty members could share their research ideas (specific aims, hypotheses, planned design, outcome measures, analytic plans, and preliminary results [if applicable]) in order to receive feedback. The senior faculty member met with the presenter prior to each session in order to: (1) ensure that half the RIP time was reserved for discussion and (2) review the presenter's goals so these would be made explicit to peers. The coordinator of the RIP conference facilitated the discussion, solicited input from all attendees, and encouraged constructive criticism.

Evaluation, Data Collection, and Analysis

At the end of each session, attendees (who were exclusively members of the hospitalist division) were asked to complete an anonymous survey. The 1-page instrument was designed (1) with input from curriculum development experts4 and (2) after a review of the literature about RIP conferences. These steps conferred content validity to the instrument, which assessed perceptions about the session's quality and what was learned. Five-point Likert scales were used to characterize the conference's success in several areas, including being intellectually/professionally stimulating and keeping attendees apprised of their colleagues' interests. The survey also assessed the participatory nature of the conference (balance of presentation vs discussion), its climate (extremely critical vs extremely supportive), and how the conference assisted the presenter. The presenters completed a distinct survey related to how helpful the conference was in improving/enhancing their projects. A final open-ended section invited additional comments. The instrument was piloted and iteratively revised before its use in this study.

For the projects presented, we assessed the percentage that resulted in a peer‐reviewed publication or a presentation at a national meeting.

Results

The mean number of attendees at the RIP sessions was 9.6 persons. A total of 143 evaluations were completed. All 15 presenters (100%) completed their assessments. The research ideas presented spanned a breadth of topics in clinical research, quality improvement, policy, and professional development (Table 1).

Table 1. Details About RIP Sessions Held During the 2009 Academic Year

Session | Date | Presenter | Topic | Evaluations Completed
1 | 7/2008 | Dr. CS | Hospital medicine in Canada versus the United States | 7
2 | 7/2008 | Dr. RT | Procedures by hospitalists | 9
3 | 8/2008 | Dr. MA | Clostridium difficile treatment in the hospital | 11
4 | 8/2008 | Dr. EH | Active bed management | 6
5 | 9/2008 | Dr. AS | Medication reconciliation for geriatric inpatients | 10
6 | 9/2008 | Dr. DT | Time-motion study of hospitalists | 10
7 | 10/2008 | Dr. KV | e-Triage pilot | 16
8 | 11/2008 | Dr. EH | Assessing clinical performance of hospitalists | 7
9 | 12/2008 | Dr. SC | Trends and implications of hospitalists' morale | 8
10 | 1/2009 | Dr. TB | Lessons learned: tracking urinary catheter use at Bayview | 11
11 | 2/2009 | Dr. FK | Utilizing audit and feedback to improve performance in tobacco dependence counseling | 12
12 | 3/2009 | Dr. MK | Survivorship care plans | 7
13 | 4/2009 | Dr. DK | Outpatient provider preference for discharge summary format/style/length | 7
14 | 5/2009 | Dr. RW | Comparing preoperative consults done by hospitalists and cardiologists | 11
15 | 6/2009 | Dr. AK | Development of Web-based messaging tool for providers | 12

Presenter Perspective

All 15 presenters (100%) felt a lot or tremendously supported during their sessions. Thirteen physicians (86%) believed that the sessions were a lot or tremendously helpful in advancing their projects. The presenters believed that the guidance and discussions related to their research ideas, aims, hypotheses, and plans were most helpful for advancing their projects (Table 2).

Table 2. Perspectives from the 15 Presenters About the Research-in-Progress Sessions

Item | Not at All, n (%) | A Little, n (%) | Some, n (%) | A Lot, n (%) | Tremendously, n (%)
General questions:
Intellectually/professionally stimulating | 0 (0) | 0 (0) | 0 (0) | 5 (33) | 10 (66)
Feeling supported by your colleagues in your scholarly pursuits | 0 (0) | 0 (0) | 0 (0) | 4 (27) | 11 (73)
Session helpful in the following areas:
Advancing your project | 0 (0) | 0 (0) | 2 (13) | 5 (33) | 8 (53)
Generated new hypotheses | 1 (6) | 3 (20) | 5 (33) | 5 (33) | 1 (6)
Clarification of research questions | 0 (0) | 2 (13) | 4 (27) | 7 (47) | 2 (13)
Ideas for alternate methods | 1 (6) | 1 (6) | 2 (13) | 7 (47) | 4 (27)
New outcomes suggested | 1 (6) | 2 (13) | 2 (13) | 5 (33) | 5 (33)
Strategies to improve or enhance data collection | 0 (0) | 2 (13) | 0 (0) | 8 (53) | 5 (33)
Suggestions for alternate analyses or analytical strategies | 1 (6) | 1 (6) | 4 (27) | 5 (33) | 4 (27)
Input into what is most novel/interesting about this work | 0 (0) | 2 (13) | 3 (20) | 6 (40) | 4 (27)
Guidance about the implications of the work | 1 (6) | 2 (13) | 1 (6) | 7 (47) | 4 (27)
Ideas about next steps or future direction/studies | 0 (0) | 0 (0) | 3 (21) | 8 (57) | 3 (21)

Examples of the written comments are:

  • I was overwhelmed by how engaged people were in my project.

  • The process of preparing for the session and then the discussion both helped my thinking. Colleagues were very supportive.

  • I am so glad I heard these comments and received this feedback now, rather than from peer reviewers selected by a journal to review my study. It would have been a much more difficult situation to fix at that later time.

 

Attendee Perspective

The majority of attendees (123 of 143, 86%) found the sessions to be a lot or extremely stimulating, and almost all (96%) were a lot or extremely satisfied with how the RIP sessions kept them abreast of their colleagues' academic interests. In addition, 92% judged the session's climate to be a lot or extremely supportive, and 88% deemed the balance of presentation to discussion to be just right. Attendees believed that they were most helpful to the presenter in terms of conceiving ideas for alternative methods to be used to answer the research question and in providing strategies to improve data collection (Table 3).

Table 3. Perspectives from the 143 Attendees Who Completed Evaluations About How the Research-in-Progress Session Was Helpful to the Presenter

Insight Offered | n (%)
Ideas for alternate methods | 92 (64%)
Strategies to improve data collection | 85 (59%)
New hypotheses generated | 84 (59%)
Ideas for next steps/future direction/studies | 83 (58%)
New outcomes suggested that should be considered | 69 (48%)
Clarification of the research questions | 61 (43%)
Input about what is most novel/interesting about the work | 60 (42%)
Guidance about the real implications of the work | 59 (41%)
Suggestions for alternate analyses or analytical strategies | 51 (36%)

The free text comments primarily addressed how the presenters' research ideas were helped by the session:

  • There were great ideas for improvement, including practical approaches for recruitment.

  • The session made me think of the daily routine things that we do that could be studied.

  • There were some great ideas to help Dr. A make the study more simple, doable, and practical. There were also some good ideas regarding potential sources of funding.

 

Academic Success

Of the 15 projects, 6 have been published in peer-reviewed journals as first- or senior-authored publications.5-10 Of these, 3 were presented at national meetings prior to publication. Four additional projects have been presented at a national society's annual meeting, all of which are being prepared for publication. Of the remaining 5 presentations, 4 were terminated because of the low likelihood of academic success. The remaining project is ongoing.
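The outcome tallies above can be verified with a few lines of arithmetic. The sketch below (Python chosen purely for illustration; it is not part of the study's analysis) uses only the figures reported in this section:

```python
# Outcomes of the 15 projects presented at the RIP conference,
# taken directly from the Academic Success section above.
outcomes = {
    "published in peer-reviewed journals": 6,
    "presented nationally, publication in preparation": 4,
    "terminated (low likelihood of academic success)": 4,
    "ongoing": 1,
}

total = sum(outcomes.values())
published_or_presented = 6 + 4  # projects with a national-level academic product

print(f"Total projects: {total}")  # 15
print(f"Published or presented nationally: {published_or_presented} "
      f"({published_or_presented / total:.0%})")  # 10 (67%)
```

The categories sum to the 15 presented projects, and two-thirds of the presented ideas reached a national-level academic product.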

Comparatively, scholarly output in the prior year by the 24 physicians in the hospitalist group was 4 first‐ or senior‐authored publications in peer‐reviewed journals and 3 presentations at national meetings.

Discussion

In this article, we report our experience with the RIP conference. The sessions were perceived to be intellectually stimulating and supportive, and the discussions proved helpful in advancing project ideas. Ample discussion time and good attendance were thought to be critical to the conference's success.

To our knowledge, this is the first article to gather feedback from attendees and presenters at a RIP conference and to track its academic outcomes. Several types of meetings have been established within faculty and trainee groups to support and encourage scholarly activities.11, 12 The benefits of peer collaboration and peer mentoring have been described in the literature.13, 14 For example, Edward described the success of "short stop" meetings among small groups of faculty members every 4-6 weeks, in which discussions of research projects and mutual feedback would occur.15 Santucci described peer-mentored research development meetings, with increased research productivity.12

Mentoring is critically important for academic success in medicine.16-19 When divisions have few senior mentors available, peer mentoring has proven to be an indispensable mechanism for supporting faculty members.20-22 The RIP conference provided a forum for peer mentoring and a partial solution to the limited supply of experienced research mentors in the division. The RIP sessions appear to have helped bring the majority of presented ideas to academic fruition. Perhaps even more important, the sessions were able to terminate studies judged to have low academic promise before faculty had invested significant time in them.

Several limitations of our study should be considered. First, this study involved a research‐in‐progress conference coordinated for a group of hospitalist physicians at 1 institution, and the results may not be generalizable. Second, although attendance was good at each conference, some faculty members did not come to many sessions. It is possible that those not attending may have rated the sessions differently. Session evaluations were anonymous, and we do not know whether specific attendees rated all sessions highly, thereby resulting in some degree of clustering. Third, this study did not compare the effectiveness of the RIP conference with other peer‐mentorship models. Finally, our study was uncontrolled. Although it would not be possible to restrict specific faculty from presenting at or attending the RIP conference, we intend to more carefully collect attendance data to see whether there might be a dose‐response effect with respect to participation in this conference and academic success.

In conclusion, our RIP conference was perceived as valuable by our group and was associated with academic success. In our division, the RIP conference serves as a way to operationalize peer mentoring. Our findings may help other groups refine the focus or format of their RIP sessions, as well as groups wishing to initiate such a conference.

References
  1. Palepu A, Friedman RH, Barnett RC, et al. Junior faculty members' mentoring relationships and their professional development in US medical schools. Acad Med. 1998;73:318-323.
  2. Swazey JP, Anderson MS. Mentors, Advisors and Role Models in Graduate and Professional Education. Washington, DC: Association of Academic Health Centers; 1996.
  3. Bland C, Schmitz CC. Characteristics of the successful researcher and implications for faculty development. J Med Educ. 1986;61:22-31.
  4. Kern DE, Thomas PA, Hughes MT. Curriculum Development for Medical Education: A Six-Step Approach. 2nd ed. Baltimore, MD: The Johns Hopkins University Press; 2009.
  5. Soong C, Fan E, Wright SM, et al. Characteristics of hospitalists and hospitalist programs in the United States and Canada. J Clin Outcomes Meas. 2009;16:69-74.
  6. Thakkar R, Wright S, Boonyasai R, et al. Procedures performed by hospitalist and non-hospitalist general internists. J Gen Intern Med. 2010;25:448-452.
  7. Abougergi M, Broor A, Jaar B, et al. Intravenous immunoglobulin for the treatment of severe Clostridium difficile colitis: an observational study and review of the literature [review]. J Hosp Med. 2010;5:E1-E9.
  8. Howell E, Bessman E, Wright S, et al. Active bed management by hospitalists and emergency department throughput. Ann Intern Med. 2008;149:804-811.
  9. Kantsiper M, McDonald E, Wolff A, et al. Transitioning to breast cancer survivorship: perspectives of patients, cancer specialists, and primary care providers. J Gen Intern Med. 2009;24(suppl 2):S459-S466.
  10. Kisuule F, Necochea A, Wright S, et al. Utilizing audit and feedback to improve hospitalists' performance in tobacco dependence counseling. Nicotine Tob Res. 2010;12:797-800.
  11. Dorrance KA, Denton GD, Proemba J, et al. An internal medicine interest group research program can improve scholarly productivity of medical students and foster mentoring relationships with internists. Teach Learn Med. 2008;20:163-167.
  12. Santucci AK, Lingler JH, Schmidt KL, et al. Peer-mentored research development meeting: a model for successful peer mentoring among junior level researchers. Acad Psychiatry. 2008;32:493-497.
  13. Hurria A, Balducci L, Naeim A, et al. Mentoring junior faculty in geriatric oncology: report from the cancer and aging research group. J Clin Oncol. 2008;26:3125-3127.
  14. Marshall JC, Cook DJ, the Canadian Critical Care Trials Group. Investigator-led clinical research consortia: the Canadian Critical Care Trials Group. Crit Care Med. 2009;37(1):S165-S172.
  15. Edward K. "Short stops": peer support of scholarly activity. Acad Med. 2002;77:939.
  16. Luckhaupt SE, Chin MH, Mangione CM, Phillips RS, Bell D, Leonard AC, Tsevat J. Mentorship in academic general internal medicine: results of a survey of mentors. J Gen Intern Med. 2005;20:1014-1018.
  17. Zerzan JT, Hess R, Schur E, et al. Making the most of mentors: a guide for mentees. Acad Med. 2009;84:140-144.
  18. Sambunjak D, Straus SE, Marusić A. Mentoring in academic medicine: a systematic review. JAMA. 2006;296:1103-1115.
  19. Steiner J, Curtis P, Lanphear B, et al. Assessing the role of influential mentors in the research development of primary care fellows. Acad Med. 2004;79:865-872.
  20. Moss J, Teshima J, Leszcz M. Peer group mentoring of junior faculty. Acad Psychiatry. 2008;32:230-235.
  21. Files JA, Blair JE, Mayer AP, Ko MG. Facilitated peer mentorship: a pilot program for academic advancement of female medical faculty. J Womens Health. 2008;17:1009-1015.
  22. Pololi L, Knight S. Mentoring faculty in academic medicine: a new paradigm? J Gen Intern Med. 2005;20:866-870.
Issue
Journal of Hospital Medicine - 6(1)
Page Number
43-46
Display Headline
Research in progress conference for hospitalists provides valuable peer mentoring
Legacy Keywords
research skills, teamwork
Copyright © 2011 Society of Hospital Medicine
Correspondence Location
Johns Hopkins University, School of Medicine, Johns Hopkins Bayview Medical Center, 5200 Eastern Avenue, Mason F. Lord Building, West Tower, 6th Floor, Collaborative Inpatient Medical Service Office, Baltimore, MD 21224

Consultation Improvement Teaching Module

Article Type
Changed
Sun, 05/28/2017 - 21:18
Display Headline
A case‐based teaching module combined with audit and feedback to improve the quality of consultations

An important role of the internist is that of inpatient medical consultant.1–3 As consultants, internists make recommendations regarding the patient's medical care and help the primary team to care for the patient. This requires familiarity with the body of knowledge of consultative medicine, as well as process skills that relate to working with teams of providers.1, 4, 5 For some physicians, the knowledge and skills of medical consultation are acquired during residency; however, many internists feel inadequately prepared for their roles as consultants.6–8 Because no specific requirements for medical consultation curricula during graduate medical education have been set forth, internists and other physicians do not receive uniform or comprehensive training in this area.3, 5–7, 9 Although internal medicine residents may gain experience while performing consultations on subspecialty rotations (eg, cardiology), the teaching on these blocks tends to be focused on the specialty content and less so on consultative principles.1, 4

As inpatient care is increasingly being taken over by hospitalists, the role of the hospitalist has expanded to include medical consultation. It is estimated that 92% of hospitalists care for patients on medical consultation services.8 The Society of Hospital Medicine (SHM) has also included medical consultation as one of the core competencies of the hospitalist.2 Therefore, it is essential that hospitalists master the knowledge and skills that are required to serve as effective consultants.10, 11

An educational strategy that has been shown to be effective in improving medical practice is audit and feedback.12–15 Providing physicians with feedback on their clinical practice has been shown to improve performance more so than other educational methods.12 Practice‐based learning and improvement (PBLI) utilizes this strategy, and it has become one of the core competencies stressed by the Accreditation Council for Graduate Medical Education (ACGME). It involves analyzing one's own patient care practices in order to identify areas for improvement. In this study, we tested the impact of a newly developed one‐on‐one medical consultation educational module, combined with audit and feedback, on the quality of the consultations performed by our hospitalists.

Materials and Methods

Study Design and Setting

This single group pre‐post educational intervention study took place at Johns Hopkins Bayview Medical Center (JHBMC), a 353‐bed university‐affiliated tertiary care medical center in Baltimore, MD, during the 2006‐2007 academic year.

Study Subjects

All 7 members of the hospitalist group at JHBMC who were serving on the medical consultation service during the study period participated. The internal medicine residents who elected to rotate on the consultation service during the study period were also exposed to the case‐based module component of the intervention.

Intervention

The educational intervention was delivered as a one‐on‐one session and lasted approximately 1 hour. The time was spent on the following activities:

  • A true‐false pretest to assess knowledge based on clinical scenarios (Appendix 1).

  • A case‐based module emphasizing the core principles of consultative medicine.16 The module was purposively designed to teach and stimulate thought around 3 complex general medical consultations. Each case is followed by questions about the scenario. The cases specifically address the role of the medical consultant and the ways to be most effective in this role, based on the recommendations of experts in the field.1, 10 Additional details about the content and format can be viewed at http://www.jhcme.com/site.16 As the physician worked through the teaching cases, the teacher facilitated discussion around wrong answers and issues that the learner wanted to discuss.

  • The true‐false test to assess knowledge was once again administered (the posttest was identical to the pretest).

  • For the hospitalist faculty members only (and not the residents), audit and feedback was utilized. The physician was shown 2 of his/her most recent consults and was asked to reflect upon the strengths and weaknesses of the consult. The hospitalist was explicitly asked to critique them in light of the knowledge they gained from the consultation module. The teacher also gave specific feedback, both positive and negative, about the written consultations with attention directed specifically toward: the number of recommendations, the specificity of the guidance (eg, exact dosing of medications), clear documentation of their name and contact information, and documentation that the suggestions were verbally passed on to the primary team.

 

Evaluation Data

Learner knowledge, both at baseline and after the case‐based module, was assessed using a written test.

Consultations performed before and after the intervention were compared. Copies of up to 5 consults done by each hospitalist during the year before or after the educational intervention were collected. Identifiers and dates were removed from the consults so that scorers did not know whether the consults were preintervention or postintervention. Consults were scored out of a possible total of 4 to 6 points, depending on whether specific elements were applicable. One point was given for each of the following: (1) number of recommendations ≤5; (2) specific details for all drugs listed [if applicable]; (3) specific details for imaging studies suggested [if applicable]; (4) specific follow‐up documented; (5) consultant's name clearly written; and (6) verbal contact with the referring team documented. These 6 elements were included based on expert recommendation.10 All consults were scored by 2 hospitalists independently. Disagreements in scores were infrequent (on <10% of the 48 consults scored), and these differed by only 1 point in the overall score. The disagreements were settled by discussion and consensus. All consult scores were converted to a score out of 5 to allow comparisons to be made.
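The scoring rule just described (one point per applicable element, rescaled to a score out of 5) can be sketched as follows. This is an illustrative sketch, not the study's actual instrument; the element names and dictionary encoding are assumptions:

```python
# Illustrative consult-scoring sketch. Elements marked None are "not
# applicable" and are excluded from both the numerator and denominator.
ELEMENTS = [
    "five_or_fewer_recommendations",   # (1) number of recommendations <=5
    "drug_details_specific",           # (2) applicable only if drugs recommended
    "imaging_details_specific",        # (3) applicable only if imaging suggested
    "follow_up_documented",            # (4)
    "consultant_name_legible",         # (5)
    "verbal_contact_documented",       # (6)
]

def score_consult(met):
    """met maps element name -> True / False / None (None = not applicable)."""
    applicable = [v for v in met.values() if v is not None]
    # One point per element met, rescaled so every consult is scored out of 5.
    return round(5 * sum(applicable) / len(applicable), 2)

# Example: a consult with no imaging recommendation (5 applicable elements),
# meeting 4 of them.
example = dict.fromkeys(ELEMENTS, True)
example["imaging_details_specific"] = None
example["follow_up_documented"] = False
print(score_consult(example))  # 5 * 4/5 -> 4.0
```

This rescaling is consistent with the fractional entries in Table 2: meeting 3 of 4 applicable elements gives 5 × 3/4 = 3.75, and 2 of 6 gives 1.67.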

Following the intervention, each participant completed an overall assessment of the educational experience.

Data Analysis

We examined the frequency of responses for each variable and reviewed the distributions. The knowledge scores on the written pretests were not normally distributed; therefore, when making comparisons to the posttest, we used the Wilcoxon signed‐rank test. In comparing the performance scores on the consults across the 2 time periods, we compared the results with both the Wilcoxon signed‐rank test and paired t tests. Because the results were equivalent with both tests, the means from the t tests are shown. Data were analyzed using Stata version 8 (StataCorp, College Station, TX).
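A minimal sketch of the paired comparison, using only Python's standard library, is shown below (in practice a statistics package such as Stata, or scipy's `ttest_rel` and `wilcoxon`, would also supply p-values). The numbers here are the consultant-level mean scores from Table 2, used purely for illustration; the study itself compared individual consult scores:

```python
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic: mean per-pair difference over its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / len(diffs) ** 0.5)

# Illustration only: per-consultant mean consult scores (Table 2),
# not the per-consult data the study actually analyzed.
pre = [2.8, 2.6, 1.8, 3.3, 2.0, 3.1]
post = [3.4, 3.1, 3.0, 3.4, 3.3, 3.3]
print(round(paired_t(pre, post), 2))
```

The Wilcoxon signed-rank alternative ranks the same per-pair differences instead of averaging them, which is why it is preferred when scores are not normally distributed.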

Results

Study Subjects

Among the 14 hospitalist faculty members who were on staff during the study period, 7 were performing medical consults and therefore participated in the study. The 7 faculty members had a mean age of 35 years; 5 (71%) were female, and 5 (71%) were board‐certified in Internal Medicine. The average elapsed time since completion of residency was 5.1 years and average number of years practicing as a hospitalist was 3.8 years (Table 1).

Characteristics of the Faculty Members and House Officers Who Participated in the Study

Faculty (n = 7)
  Age in years, mean (SD): 35.57 (5.1)
  Female, n (%): 5 (71%)
  Board certified, n (%): 5 (71%)
  Years since completion of residency, mean (SD): 5.1 (4.4)
  Number of years in practice, mean (SD): 3.8 (2.9)
  Weeks spent in medical consult rotation, mean (SD): 3.7 (0.8)
  Have read consultation books, n (%): 5 (71%)
Housestaff (n = 11)
  Age in years, mean (SD): 29.1 (1.8)
  Female, n (%): 7 (64%)
  Residency year, n (%):
    PGY1: 0 (0%)
    PGY2: 2 (20%)
    PGY3: 7 (70%)
    PGY4: 1 (10%)
  Weeks spent in medical consult rotation, mean (SD): 1.5 (0.85)
  Have read consultation books, n (%): 5 (50%)

There were 12 house‐staff members who were on their medical consultation rotation during the study period and were exposed to the intervention. Of the 12 house‐staff members, 11 provided demographic information. Characteristics of the 11 house‐staff participants are also shown in Table 1.

Premodule vs. Postmodule Knowledge Assessment

Both faculty and house‐staff performed very well on the true/false pretest. Median scores improved slightly from pretest to posttest; the change was not statistically significant for the faculty (pretest: 11/14, posttest: 12/14; P = 0.08), but did reach statistical significance for the house‐staff (pretest: 10/14, posttest: 12/14; P = 0.03).

Audit and Feedback

Of the 7 faculty who participated in the study, 6 performed consults both before and after the intervention. Using the consult scoring system, the scores for all 6 physicians' consults improved after the intervention compared to their earlier consults (Table 2). For 1 faculty member, the consult scores were statistically significantly higher after the intervention (P = 0.017). When all consults completed by the hospitalists were compared before and after the training, there was statistically significant improvement in consult scores (P < 0.001) (Table 2).

Comparisons of Scores for the Consultations Performed Before and After the Intervention

Consultant | Preintervention scores* | Mean | Postintervention scores* | Mean | P value†
A          | 2, 3, 3.75, 3, 2.5      | 2.8  | 3, 3, 3, 4, 4            | 3.4  | 0.093
B          | 3, 3, 3, 3, 1           | 2.6  | 4, 3, 3, 2.5             | 3.1  | 0.18
C          | 2, 1.67                 | 1.8  | 4, 2, 3                  | 3.0  | 0.11
D          | 4, 2.5, 3.75, 2.5, 3.75 | 3.3  | 3.75, 3                  | 3.4  | 0.45
E          | 2, 3, 1, 2, 2           | 2.0  | 3, 3, 3.75               | 3.3  | 0.017
F          | 3, 3.75, 2.5, 4, 2      | 3.1  | 2, 3.75, 4, 4            | 3.3  | 0.27
All        |                         | 2.7  |                          | 3.3  | 0.0006

Preintervention n = 27; postintervention n = 21.
* Total possible score = 5.
† P value obtained using t test. Significance of results was equivalent when analyzed using the Wilcoxon signed‐rank test.
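The pooled "All" means can be recomputed directly from the raw scores listed in Table 2; a quick sanity-check sketch:

```python
# Raw consult scores transcribed from Table 2 (consultants A-F).
pre_scores = ([2, 3, 3.75, 3, 2.5] + [3, 3, 3, 3, 1] + [2, 1.67]
              + [4, 2.5, 3.75, 2.5, 3.75] + [2, 3, 1, 2, 2]
              + [3, 3.75, 2.5, 4, 2])
post_scores = ([3, 3, 3, 4, 4] + [4, 3, 3, 2.5] + [4, 2, 3]
               + [3.75, 3] + [3, 3, 3.75] + [2, 3.75, 4, 4])

assert len(pre_scores) == 27 and len(post_scores) == 21  # matches the table's n values
print(round(sum(pre_scores) / 27, 1), round(sum(post_scores) / 21, 1))  # 2.7 3.3
```

The consult counts (27 and 21) and the pooled means (2.7 and 3.3) agree with the table.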

Satisfaction with Consultation Curricula

All faculty and house‐staff participants felt that the intervention had an impact on them (19/19, 100%). Eighteen out of 19 participants (95%) would recommend the educational session to colleagues. After participating, 82% of learners felt confident in performing medical consultations. With respect to the audit and feedback process of reviewing their previously performed consultations, all physicians claimed that their written consultation notes would change in the future.

Discussion

This curricular intervention using a case‐based module combined with audit and feedback appears to have resulted not only in improved knowledge, but also changed physician behavior in the form of higher‐quality written consultations. The teaching sessions were also well received and valued by busy hospitalists.

A review of randomized trials of audit and feedback12 revealed that this strategy is effective in improving professional practice in a variety of areas, including laboratory overutilization,13, 14 clinical practice guideline adherence,15, 17 and antibiotic utilization.13 In 1 study, internal medicine specialists audited their consultation letters and most believed that there had been lasting improvements to their notes.18 However, this study did not objectively compare the consultation letters from before audit and feedback to those written afterward but instead relied solely on the respondents' self‐assessment. It is known that many residents and recent graduates of internal medicine programs feel inadequately prepared for the role of consultant.6, 8 This work describes a curricular intervention that served to augment physicians' confidence, knowledge, and actual performance in consultative medicine. Goldman et al.'s10 Ten Commandments for Effective Consultations, which were later modified by Salerno et al.,11 were highlighted in our case‐based teaching: determine the question being asked or how you can help the requesting physician, establish the urgency of the consultation, gather primary data, be as brief as appropriate in your report, provide specific recommendations, provide contingency plans and discuss their execution, define your role in conjunction with the requesting physician, offer educational information, communicate recommendations directly to the requesting physician, and provide daily follow‐up. These tenets informed the development of the consultation scoring system that was used to assess the quality of the written consultations produced by our consultant hospitalists.

Audit and feedback is similar to PBLI, one of the ACGME core competencies for residency training. Both attempt to engage individuals by having them analyze their patient care practices, looking critically to: (1) identify areas needing improvement, and (2) consider strategies that can be implemented to enhance clinical performance. We now show that consultative medicine is an area that appears to be responsive to a mixed methodological educational intervention that includes audit and feedback.

Faculty and house‐staff knowledge of consultative medicine was assessed both before and after the case‐based educational module. Both groups scored very highly on the true/false pretest, suggesting either that their knowledge was excellent at baseline or the test was not sufficiently challenging. If their knowledge was truly very high, then the intervention need not have focused on improving knowledge. It is our interpretation that the true/false knowledge assessment was not challenging enough and therefore failed to comprehensively characterize their knowledge of consultative medicine.

Several limitations of this study should be considered. First, the sample size was small, including only 7 faculty and 12 house‐staff members. However, these numbers were sufficient to show statistically significant overall improvements in both knowledge and on the consultation scores. Second, few consultations were performed by each faculty member, ranging from 2 to 5, before and after the intervention. This may explain why only 1 out of 6 faculty members showed statistically significant improvement in the quality of consults after the intervention. Third, the true/false format of the knowledge tests allowed the subjects to score very high on the pretest, thereby making it difficult to detect knowledge gained after the intervention. Fourth, the scale used to evaluate consults has not been previously validated. The elements assessed by this scale were decided upon based on guidance from the literature10 and the authors' expertise, thereby affording it content validity evidence.19 The recommendations that guided the scale's development have been shown to improve compliance with the recommendations put forth by the consultant.1, 11 Internal structure validity evidence was conferred by the high level of agreement in scores between the independent raters. Relation to other variables validity evidence may be considered because doctors D and F scored highest on this scale and they are the 2 physicians most experienced in consult medicine. Finally, the educational intervention was time‐intensive for both learners and teacher. It consisted of a 1 hour‐long one‐on‐one session. This can be difficult to incorporate into a busy hospitalist program. The intervention can be made more efficient by having learners take the web‐based module online independently, and then meeting with the teacher for the audit and feedback component.

This consult medicine curricular intervention involving audit and feedback was beneficial to hospitalists and resulted in improved consultation notes. While resource intensive, the one‐on‐one teaching session appears to have worked and resulted in outcomes that are meaningful with respect to patient care.

References
  1. Gross R, Caputo G. Kammerer and Gross' Medical Consultation: the Internist on Surgical, Obstetric, and Psychiatric Services. 3rd ed. Baltimore: Williams and Wilkins; 1998.
  2. Society of Hospital Medicine. Hospitalist as consultant. J Hosp Med. 2006;1(S1):70.
  3. Deyo R. The internist as consultant. Arch Intern Med. 1980;140:137–138.
  4. Byyny R, Siegler M, Tarlov A. Development of an academic section of general internal medicine. Am J Med. 1977;63(4):493–498.
  5. Moore R, Kammerer W, McGlynn T, Trautlein J, Burnside J. Consultations in internal medicine: a training program resource. J Med Educ. 1977;52(4):323–327.
  6. Devor M, Renvall M, Ramsdell J. Practice patterns and the adequacy of residency training in consultation medicine. J Gen Intern Med. 1993;8(10):554–560.
  7. Bomalaski J, Martin G, Webster J. General internal medicine consultation: the last bridge. Arch Intern Med. 1983;143:875–876.
  8. Plauth W, Pantilat S, Wachter R, Fenton C. Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247–254.
  9. Robie P. The service and educational contributions of a general medicine consultation service. J Gen Intern Med. 1986;1:225–227.
  10. Goldman L, Lee T, Rudd P. Ten commandments for effective consultations. Arch Intern Med. 1983;143:1753–1755.
  11. Salerno S, Hurst F, Halvorson S, Mercado D. Principles of effective consultation, an update for the 21st‐century consultant. Arch Intern Med. 2007;167:271–275.
  12. Jamtvedt G, Young J, Kristoffersen D, O'Brien M, Oxman A. Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback. Qual Saf Health Care. 2006;15:433–436.
  13. Miyakis S, Karamanof G, Liontos M, Mountokalakis T. Factors contributing to inappropriate ordering of tests in an academic medical department and the effect of an educational feedback strategy. Postgrad Med J. 2006;82:823–829.
  14. Winkens R, Pop P, Grol R, et al. Effects of routine individual feedback over nine years on general practitioners' requests for tests. BMJ. 1996;312:490.
  15. Kisuule F, Wright S, Barreto J, Zenilman J. Improving antibiotic utilization among hospitalists: a pilot academic detailing project with a public health approach. J Hosp Med. 2008;3(1):64–70.
  16. Feldman L, Minter‐Jordan M. The role of the medical consultant. Johns Hopkins Consultative Medicine Essentials for Hospitalists. Available at: http://www.jhcme.com/site/article.cfm?ID=8. Accessed April 2009.
  17. Hysong S, Best R, Pugh J. Audit and feedback and clinical practice guideline adherence: making feedback actionable. Implement Sci. 2006;1:9.
  18. Keely E, Myers K, Dojeiji S, Campbell C. Peer assessment of outpatient consultation letters—feasibility and satisfaction. BMC Med Educ. 2007;7:13.
  19. Beckman TJ, Cook DA, Mandrekar JN. What is the validity evidence for assessment of clinical teaching? J Gen Intern Med. 2005;20:1159–1164.
Issue
Journal of Hospital Medicine - 4(8)
Page Number
486-489
Legacy Keywords
audit and feedback, medical consultation, medical education

Discussion

This curricular intervention using a case‐based module combined with audit and feedback appears to have resulted not only in improved knowledge, but also changed physician behavior in the form of higher‐quality written consultations. The teaching sessions were also well received and valued by busy hospitalists.

A review of randomized trials of audit and feedback12 revealed that this strategy is effective in improving professional practice in a variety of areas, including laboratory overutilization,13, 14 clinical practice guideline adherence,15, 17 and antibiotic utilization.13 In 1 study, internal medicine specialists audited their consultation letters and most believed that there had been lasting improvements to their notes.18 However, this study did not objectively compare the consultation letters from before audit and feedback to those written afterward but instead relied solely on the respondents' self‐assessment. It is known that many residents and recent graduates of internal medicine programs feel inadequately prepared in the role of consultant.6, 8 This work describes a curricular intervention that served to augment confidence, knowledge, and actual performance in consultation medicine of physicians. Goldman et al.'s10 Ten Commandments for Effective Consultations, which were later modified by Salerno et al.,11 were highlighted in our case‐based teachings: determine the question being asked or how you can help the requesting physician, establish the urgency of the consultation, gather primary data, be as brief as appropriate in your report, provide specific recommendations, provide contingency plans and discuss their execution, define your role in conjunction with the requesting physician, offer educational information, communicate recommendations directly to the requesting physician, and provide daily follow‐up. These tenets informed the development of the consultation scoring system that was used to assess the quality of the written consultations produced by our consultant hospitalists.

Audit and feedback is similar to PBLI, one of the ACGME core competencies for residency training. Both attempt to engage individuals by having them analyze their patient care practices, looking critically to: (1) identify areas needing improvement, and (2) consider strategies that can be implemented to enhance clinical performance. We now show that consultative medicine is an area that appears to be responsive to a mixed methodological educational intervention that includes audit and feedback.

Faculty and house‐staff knowledge of consultative medicine was assessed both before and after the case‐based educational module. Both groups scored very highly on the true/false pretest, suggesting either that their knowledge was excellent at baseline or the test was not sufficiently challenging. If their knowledge was truly very high, then the intervention need not have focused on improving knowledge. It is our interpretation that the true/false knowledge assessment was not challenging enough and therefore failed to comprehensively characterize their knowledge of consultative medicine.

Several limitations of this study should be considered. First, the sample size was small, including only 7 faculty and 12 house‐staff members. However, these numbers were sufficient to show statistically significant overall improvements in both knowledge and on the consultation scores. Second, few consultations were performed by each faculty member, ranging from 2 to 5, before and after the intervention. This may explain why only 1 out of 6 faculty members showed statistically significant improvement in the quality of consults after the intervention. Third, the true/false format of the knowledge tests allowed the subjects to score very high on the pretest, thereby making it difficult to detect knowledge gained after the intervention. Fourth, the scale used to evaluate consults has not been previously validated. The elements assessed by this scale were decided upon based on guidance from the literature10 and the authors' expertise, thereby affording it content validity evidence.19 The recommendations that guided the scale's development have been shown to improve compliance with the recommendations put forth by the consultant.1, 11 Internal structure validity evidence was conferred by the high level of agreement in scores between the independent raters. Relation to other variables validity evidence may be considered because doctors D and F scored highest on this scale and they are the 2 physicians most experienced in consult medicine. Finally, the educational intervention was time‐intensive for both learners and teacher. It consisted of a 1 hour‐long one‐on‐one session. This can be difficult to incorporate into a busy hospitalist program. The intervention can be made more efficient by having learners take the web‐based module online independently, and then meeting with the teacher for the audit and feedback component.

This consult medicine curricular intervention involving audit and feedback was beneficial to hospitalists and resulted in improved consultation notes. While resource intensive, the one‐on‐one teaching session appears to have worked and resulted in outcomes that are meaningful with respect to patient care.

An important role of the internist is that of inpatient medical consultant.1, 3 As consultants, internists make recommendations regarding the patient's medical care and help the primary team to care for the patient. This requires familiarity with the body of knowledge of consultative medicine, as well as process skills that relate to working with teams of providers.1, 4, 5 For some physicians, the knowledge and skills of medical consultation are acquired during residency; however, many internists feel inadequately prepared for their roles as consultants.6-8 Because no specific requirements for medical consultation curricula during graduate medical education have been set forth, internists and other physicians do not receive uniform or comprehensive training in this area.3, 5-7, 9 Although internal medicine residents may gain experience while performing consultations on subspecialty rotations (eg, cardiology), the teaching on these blocks tends to be focused on the specialty content and less so on consultative principles.1, 4

As inpatient care is increasingly being taken over by hospitalists, the role of the hospitalist has expanded to include medical consultation. It is estimated that 92% of hospitalists care for patients on medical consultation services.8 The Society of Hospital Medicine (SHM) has also included medical consultation as one of the core competencies of the hospitalist.2 Therefore, it is essential that hospitalists master the knowledge and skills that are required to serve as effective consultants.10, 11

An educational strategy that has been shown to be effective in improving medical practice is audit and feedback.12-15 Providing physicians with feedback on their clinical practice has been shown to improve performance more than other educational methods.12 Practice‐based learning and improvement (PBLI) utilizes this strategy, and it has become one of the core competencies stressed by the Accreditation Council for Graduate Medical Education (ACGME). It involves analyzing one's patient care practices in order to identify areas for improvement. In this study, we tested the impact of a newly developed one‐on‐one medical consultation educational module, combined with audit and feedback, in an attempt to improve the quality of the consultations being performed by our hospitalists.

Materials and Methods

Study Design and Setting

This single group pre‐post educational intervention study took place at Johns Hopkins Bayview Medical Center (JHBMC), a 353‐bed university‐affiliated tertiary care medical center in Baltimore, MD, during the 2006‐2007 academic year.

Study Subjects

All 7 members of the hospitalist group at JHBMC who were serving on the medical consultation service during the study period participated. The internal medicine residents who elected to rotate on the consultation service during the study period were also exposed to the case‐based module component of the intervention.

Intervention

The educational intervention was delivered as a one‐on‐one session and lasted approximately 1 hour. The time was spent on the following activities:

  • A true‐false pretest to assess knowledge based on clinical scenarios (Appendix 1).

  • A case‐based module emphasizing the core principles of consultative medicine.16 The module was purposively designed to teach and stimulate thought around 3 complex general medical consultations. Each case is followed by questions about the scenario. The cases specifically address the role of the medical consultant and the ways to be most effective in this role, based on the recommendations of experts in the field.1, 10 Additional details about the content and format can be viewed at http://www.jhcme.com/site.16 As the physician worked through the teaching cases, the teacher facilitated discussion around wrong answers and issues that the learner wanted to discuss.

  • The true‐false test to assess knowledge was once again administered (the posttest was identical to the pretest).

  • For the hospitalist faculty members only (and not the residents), audit and feedback was utilized. The physician was shown 2 of his/her most recent consults and was asked to reflect upon the strengths and weaknesses of the consult. The hospitalist was explicitly asked to critique them in light of the knowledge they gained from the consultation module. The teacher also gave specific feedback, both positive and negative, about the written consultations with attention directed specifically toward: the number of recommendations, the specificity of the guidance (eg, exact dosing of medications), clear documentation of their name and contact information, and documentation that the suggestions were verbally passed on to the primary team.

 

Evaluation Data

Learner knowledge, both at baseline and after the case‐based module, was assessed using a written test.

Consultations performed before and after the intervention were compared. Copies of up to 5 consults done by each hospitalist during the year before or after the educational intervention were collected. Identifiers and dates were removed from the consults so that scorers did not know whether the consults were preintervention or postintervention. Consults were scored out of a possible total of 4 to 6 points, depending on whether specific elements were applicable. One point was given for each of the following: (1) number of recommendations ≤5; (2) specific details for all drugs listed (if applicable); (3) specific details for imaging studies suggested (if applicable); (4) specific follow‐up documented; (5) consultant's name clearly written; and (6) verbal contact with the referring team documented. These 6 elements were included based on expert recommendation.10 All consults were scored by 2 hospitalists independently. Disagreements in scores were infrequent (on <10% of the 48 consults scored), and the overall scores in question differed by only 1 point. The disagreements were settled by discussion and consensus. All consult scores were converted to a score out of 5 to allow comparisons to be made.
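The scoring approach described above can be sketched in code. This is a minimal illustration rather than the study's actual instrument: the element names are invented labels for the 6 items listed, and the rescaling assumes 1 point per applicable element, converted to a 5-point scale.

```python
# Hypothetical sketch of the consult scoring rubric; element names are
# illustrative labels for the 6 items described in the text.

def score_consult(elements: dict) -> float:
    """Score a consult note out of 5.

    `elements` maps each rubric item to True (met), False (not met),
    or None (not applicable); N/A items shrink the denominator.
    """
    applicable = {k: v for k, v in elements.items() if v is not None}
    raw = sum(applicable.values())              # 1 point per element met
    return round(raw / len(applicable) * 5, 2)  # convert to a score out of 5

consult = {
    "five_or_fewer_recommendations": True,
    "drug_details_specified": True,
    "imaging_details_specified": None,   # no imaging suggested, so N/A
    "followup_documented": False,
    "consultant_name_legible": True,
    "verbal_contact_documented": True,
}
print(score_consult(consult))  # 4 of 5 applicable elements met -> 4.0
```

Treating inapplicable elements as absent from the denominator mirrors the paper's variable 4-to-6-point totals rescaled to a common 5-point score.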

Following the intervention, each participant completed an overall assessment of the educational experience.

Data Analysis

We examined the frequency of responses for each variable and reviewed the distributions. The knowledge scores on the written pretests were not normally distributed and therefore when making comparisons to the posttest, we used the Wilcoxon rank signed test. In comparing the performance scores on the consults across the 2 time periods, we compared the results with both Wilcoxon rank signed test and paired t tests. Because the results were equivalent with both tests, the means from the t tests are shown. Data were analyzed using STATA version 8 (Stata Corp., College Station, TX).
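The paired comparisons described above can be sketched with SciPy. The scores here are made-up pre/post values for illustration only, not the study's data:

```python
# Illustrative paired comparison: Wilcoxon signed-rank test alongside a
# paired t test, as in the Data Analysis section.
from scipy import stats

pre  = [10, 11, 9, 12, 10, 11, 10]   # hypothetical pretest scores
post = [12, 12, 11, 13, 11, 12, 12]  # hypothetical posttest scores

w_stat, w_p = stats.wilcoxon(pre, post)   # nonparametric, paired
t_stat, t_p = stats.ttest_rel(pre, post)  # parametric, paired

print(f"Wilcoxon P = {w_p:.4f}; paired t P = {t_p:.4f}")
```

Running both tests and reporting the parametric means when the two agree, as the authors did, is a common way to handle small samples whose normality is in doubt.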

Results

Study Subjects

Among the 14 hospitalist faculty members who were on staff during the study period, 7 were performing medical consults and therefore participated in the study. The 7 faculty members had a mean age of 35 years; 5 (71%) were female, and 5 (71%) were board‐certified in Internal Medicine. The average elapsed time since completion of residency was 5.1 years and average number of years practicing as a hospitalist was 3.8 years (Table 1).

Characteristics of the Faculty Members and House Officers Who Participated in the Study
Faculty (n = 7)
Age in years, mean (SD): 35.57 (5.1)
Female, n (%): 5 (71%)
Board certified, n (%): 5 (71%)
Years since completion of residency, mean (SD): 5.1 (4.4)
Number of years in practice, mean (SD): 3.8 (2.9)
Weeks spent in medical consult rotation, mean (SD): 3.7 (0.8)
Have read consultation books, n (%): 5 (71%)
Housestaff (n = 11)
Age in years, mean (SD): 29.1 (1.8)
Female, n (%): 7 (64%)
Residency year, n (%):
  PGY1: 0 (0%)
  PGY2: 2 (20%)
  PGY3: 7 (70%)
  PGY4: 1 (10%)
Weeks spent in medical consult rotation, mean (SD): 1.5 (0.85)
Have read consultation books, n (%): 5 (50%)

There were 12 house‐staff members who were on their medical consultation rotation during the study period and were exposed to the intervention. Of the 12 house‐staff members, 11 provided demographic information. Characteristics of the 11 house‐staff participants are also shown in Table 1.

Premodule vs. Postmodule Knowledge Assessment

Both faculty and house‐staff performed very well on the true/false pretest. Median scores improved slightly from pretest to posttest; the change was not statistically significant for the faculty (pretest: 11/14, posttest: 12/14; P = 0.08) but was for the house‐staff (pretest: 10/14, posttest: 12/14; P = 0.03).

Audit and Feedback

Of the 7 faculty who participated in the study, 6 performed consults both before and after the intervention. Using the consult scoring system, the scores for all 6 physicians' consults improved after the intervention compared to their earlier consults (Table 2). For 1 faculty member, the consult scores were statistically significantly higher after the intervention (P = 0.017). When all consults completed by the hospitalists were compared before and after the training, there was statistically significant improvement in consult scores (P < 0.001) (Table 2).

Comparisons of Scores for the Consultations Performed Before and After the Intervention
Preintervention (n = 27) vs. postintervention (n = 21)
Consultant A: preintervention scores* 2, 3, 3.75, 3, 2.5 (mean 2.8); postintervention scores* 3, 3, 3, 4, 4 (mean 3.4); P = 0.093
Consultant B: preintervention 3, 3, 3, 3, 1 (mean 2.6); postintervention 4, 3, 3, 2.5 (mean 3.1); P = 0.18
Consultant C: preintervention 2, 1.67 (mean 1.8); postintervention 4, 2, 3 (mean 3.0); P = 0.11
Consultant D: preintervention 4, 2.5, 3.75, 2.5, 3.75 (mean 3.3); postintervention 3.75, 3 (mean 3.4); P = 0.45
Consultant E: preintervention 2, 3, 1, 2, 2 (mean 2.0); postintervention 3, 3, 3.75 (mean 3.3); P = 0.017
Consultant F: preintervention 3, 3.75, 2.5, 4, 2 (mean 3.1); postintervention 2, 3.75, 4, 4 (mean 3.3); P = 0.27
All consultants: preintervention mean 2.7; postintervention mean 3.3; P = 0.0006
  • Total possible score = 5.
  • P values obtained using t tests. Significance of results was equivalent when analyzed using the Wilcoxon signed-rank test.
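As a quick consistency check, the overall pre/post means reported in Table 2 can be recomputed from the individual consult scores listed there:

```python
# Recompute the overall pre/post means in Table 2 from the individual
# consult scores (consultants A through F).
from statistics import mean

pre = [2, 3, 3.75, 3, 2.5,        # A
       3, 3, 3, 3, 1,             # B
       2, 1.67,                   # C
       4, 2.5, 3.75, 2.5, 3.75,   # D
       2, 3, 1, 2, 2,             # E
       3, 3.75, 2.5, 4, 2]        # F
post = [3, 3, 3, 4, 4,            # A
        4, 3, 3, 2.5,             # B
        4, 2, 3,                  # C
        3.75, 3,                  # D
        3, 3, 3.75,               # E
        2, 3.75, 4, 4]            # F

print(len(pre), round(mean(pre), 1))    # 27 consults, mean 2.7
print(len(post), round(mean(post), 1))  # 21 consults, mean 3.3
```

The counts (27 and 21) and the rounded means (2.7 and 3.3) match the "All" row of the table.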

Satisfaction with Consultation Curricula

All faculty and house‐staff participants felt that the intervention had an impact on them (19/19, 100%). Eighteen out of 19 participants (95%) would recommend the educational session to colleagues. After participating, 82% of learners felt confident in performing medical consultations. With respect to the audit and feedback process of reviewing their previously performed consultations, all physicians claimed that their written consultation notes would change in the future.

Discussion

This curricular intervention, using a case‐based module combined with audit and feedback, appears to have resulted not only in improved knowledge but also in changed physician behavior, in the form of higher‐quality written consultations. The teaching sessions were also well received and valued by busy hospitalists.

A review of randomized trials of audit and feedback12 revealed that this strategy is effective in improving professional practice in a variety of areas, including laboratory overutilization,13, 14 clinical practice guideline adherence,15, 17 and antibiotic utilization.13 In 1 study, internal medicine specialists audited their consultation letters, and most believed that there had been lasting improvements to their notes.18 However, that study did not objectively compare the consultation letters from before audit and feedback to those written afterward, relying instead solely on the respondents' self‐assessment. It is known that many residents and recent graduates of internal medicine programs feel inadequately prepared for the role of consultant.6, 8 This work describes a curricular intervention that served to augment physicians' confidence, knowledge, and actual performance in consultative medicine. Goldman et al.'s10 Ten Commandments for Effective Consultations, later modified by Salerno et al.,11 were highlighted in our case‐based teaching: (1) determine the question being asked or how you can help the requesting physician; (2) establish the urgency of the consultation; (3) gather primary data; (4) be as brief as appropriate in your report; (5) provide specific recommendations; (6) provide contingency plans and discuss their execution; (7) define your role in conjunction with the requesting physician; (8) offer educational information; (9) communicate recommendations directly to the requesting physician; and (10) provide daily follow‐up. These tenets informed the development of the consultation scoring system that was used to assess the quality of the written consultations produced by our consultant hospitalists.

Audit and feedback is similar to PBLI, one of the ACGME core competencies for residency training. Both attempt to engage individuals by having them analyze their patient care practices, looking critically to: (1) identify areas needing improvement, and (2) consider strategies that can be implemented to enhance clinical performance. We now show that consultative medicine is an area that appears to be responsive to a mixed‐method educational intervention that includes audit and feedback.

Faculty and house‐staff knowledge of consultative medicine was assessed both before and after the case‐based educational module. Both groups scored very highly on the true/false pretest, suggesting either that their knowledge was excellent at baseline or the test was not sufficiently challenging. If their knowledge was truly very high, then the intervention need not have focused on improving knowledge. It is our interpretation that the true/false knowledge assessment was not challenging enough and therefore failed to comprehensively characterize their knowledge of consultative medicine.

Several limitations of this study should be considered. First, the sample size was small, including only 7 faculty and 12 house‐staff members. However, these numbers were sufficient to show statistically significant overall improvements in both knowledge and consultation scores. Second, each faculty member performed few consultations, ranging from 2 to 5, before and after the intervention. This may explain why only 1 of the 6 faculty members showed statistically significant improvement in the quality of consults after the intervention. Third, the true/false format of the knowledge tests allowed the subjects to score very high on the pretest, thereby making it difficult to detect knowledge gained after the intervention. Fourth, the scale used to evaluate consults has not been previously validated. The elements assessed by this scale were decided upon based on guidance from the literature10 and the authors' expertise, thereby affording it content validity evidence.19 The recommendations that guided the scale's development have been shown to improve compliance with the recommendations put forth by the consultant.1, 11 Internal structure validity evidence was conferred by the high level of agreement in scores between the independent raters. Relation to other variables validity evidence may be considered because doctors D and F scored highest on this scale, and they are the 2 physicians most experienced in consult medicine. Finally, the educational intervention was time‐intensive for both learners and teacher, consisting of an hour‐long one‐on‐one session. This can be difficult to incorporate into a busy hospitalist program. The intervention could be made more efficient by having learners complete the web‐based module online independently and then meet with the teacher for the audit and feedback component.

This consult medicine curricular intervention involving audit and feedback was beneficial to hospitalists and resulted in improved consultation notes. While resource intensive, the one‐on‐one teaching session appears to have worked and resulted in outcomes that are meaningful with respect to patient care.

References
  1. Gross R, Caputo G. Kammerer and Gross' Medical Consultation: The Internist on Surgical, Obstetric, and Psychiatric Services. 3rd ed. Baltimore: Williams and Wilkins; 1998.
  2. Society of Hospital Medicine. Hospitalist as consultant. J Hosp Med. 2006;1(S1):70.
  3. Deyo R. The internist as consultant. Arch Intern Med. 1980;140:137-138.
  4. Byyny R, Siegler M, Tarlov A. Development of an academic section of general internal medicine. Am J Med. 1977;63(4):493-498.
  5. Moore R, Kammerer W, McGlynn T, Trautlein J, Burnside J. Consultations in internal medicine: a training program resource. J Med Educ. 1977;52(4):323-327.
  6. Devor M, Renvall M, Ramsdell J. Practice patterns and the adequacy of residency training in consultation medicine. J Gen Intern Med. 1993;8(10):554-560.
  7. Bomalaski J, Martin G, Webster J. General internal medicine consultation: the last bridge. Arch Intern Med. 1983;143:875-876.
  8. Plauth W, Pantilat S, Wachter R, Fenton C. Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247-254.
  9. Robie P. The service and educational contributions of a general medicine consultation service. J Gen Intern Med. 1986;1:225-227.
  10. Goldman L, Lee T, Rudd P. Ten commandments for effective consultations. Arch Intern Med. 1983;143:1753-1755.
  11. Salerno S, Hurst F, Halvorson S, Mercado D. Principles of effective consultation, an update for the 21st-century consultant. Arch Intern Med. 2007;167:271-275.
  12. Jamtvedt G, Young J, Kristoffersen D, O'Brien M, Oxman A. Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback. Qual Saf Health Care. 2006;15:433-436.
  13. Miyakis S, Karamanof G, Liontos M, Mountokalakis T. Factors contributing to inappropriate ordering of tests in an academic medical department and the effect of an educational feedback strategy. Postgrad Med J. 2006;82:823-829.
  14. Winkens R, Pop P, Grol R, et al. Effects of routine individual feedback over nine years on general practitioners' requests for tests. BMJ. 1996;312:490.
  15. Kisuule F, Wright S, Barreto J, Zenilman J. Improving antibiotic utilization among hospitalists: a pilot academic detailing project with a public health approach. J Hosp Med. 2008;3(1):64-70.
  16. Feldman L, Minter‐Jordan M. The role of the medical consultant. Johns Hopkins Consultative Medicine Essentials for Hospitalists. Available at: http://www.jhcme.com/site/article.cfm?ID=8. Accessed April 2009.
  17. Hysong S, Best R, Pugh J. Audit and feedback and clinical practice guideline adherence: making feedback actionable. Implement Sci. 2006;1:9.
  18. Keely E, Myers K, Dojeiji S, Campbell C. Peer assessment of outpatient consultation letters—feasibility and satisfaction. BMC Med Educ. 2007;7:13.
  19. Beckman TJ, Cook DA, Mandrekar JN. What is the validity evidence for assessment of clinical teaching? J Gen Intern Med. 2005;20:1159-1164.
Issue
Journal of Hospital Medicine - 4(8)
Page Number
486-489
Display Headline
A case‐based teaching module combined with audit and feedback to improve the quality of consultations
Legacy Keywords
audit and feedback, medical consultation, medical education
Copyright © 2009 Society of Hospital Medicine
Correspondence Location
The Collaborative Inpatient Medicine Service (CIMS), Johns Hopkins Bayview Medical Center, 5200 Eastern Ave., MFL West, 6th Floor, Baltimore, MD 21224