Affiliations
Division of Hospital Medicine, University of California San Francisco
Email
sharpeb@medicine.ucsf.edu
Given name(s)
Bradley A.
Family name
Sharpe
Degrees
MD

Training Residents in Hospital Medicine: The Hospitalist Elective National Survey

Hospital medicine has become the fastest growing medicine subspecialty, though no standardized hospitalist-focused educational program is required to become a practicing adult medicine hospitalist.1 Historically, adult hospitalists have had little additional training beyond residency, yet as residency training adapts to duty hour restrictions, patient caps, and increasing attending oversight, it is not clear whether traditional rotations and curricula provide adequate preparation for independent practice as an adult hospitalist.2-5 Several types of training and educational programs have emerged to fill this potential gap, including hospital medicine fellowships, residency pathways, early career faculty development programs (eg, the Society of Hospital Medicine/Society of General Internal Medicine-sponsored Academic Hospitalist Academy), and hospitalist-focused resident rotations.6-10 These activities are intended to ensure that residents and early career physicians gain the skills and competencies required to practice hospital medicine effectively.

Hospital medicine fellowships, residency pathways, and faculty development have been described previously.6-8 However, the prevalence and characteristics of hospital medicine-focused resident rotations are unknown, and these rotations are rarely publicized beyond local residency programs. Our study aims to determine the prevalence, purpose, and function of hospitalist-focused rotations within residency programs and explore the role these rotations have in preparing residents for a career in hospital medicine.

METHODS

Study Design, Setting, and Participants

We conducted a cross-sectional study of the 100 largest Accreditation Council for Graduate Medical Education (ACGME) internal medicine residency programs. We chose the largest programs because we hypothesized that they would be the most likely to have the infrastructure to support hospital medicine-focused rotations. The UCSF Committee on Human Research approved this study.

Survey Development

We developed a study-specific survey (the Hospitalist Elective National Survey [HENS]) to assess the prevalence, structure, curricular goals, and perceived benefits of distinct hospitalist rotations as defined by individual residency programs. The survey prompted respondents to consider a “hospitalist-focused” rotation as one that differs from a traditional inpatient “ward” rotation and whose emphasis is on hospitalist-specific training, clinical skills, or career development. The 18-question survey (Appendix 1) included fixed-choice, multiple-choice, and open-ended items.

Data Collection

Using publicly available data from the ACGME website (www.acgme.org), we identified the 100 largest internal medicine programs based on the total number of residents; these programs had 81 or more residents. An electronic survey was e-mailed to the leadership of each program. In May 2015, surveys were sent to residency program directors (PDs); if a PD did not respond after 2 attempts, the associate program director (APD) was contacted twice. If neither responded, the survey was sent to the residency program administrator or hospital medicine division chief. Only one survey was completed per site.

Data Analysis

We used descriptive statistics to summarize quantitative data. Responses to open-ended questions about the goals, strengths, and design of rotations were analyzed using thematic analysis.11 During analysis, we iteratively developed and refined codes that identified important concepts emerging from the data. Two members of the research team trained in qualitative data analysis (S.L. and J.H.) coded these data independently.
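As an illustration of the quantitative analysis, a minimal sketch of the descriptive summary is shown below; the response records and field names are hypothetical placeholders, not the HENS data or instrument.

```python
# Minimal sketch of the descriptive summary described above; the survey
# responses here are hypothetical placeholders, not the HENS data.
from statistics import mean, stdev

# One record per responding program: whether it reports a hospitalist-focused
# rotation and, if so, how many years the rotation has existed.
responses = [
    {"has_rotation": True, "years_in_existence": 3},
    {"has_rotation": False, "years_in_existence": None},
    {"has_rotation": True, "years_in_existence": 8},
    # ... one entry per responding program
]

with_rotation = [r for r in responses if r["has_rotation"]]
prevalence = len(with_rotation) / len(responses)

years = [r["years_in_existence"] for r in with_rotation]
print(f"Prevalence of hospitalist-focused rotations: {prevalence:.0%}")
print(f"Rotation age: mean {mean(years):.2f} years (SD {stdev(years):.1f})")
```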

RESULTS

Eighty-two residency program leaders (53 PDs, 19 APDs, 10 chiefs/administrators) responded to the survey (82% response rate). Among all respondents, the prevalence of hospitalist-focused rotations was 50% (41/82). Of these 41 rotations, 85% (35/41) were elective and 15% (6/41) were mandatory. The rotations had been in existence for 1 to 15 years (mean 4.78 years, SD 3.5).

Of the 41 programs without a hospital medicine-focused rotation, the key barriers identified were the lack of a well-defined model (29%), low faculty interest (15%), low resident interest (12%), and lack of funding (5%). Despite these barriers, 9 of these 41 programs (22%) planned to start a rotation in the future, and 3 (7%) planned to start one within the year.


Of the 41 established rotations, most were 1 month in duration (31/41, 76%), and most enrolled second-year (30/41, 73%) and/or third-year residents (32/41, 78%). In addition to clinical work, most rotations had a nonclinical component that included teaching, research/scholarship, and/or work on quality improvement or patient safety (Table 1). Clinical activities, nonclinical activities, and curricular elements varied across institutions (Table 1).

Most programs with rotations (39/41, 95%) reported that their hospitalist rotation filled at least one gap in the traditional residency curriculum. The most frequently identified gaps were allowing progressive clinical autonomy (59%, 24/41), learning about quality improvement and high-value care (41%, 17/41), and preparing to become a practicing hospitalist (39%, 16/41). Most respondents (66%, 27/41) reported that the rotation helped prepare trainees for their first year as an attending.

Results of the thematic analysis of the goals, strengths, and design of rotations are shown in Table 2. Five themes emerged, relating to autonomy, mentorship, hospitalist skills, real-world experience, and training and curriculum gaps. These themes describe how the rotations promote career preparation and skill development.

DISCUSSION

The Hospitalist Elective National Survey provides insight into a growing component of hospitalist-focused training and preparation. Fifty percent of the responding ACGME residency programs had a hospitalist-focused rotation. Rotation characteristics were heterogeneous, perhaps reflecting both the homegrown nature of their development and the lack of national data to guide what constitutes an “ideal” rotation. Common functions of the rotations included providing career mentorship and allowing trainees to gain experience “being a hospitalist.” Other key elements included providing additional clinical autonomy and teaching material outside traditional residency curricula, such as quality improvement, patient safety, billing, and healthcare finances.

Prior research has explored other training for hospitalists, such as fellowships, residency pathways, and faculty development.6-8 A hospital medicine fellowship provides extensive training, but because fellowship training is not required to practice adult hospital medicine (unlike in pediatric hospital medicine), its impact may be limited by its scale.12,13 Longitudinal hospitalist residency pathways provide comprehensive skill development but often require an early career commitment from trainees.7 Faculty development can be another tool to foster career growth, though it requires local investment from hospitalist groups that may not have the resources or experience to support it.8 Our study highlights that hospitalist-focused rotations within residency programs can help train physicians for a career in hospital medicine. Hospitalist and residency leaders should consider that these rotations may be the only hospital medicine-focused training that new hospitalists receive. Given the variable nature of these rotations nationally, developing standards around core hospitalist competencies within them should be a key component of career preparation and a goal for the field at large.14,15

Our study has limitations. The survey focused only on internal medicine because it is the most common training background of hospitalists; however, the field has grown to include other specialties, including pediatrics, neurology, family medicine, and surgery. In addition, the survey targeted the largest ACGME internal medicine programs to best evaluate prevalence and content; smaller programs may have rotations with different characteristics that we have not captured. Lastly, the survey examined the rotations through the lens of residency program leadership rather than trainees. A future survey of trainees or early career hospitalists who participated in these rotations could provide a better understanding of their achievements and effectiveness.

CONCLUSION

We anticipate that the demand for hospitalist-focused training will continue to grow as more residents seek to enter the specialty. Hospitalist and residency program leaders have an opportunity to build new hospital medicine-focused rotations or to further develop existing ones within residency training programs. The HENS survey demonstrates that hospitalist-focused rotations are prevalent in residency education and have the potential to play an important role in hospitalist training.

Disclosure

The authors declare no conflicts of interest in relation to this manuscript.

Files
References

1. Wachter RM, Goldman L. Zero to 50,000 – The 20th Anniversary of the Hospitalist. N Engl J Med. 2016;375:1009-1011.
2. Glasheen JJ, Siegal EM, Epstein K, Kutner J, Prochazka AV. Fulfilling the promise of hospital medicine: tailoring internal medicine training to address hospitalists’ needs. J Gen Intern Med. 2008;23:1110-1115.
3. Glasheen JJ, Goldenberg J, Nelson JR. Achieving hospital medicine’s promise through internal medicine residency redesign. Mt Sinai J Med. 2008;5:436-441.
4. Plauth WH 3rd, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists’ perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111:247-254.
5. Kumar A, Smeraglio A, Witteles R, Harman S, Nallamshetty S, Rogers A, Harrington R, Ahuja N. A resident-created hospitalist curriculum for internal medicine housestaff. J Hosp Med. 2016;11:646-649.
6. Ranji SR, Rosenman DJ, Amin AN, Kripalani S. Hospital medicine fellowships: works in progress. Am J Med. 2006;119(1):72.e1-7.
7. Sweigart JR, Tad-Y D, Kneeland P, Williams MV, Glasheen JJ. Hospital medicine resident training tracks: developing the hospital medicine pipeline. J Hosp Med. 2017;12:173-176.
8. Sehgal NL, Sharpe BA, Auerbach AA, Wachter RM. Investing in the future: building an academic hospitalist faculty development program. J Hosp Med. 2011;6:161-166.
9. Academic Hospitalist Academy. Course Description, Objectives and Society Sponsorship. Available at: https://academichospitalist.org/. Accessed August 23, 2017.
10. Amin AN. A successful hospitalist rotation for senior medicine residents. Med Educ. 2003;37:1042.
11. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77-101.
12. American Board of Medical Specialties. ABMS Officially Recognizes Pediatric Hospital Medicine Subspecialty Certification. Available at: http://www.abms.org/news-events/abms-officially-recognizes-pediatric-hospital-medicine-subspecialty-certification/. Accessed August 23, 2017.
13. Wiese J. Residency training: beginning with the end in mind. J Gen Intern Med. 2008;23(7):1122-1123.
14. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1 Suppl 1:48-56.
15. Nichani S, Crocker J, Fitterman N, Lukela M. Updating the core competencies in hospital medicine – 2017 revision: introduction and methodology. J Hosp Med. 2017;12(4):283-287.

Issue
Journal of Hospital Medicine 13(9)
Page Number
623-625. Published online first March 26, 2018

Article Source

© 2018 Society of Hospital Medicine

Correspondence Location
Steven Ludwin, MD, Assistant Professor of Medicine, Division of Hospital Medicine, University of California, San Francisco, 533 Parnassus Avenue, Box 0131, San Francisco, CA 94113; Telephone: 415-476-4814; Fax: 415-502-1963; E-mail: Steven.Ludwin@ucsf.edu

Update in Hospital Medicine: Practical Lessons from the Literature

The practice of hospital medicine continues to grow in its scope and complexity. The authors of this article conducted a review of the literature published between March 2016 and March 2017. The key articles selected were of high methodological quality, had clear findings, and had a high potential to impact clinical practice. The presentation teams (B.A.S. and A.B. at SGIM; R.E.T. and C.M. at SHM) selected 20 articles to present at the Update in Hospital Medicine sessions at the 2017 Society of Hospital Medicine (SHM) and Society of General Internal Medicine (SGIM) annual meetings. Through an iterative voting process, 9 articles were selected for inclusion in this review: each author ranked their top 5 articles from 1 to 5, the points were tallied for each article, and the 5 articles with the most points were included; a second round of voting identified the remaining 4 articles. Each article is summarized below, and the key points are highlighted in Table 1.
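As a rough sketch of how such a first voting round could be tallied (not taken from the authors’ methods), the example below assumes a rank-to-point weighting of 5 points for rank 1 down to 1 point for rank 5; the reviewer and article identifiers are placeholders.

```python
# Sketch of the first voting round described above. The rank-to-point mapping
# (rank 1 = 5 points ... rank 5 = 1 point) is an assumption; the article only
# says ranks were converted to points and tallied.
from collections import defaultdict

# Each reviewer's top-5 list, best first (article IDs are placeholders).
ballots = {
    "reviewer_A": ["art3", "art7", "art1", "art9", "art2"],
    "reviewer_B": ["art7", "art3", "art5", "art1", "art8"],
    "reviewer_C": ["art1", "art3", "art7", "art4", "art6"],
    "reviewer_D": ["art3", "art1", "art2", "art7", "art5"],
}

scores = defaultdict(int)
for ranking in ballots.values():
    for position, article in enumerate(ranking):
        scores[article] += 5 - position  # rank 1 earns 5 points, rank 5 earns 1

top_five = sorted(scores, key=scores.get, reverse=True)[:5]
print("Selected in round one:", top_five)
```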

ESSENTIAL PUBLICATIONS

Prevalence of Pulmonary Embolism among Patients Hospitalized for Syncope. Prandoni P et al. New England Journal of Medicine, 2016;375(16):1524-31.1

Background

Pulmonary embolism (PE), a potentially fatal disease, is rarely considered a likely cause of syncope. To determine the prevalence of PE among patients presenting with their first episode of syncope, the authors performed a systematic workup for PE in adult patients admitted for syncope at 11 hospitals in Italy.

Findings

Of the 2584 patients who presented to the emergency department (ED) with syncope during the study, 560 were admitted and met the inclusion criteria. A modified Wells score was applied, and a D-dimer was measured in every hospitalized patient. Those with a high pretest probability (Wells score of 4.0 or higher) or a positive D-dimer underwent further testing for PE by CT scan, ventilation-perfusion scan, or autopsy. Ninety-seven of the 560 patients admitted for syncope were found to have a PE (17%). One in 4 patients (25%) with no clear cause for syncope was found to have a PE, and 1 in 4 patients with PE had no tachycardia, tachypnea, hypotension, or clinical signs of DVT.
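A minimal sketch of the workup rule as summarized above is shown below; the function and parameter names are illustrative assumptions, not the study protocol.

```python
# Sketch of the workup rule summarized above: admitted syncope patients with a
# modified Wells score of 4.0 or higher or a positive D-dimer went on to CT,
# ventilation-perfusion scanning, or (rarely) autopsy. Names are illustrative.
def needs_pe_imaging(modified_wells_score: float, d_dimer_positive: bool) -> bool:
    """Return True if PE cannot be excluded on clinical grounds alone."""
    return modified_wells_score >= 4.0 or d_dimer_positive

# Example: low clinical probability but a positive D-dimer still triggers imaging.
print(needs_pe_imaging(modified_wells_score=2.0, d_dimer_positive=True))   # True
print(needs_pe_imaging(modified_wells_score=1.0, d_dimer_positive=False))  # False
```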

Cautions

Nearly 72% of the patients with common explanations for syncope, such as vasovagal syncope, drug-induced syncope, or volume depletion, were discharged from the ED and were not included in the study. The authors focused on the prevalence of PE, and a causal link between PE and syncope is not clear for each patient. Among patients diagnosed by CT, only 67% of the PEs were in a main pulmonary or lobar artery; the other 33% were segmental or subsegmental. Of those diagnosed by ventilation-perfusion scan, half had 25% or more of the area of both lungs involved, and half had less extensive involvement. It is also important to note that 75% of the patients admitted to the hospital in this study were 70 years of age or older.

Implications

After common diagnoses are ruled out, it is important to consider pulmonary embolism in patients hospitalized with syncope. Providers should calculate a Wells score and measure a D-dimer to guide decision making.

Assessing the Risks Associated with MRI in Patients with a Pacemaker or Defibrillator. Russo RJ et al. New England Journal of Medicine, 2017;376(8):755-64.2

Background

Magnetic resonance imaging (MRI) in patients with implantable cardiac devices is considered a safety risk because of the potential for cardiac lead heating and subsequent myocardial injury or alteration of pacing properties. Although manufacturers have developed “MRI-conditional” devices designed to reduce these risks, 2 million people in the United States and 6 million people worldwide still have “non–MRI-conditional” devices. The authors evaluated event rates in patients with non–MRI-conditional devices undergoing MRI.

Findings

The authors prospectively followed 1500 adults with cardiac devices placed since 2001 who received nonthoracic MRIs according to a specific protocol (available in the supplemental materials published with this article in the New England Journal of Medicine). Of the 1000 patients with pacemakers only, they observed 5 atrial arrhythmias and 6 electrical resets. Of the 500 patients with implantable cardioverter-defibrillators (ICDs), they observed 1 atrial arrhythmia and 1 generator failure (although this case had deviated from the protocol). All of the atrial arrhythmias were self-terminating. No deaths, lead failures requiring immediate replacement, losses of capture, or ventricular arrhythmias were observed.

Cautions

Patients who were pacing dependent were excluded. No devices implanted before 2001 were included, and the MRIs performed were only 1.5 Tesla (a lower field strength than the also-available 3 Tesla scanners).

Implications

Following the protocol outlined in this article, it is safe to proceed with 1.5 Tesla nonthoracic MRIs in patients with non–MRI-conditional cardiac devices implanted since 2001.

Culture If Spikes? Indications and Yield of Blood Cultures in Hospitalized Medical Patients. Linsenmeyer K et al. Journal of Hospital Medicine, 2016;11(5):336-40.3

Background

Blood cultures are frequently drawn for the evaluation of an inpatient fever. This “culture if spikes” approach may lead to unnecessary testing and false positive results. In this study, the authors evaluated rates of true positive and false positive blood cultures in the setting of an inpatient fever.

Findings

Patients hospitalized on the general medicine or cardiology floors at a Veterans Affairs teaching hospital were prospectively followed over 7 months. A total of 576 blood cultures were ordered for 323 unique patients. The patients were older (average age, 70 years) and predominantly male (94%). The true-positive rate for cultures, determined by consensus between the microbiology and infectious disease departments based on a review of clinical and laboratory data, was 3.6%, compared with a false-positive rate of 2.3%. Clinical characteristics associated with a higher likelihood of a true positive included a culture ordered as follow-up of a previous culture (likelihood ratio [LR] 3.4), a working diagnosis of bacteremia or endocarditis (LR 3.7), and the combination of fever and leukocytosis in a patient not on antibiotics (LR 5.6).
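As a worked illustration of how these likelihood ratios shift probability, the calculation below uses the overall 3.6% true-positive rate as a rough pretest probability together with the LR of 5.6 for fever plus leukocytosis off antibiotics; this pairing is illustrative and is not a calculation reported in the study.

```python
# Worked example of applying a likelihood ratio: the study's overall
# true-positive rate (3.6%) is used as a rough pretest probability, combined
# with the LR of 5.6 for fever plus leukocytosis in a patient not on
# antibiotics. Illustrative only; the study reports the LRs, not this figure.
def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

p = posttest_probability(pretest_prob=0.036, likelihood_ratio=5.6)
print(f"Posttest probability of a true positive: {p:.1%}")  # roughly 17%
```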

Cautions

This study was performed at a single center with patients on the medicine and cardiology services; thus, the data reflect clinical practice patterns specific to that site.

Implications

Reflexive ordering of blood cultures for inpatient fever is low yield, with a false-positive rate that approximates the true-positive rate. A large number of patients are tested unnecessarily, and among those with positive tests, physicians are about as likely to be misled by a false positive as to identify a true pathogen. The positive predictive value of blood cultures improves when they are drawn from patients who are not on antibiotics and who have a specific diagnosis, such as pneumonia, previous bacteremia, or suspected endocarditis.

Incidence of and Risk Factors for Chronic Opioid Use among Opioid-Naive Patients in the Postoperative Period. Sun EC et al. JAMA Internal Medicine, 2016;176(9):1286-93.4

Background

Each day in the United States, 650,000 opioid prescriptions are filled, and 78 people suffer an opioid-related death. Opioids are frequently prescribed for inpatient management of postoperative pain. In this study, the authors compared the development of chronic opioid use between patients who had undergone surgery and those who had not.

Findings

This was a retrospective analysis of a nationwide insurance claims database. A total of 641,941 opioid-naive patients who underwent 1 of 11 designated surgeries during the study period were compared with 18,011,137 opioid-naive patients who did not undergo surgery. Chronic opioid use was defined as filling 10 or more prescriptions or receiving more than a 120-day supply between 90 and 365 days after surgery (or after an assigned faux surgical date for those not having surgery). This outcome was observed in a small proportion of the surgical patients (less than 0.5%). However, several procedures were associated with increased odds of postoperative chronic opioid use, including simple mastectomy (odds ratio [OR] 2.65), cesarean delivery (OR 1.28), open appendectomy (OR 1.69), open and laparoscopic cholecystectomy (ORs 3.60 and 1.62, respectively), and total hip and total knee arthroplasty (ORs 2.52 and 5.10, respectively). Male sex, age greater than 50 years, preoperative use of benzodiazepines or antidepressants, and a history of drug abuse were also associated with increased odds.
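A minimal sketch of this operational definition of chronic postoperative opioid use is shown below; the data structure and field names are illustrative assumptions rather than the study’s claims schema.

```python
# Sketch of the chronic-use definition summarized above (>=10 fills or >120-day
# supply between 90 and 365 days after surgery). Field names are illustrative;
# the study applied this definition to insurance claims data.
from datetime import date

def is_chronic_postop_use(surgery_date: date, fills: list[dict]) -> bool:
    """fills: each item has a 'fill_date' (date) and 'days_supplied' (int)."""
    window = [
        f for f in fills
        if 90 <= (f["fill_date"] - surgery_date).days <= 365
    ]
    total_days_supplied = sum(f["days_supplied"] for f in window)
    return len(window) >= 10 or total_days_supplied > 120

example_fills = [{"fill_date": date(2016, 7, 1), "days_supplied": 30}] * 5
print(is_chronic_postop_use(date(2016, 3, 1), example_fills))  # True: 150-day supply
```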

Cautions

This study was limited by its reliance on claims data and by the fact that the nonsurgical population was inherently different from the surgical population in ways that could lead to confounding.

Implications

In perioperative care, there is a need to focus on multimodal approaches to pain and to implement opioid-reducing and opioid-sparing strategies that might include options such as acetaminophen, NSAIDs, neuropathic pain medications, and lidocaine patches. Moreover, at discharge, careful consideration should be given to the quantity and duration of postoperative opioids prescribed.

Rapid Rule-out of Acute Myocardial Infarction with a Single High-Sensitivity Cardiac Troponin T Measurement below the Limit of Detection: A Collaborative Meta-Analysis. Pickering JW et al. Annals of Internal Medicine, 2017;166:715-24.5

Background

High-sensitivity cardiac troponin T testing (hs-cTnT) is now available in the United States. Studies have found that it can play a significant role in the rapid rule-out of acute myocardial infarction (AMI).

Findings

In this meta-analysis, the authors identified 11 studies with 9241 participants that prospectively evaluated patients presenting to the emergency department (ED) with chest pain who underwent an ECG and had hs-cTnT drawn. A total of 30% of the patients were classified as low risk, with a negative hs-cTnT and a negative ECG (defined as no ST changes or T-wave inversions indicative of ischemia). Among the low-risk patients, only 14 of 2825 (0.5%) had AMI according to the Global Task Force definition.6 Seven of these occurred in patients with hs-cTnT drawn within 3 hours of chest pain onset. The pooled negative predictive value was 99.0% (CI, 93.8%-99.8%).
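As a quick arithmetic check of these figures, the unweighted miss rate and corresponding negative predictive value among the pooled low-risk patients can be computed as follows; the published 99.0% estimate is study-weighted, so it differs slightly from this crude calculation.

```python
# Crude check of the numbers above: 14 missed AMIs among 2825 patients
# classified as low risk. The published pooled NPV (99.0%) is study-weighted,
# so it is a little lower than this unweighted calculation.
low_risk_patients = 2825
missed_ami = 14

miss_rate = missed_ami / low_risk_patients
raw_npv = 1 - miss_rate
print(f"Miss rate: {miss_rate:.2%}")      # about 0.50%
print(f"Unweighted NPV: {raw_npv:.2%}")   # about 99.50%
```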

Cautions

The heterogeneity among the studies in this meta-analysis, especially in their exclusion criteria, warrants careful consideration when implementing this approach in new settings. A more sensitive test will also yield more positive troponins because of its lower limit of detection, so medical teams and institutions need to plan accordingly. Caution should be taken for any patient presenting within 3 hours of chest pain onset.

Implications

Rapid rule-out protocols—which include clinical evaluation, a negative ECG, and a negative high-sensitivity cardiac troponin—identify a large proportion of low-risk patients who are unlikely to have a true AMI.

Prevalence and Localization of Pulmonary Embolism in Unexplained Acute Exacerbations of COPD: A Systematic Review and Meta-analysis. Aleva FE et al. Chest, 2017;151(3):544-54.7

Background

Acute exacerbations of chronic obstructive pulmonary disease (AE-COPD) are frequent, and in up to 30% no clear trigger is found. Previous work suggested that 1 in 4 of these patients may have a pulmonary embolus (PE).8 This study systematically reviewed the literature and performed a meta-analysis to describe the prevalence, location, and clinical predictors of PE among patients with unexplained AE-COPD.

Findings

A systematic review of the literature identified 7 studies with 880 patients for meta-analysis. In the pooled analysis, 16% had PE (range, 3%-29%). Of the 120 patients with PE, two-thirds had emboli in lobar or larger arteries and one-third in segmental or smaller arteries. Pleuritic chest pain and signs of cardiac compromise (hypotension, syncope, and right-sided heart failure) were associated with PE.

Cautions

The included studies were heterogeneous, leading to a broad confidence interval for prevalence (8%-25%). Given how often AE-COPD has no identified trigger, physicians must also weigh the risks of repeated radiation exposure when considering an evaluation for PE.

Implications

One in 6 patients with unexplained AE-COPD was found to have PE; the odds were greater in those with pleuritic chest pain or signs of cardiac compromise. In patients with AE-COPD and an unclear trigger, providers should consider an evaluation for PE using a clinical prediction rule and/or a D-dimer.

Sitting at Patients’ Bedsides May Improve Patients’ Perceptions of Physician Communication Skills. Merel SE et al. Journal of Hospital Medicine, 2016;11(12):865-8.9

Background

Sitting at a patient’s bedside in the inpatient setting is considered a best practice, yet it has not been widely adopted. The authors conducted a cluster-randomized trial of physicians on a single 28-bed hospitalist-run unit in which physicians were assigned to sitting or standing for the first 3 days of a 7-day workweek assignment. New admissions or transfers to the unit were eligible for the study.

Findings

Sixteen hospitalists saw an average of 13 patients daily during the study (159 patients were included in the analysis after 52 were excluded or declined to participate). The hospitalists were 69% female, and 81% had been in practice 3 years or less. The average time spent in the patient’s room was 12 minutes, 0 seconds while seated and 12 minutes, 10 seconds while standing. There was no difference in the patients’ perception of the amount of time spent; patients overestimated it by 4 minutes in both groups. Sitting was associated with higher ratings for “listening carefully” and “explaining things in a way that was easy to understand.” There was no difference in ratings of whether the physician interrupted the patient or treated the patient with courtesy and respect.

Cautions

The study had a small sample size, was limited to English-speaking patients, and was a single-site study. It involved only attending-level physicians and did not involve nonphysician team members. The physicians were not blinded and were aware that the interactions were monitored, perhaps creating a Hawthorne effect. The analysis did not control for other factors such as the severity of the illness, the number of consultants used, or the degree of health literacy.

Implications

This study supports an important best practice highlighted in etiquette-based medicine10: sitting at the bedside provided a benefit in patients’ perception of physician communication without a negative effect on the physician’s workflow.

The Duration of Antibiotic Treatment in Community-Acquired Pneumonia: A Multi-Center Randomized Clinical Trial. Uranga A et al. JAMA Intern Medicine, 2016;176(9):1257-65.11

Background

The optimal duration of treatment for community-acquired pneumonia (CAP) is unclear; a growing body of evidence suggests that shorter courses may be equivalent to longer ones.

Findings

At 4 hospitals in Spain, 312 adults (mean age, 65 years) with a diagnosis of CAP not requiring ICU care were randomized to a short course (5 days) or a long course (provider discretion) of antibiotics. In the short-course group, antibiotics were stopped after 5 days if the body temperature had been 37.8°C or less for 48 hours and no more than 1 sign of clinical instability was present (SBP < 90 mmHg, HR > 100/min, RR > 24/min, O2 saturation < 90%). The median number of antibiotic days was 5 in the short-course group and 10 in the long-course group (P < .01). There was no difference in the resolution of pneumonia symptoms at 10 or 30 days or in 30-day mortality, and there were no differences in in-hospital side effects. However, 30-day readmissions were higher in the long-course group than in the short-course group (6.6% vs 1.4%; P = .02). The results were similar across all Pneumonia Severity Index (PSI) classes.
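A minimal sketch of the day-5 stopping rule is shown below; the function and parameter names are illustrative assumptions, with thresholds taken from the trial’s stability criteria as summarized above.

```python
# Sketch of the day-5 stopping rule described above. Vital-sign field names are
# illustrative; thresholds follow the trial's criteria for clinical stability.
def can_stop_antibiotics_day5(max_temp_last_48h_c: float, sbp: int,
                              hr: int, rr: int, o2_sat: float) -> bool:
    afebrile = max_temp_last_48h_c <= 37.8
    instability_signs = sum([
        sbp < 90,      # systolic blood pressure, mmHg
        hr > 100,      # heart rate, beats/min
        rr > 24,       # respiratory rate, breaths/min
        o2_sat < 90,   # oxygen saturation, %
    ])
    return afebrile and instability_signs <= 1

print(can_stop_antibiotics_day5(37.2, sbp=118, hr=92, rr=18, o2_sat=93))  # True
```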

Cautions

Most of the patients were not severely ill (~60% were PSI class I-III), the level of comorbid disease was low, and nearly 80% of the patients received a fluoroquinolone. There was significant crossover, with 30% of patients assigned to the short-course group receiving antibiotics for more than 5 days.

Implications

Inpatient providers should aim to treat patients with community-acquired pneumonia (regardless of the severity of the illness) for 5 days. At day 5, if the patient is afebrile and has no signs of clinical instability, clinicians should be comfortable stopping antibiotics.

Is the Era of Intravenous Proton Pump Inhibitors Coming to an End in Patients with Bleeding Peptic Ulcers? A Meta-Analysis of the Published Literature. Jian Z et al. British Journal of Clinical Pharmacology, 2016;82(3):880-9.12

Background

Guidelines recommend intravenous proton pump inhibitors (PPIs) after endoscopy for patients with a bleeding peptic ulcer, yet acid suppression with oral PPIs is considered equivalent to that achieved by the intravenous route.

Findings

This systematic review and meta-analysis identified 7 randomized controlled trials involving 859 patients. After endoscopy, patients were randomized to receive either oral or intravenous PPI. Most patients had “high-risk” peptic ulcers (active bleeding, a visible vessel, or an adherent clot). The PPI dose and frequency varied between studies. Re-bleeding rates did not differ between the oral and intravenous routes at 72 hours (2.4% vs 5.1%; P = .26), 7 days (5.6% vs 6.8%; P = .68), or 30 days (7.9% vs 8.8%; P = .62). There was also no difference in 30-day mortality (2.1% vs 2.4%; P = .88), and the length of stay was the same in both groups. Side effects were not reported.

Cautions

This systematic review and meta-analysis included multiple small, heterogeneous studies of moderate quality. A large number of patients were excluded, increasing the risk of selection bias.

Implications

There is no clear indication for intravenous PPI in the treatment of bleeding peptic ulcers following endoscopy. Oral PPI therapy is equivalent to intravenous therapy and is a safe, effective, and cost-saving option for patients with bleeding peptic ulcers.

References

1. Prandoni P, Lensing AW, Prins MH, et al. Prevalence of pulmonary embolism among patients hospitalized for syncope. N Engl J Med. 2016;375(16):1524-1531.
2. Russo RJ, Costa HS, Silva PD, et al. Assessing the risks associated with MRI in patients with a pacemaker or defibrillator. N Engl J Med. 2017;376(8):755-764.
3. Linsenmeyer K, Gupta K, Strymish JM, Dhanani M, Brecher SM, Breu AC. Culture if spikes? Indications and yield of blood cultures in hospitalized medical patients. J Hosp Med. 2016;11(5):336-340.
4. Sun EC, Darnall BD, Baker LC, Mackey S. Incidence of and risk factors for chronic opioid use among opioid-naive patients in the postoperative period. JAMA Intern Med. 2016;176(9):1286-1293.
5. Pickering JW, Than MP, Cullen L, et al. Rapid rule-out of acute myocardial infarction with a single high-sensitivity cardiac troponin T measurement below the limit of detection: A collaborative meta-analysis. Ann Intern Med. 2017;166(10):715-724.
6. Thygesen K, Alpert JS, White HD, Jaffe AS, Apple FS, Galvani M, et al; Joint ESC/ACCF/AHA/WHF Task Force for the Redefinition of Myocardial Infarction. Universal definition of myocardial infarction. Circulation. 2007;116:2634-2653.
7. Aleva FE, Voets LWLM, Simons SO, de Mast Q, van der Ven AJAM, Heijdra YF. Prevalence and localization of pulmonary embolism in unexplained acute exacerbations of COPD: A systematic review and meta-analysis. Chest. 2017;151(3):544-554.
8. Rizkallah J, Man SFP, Sin DD. Prevalence of pulmonary embolism in acute exacerbations of COPD: A systematic review and meta-analysis. Chest. 2009;135(3):786-793.
9. Merel SE, McKinney CM, Ufkes P, Kwan AC, White AA. Sitting at patients’ bedsides may improve patients’ perceptions of physician communication skills. J Hosp Med. 2016;11(12):865-868.
10. Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358(19):1988-1989.
11. Uranga A, España PP, Bilbao A, et al. Duration of antibiotic treatment in community-acquired pneumonia: A multicenter randomized clinical trial. JAMA Intern Med. 2016;176(9):1257-1265.
12. Jian Z, Li H, Race NS, Ma T, Jin H, Yin Z. Is the era of intravenous proton pump inhibitors coming to an end in patients with bleeding peptic ulcers? Meta-analysis of the published literature. Br J Clin Pharmacol. 2016;82(3):880-889.

Article PDF
Issue
Journal of Hospital Medicine 13(9)
Publications
Topics
Page Number
626-630. Published online first February 27, 2018
Sections
Article PDF
Article PDF
Related Articles

The practice of hospital medicine continues to grow in scope and complexity. The authors of this article reviewed the literature published between March 2016 and March 2017, selecting key articles of high methodological quality that had clear findings and a high potential to affect clinical practice. Twenty articles, chosen by the presentation teams (B.A.S. and A.B. at SGIM; R.E.T. and C.M. at SHM), were presented at the Update in Hospital Medicine at the 2017 Society of Hospital Medicine (SHM) and Society of General Internal Medicine (SGIM) annual meetings. Through an iterative voting process, 9 of these were selected for this review: each author ranked their top 5 articles from 1 to 5, the points were tallied, and the 5 articles with the most points were included; a second round of voting identified the remaining 4. Each article is summarized below, and the key points are highlighted in Table 1.

ESSENTIAL PUBLICATIONS

Prevalence of Pulmonary Embolism among Patients Hospitalized for Syncope. Prandoni P et al. New England Journal of Medicine, 2016;375(16):1524-31.1

Background

Pulmonary embolism (PE), a potentially fatal disease, is rarely considered a likely cause of syncope. To determine the prevalence of PE among patients presenting with a first episode of syncope, the authors performed a systematic workup for PE in adult patients admitted for syncope at 11 hospitals in Italy.

Findings

Of the 2584 patients who presented to the emergency department (ED) with syncope during the study, 560 were admitted and met the inclusion criteria. A modified Wells score was applied, and a D-dimer was measured in every hospitalized patient. Those with a high pretest probability (a Wells score of 4.0 or higher) or a positive D-dimer underwent further testing for PE by CT scan, ventilation-perfusion scan, or autopsy. Ninety-seven of the 560 patients admitted for syncope were found to have a PE (17%). One in 4 patients (25%) with no clear cause for syncope was found to have a PE, and 1 in 4 patients with PE had no tachycardia, tachypnea, hypotension, or clinical signs of DVT.
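
To make the workup pathway concrete, here is a minimal sketch (Python; the function name and example values are ours, not the study's) of the triage rule described above, in which a modified Wells score of 4.0 or higher or a positive D-dimer prompted confirmatory testing.

```python
def needs_pe_workup(wells_score: float, d_dimer_positive: bool) -> bool:
    """Triage rule as described above: a modified Wells score of 4.0 or higher
    (high pretest probability) or a positive D-dimer triggered confirmatory
    testing (CT, ventilation-perfusion scan, or autopsy)."""
    return wells_score >= 4.0 or d_dimer_positive


# Illustrative cases (values are hypothetical):
print(needs_pe_workup(2.5, d_dimer_positive=True))   # True: positive D-dimer
print(needs_pe_workup(1.0, d_dimer_positive=False))  # False: workup stops here
```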

Cautions

Nearly 72% of the patients with common explanations for syncope, such as a vasovagal episode, a drug-induced event, or volume depletion, were discharged from the ED and not included in the study. The authors focused on the prevalence of PE; a causal relationship between PE and syncope was not established for each patient. Among patients diagnosed by CT, only 67% of the PEs were in a main pulmonary or lobar artery; the other 33% were segmental or subsegmental. Among those diagnosed by ventilation-perfusion scan, 50% had 25% or more of the area of both lungs involved, and 50% had less. It is also important to note that 75% of the patients admitted to the hospital in this study were 70 years of age or older.

Implications

After common diagnoses are ruled out, it is important to consider pulmonary embolism in patients hospitalized with syncope. Providers should calculate a Wells score and measure a D-dimer to guide decision-making.

Assessing the Risks Associated with MRI in Patients with a Pacemaker or Defibrillator. Russo RJ et al. New England Journal of Medicine, 2017;376(8):755-64.2

Background

Magnetic resonance imaging (MRI) in patients with implantable cardiac devices is considered a safety risk because of the potential for cardiac lead heating and subsequent myocardial injury or alteration of pacing properties. Although manufacturers have developed “MRI-conditional” devices designed to reduce these risks, 2 million people in the United States and 6 million people worldwide still have “non–MRI-conditional” devices. The authors evaluated event rates in patients with non–MRI-conditional devices undergoing MRI.

Findings

The authors prospectively followed 1500 adults with cardiac devices placed since 2001 who received nonthoracic MRIs according to a specific protocol available in the supplemental materials published with this article in the New England Journal of Medicine. Of the 1000 patients with pacemakers only, they observed 5 atrial arrhythmias and 6 electrical resets. Of the 500 patients with implantable cardioverter-defibrillators (ICDs), they observed 1 atrial arrhythmia and 1 generator failure (although this case had deviated from the protocol). All of the atrial arrhythmias were self-terminating. No deaths, lead failures requiring immediate replacement, losses of capture, or ventricular arrhythmias were observed.

Cautions

Patients who were pacing dependent were excluded. No devices implanted before 2001 were included in the study, and all MRIs were performed at 1.5 Tesla (a lower field strength than the 3-Tesla scanners also in clinical use).

Implications

Following the protocol outlined in this article, it is safe to proceed with 1.5-Tesla nonthoracic MRIs in patients with non–MRI-conditional cardiac devices implanted since 2001.

Culture If Spikes? Indications and Yield of Blood Cultures in Hospitalized Medical Patients. Linsenmeyer K et al. Journal of Hospital Medicine, 2016;11(5):336-40.3

Background

Blood cultures are frequently drawn for the evaluation of an inpatient fever. This “culture if spikes” approach may lead to unnecessary testing and false positive results. In this study, the authors evaluated rates of true positive and false positive blood cultures in the setting of an inpatient fever.

Findings

Patients hospitalized on the general medicine or cardiology floors at a Veterans Affairs teaching hospital were prospectively followed over 7 months. A total of 576 blood cultures were ordered for 323 unique patients. The patients were older (average age 70 years) and predominantly male (94%). The true-positive rate for cultures, determined by consensus between the microbiology and infectious disease departments based on a review of clinical and laboratory data, was 3.6%, compared with a false-positive rate of 2.3%. Clinical characteristics associated with a higher likelihood of a true positive included a culture ordered as follow-up of a previous culture (likelihood ratio [LR] 3.4), a working diagnosis of bacteremia or endocarditis (LR 3.7), and the constellation of fever and leukocytosis in a patient not on antibiotics (LR 5.6).
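
As a worked illustration of how the reported likelihood ratios shift the probability of true bacteremia, the sketch below (Python) converts a pretest probability into a post-test probability through the odds form of Bayes' theorem; using the study's overall 3.6% true-positive rate as the pretest probability is our simplifying assumption, not a figure from the paper.

```python
def post_test_probability(pretest_probability: float, likelihood_ratio: float) -> float:
    """Apply a likelihood ratio to a pretest probability via the odds form of Bayes' theorem."""
    pretest_odds = pretest_probability / (1.0 - pretest_probability)
    post_test_odds = pretest_odds * likelihood_ratio
    return post_test_odds / (1.0 + post_test_odds)


# Assumption: treat the study's overall 3.6% true-positive rate as a rough pretest probability.
pretest = 0.036
for label, lr in [("follow-up of a previous culture", 3.4),
                  ("working diagnosis of bacteremia/endocarditis", 3.7),
                  ("fever and leukocytosis, not on antibiotics", 5.6)]:
    print(f"{label}: post-test probability {post_test_probability(pretest, lr):.1%}")
```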

Cautions

This study was performed at a single center on the medicine and cardiology services; thus, the data may reflect clinical practice patterns specific to that site.

Implications

Reflexive ordering of blood cultures for inpatient fever is low yield, with a false-positive rate that approximates the true-positive rate. Many patients are tested unnecessarily, and among those with positive results, physicians are nearly as likely to be misled by a contaminant as to identify a true pathogen. The positive predictive value of blood cultures improves when they are drawn from patients who are not on antibiotics and who have a specific diagnosis, such as pneumonia, previous bacteremia, or suspected endocarditis.

Incidence of and Risk Factors for Chronic Opioid Use among Opioid-Naive Patients in the Postoperative Period. Sun EC et al. JAMA Internal Medicine, 2016;176(9):1286-93.4

Background

Each day in the United States, 650,000 opioid prescriptions are filled and 78 people die of an opioid-related cause. Opioids are frequently prescribed for inpatient management of postoperative pain. In this study, the authors compared the development of chronic opioid use between patients who had undergone surgery and those who had not.

Findings

This was a retrospective analysis of a nationwide insurance claims database. A total of 641,941 opioid-naive patients underwent 1 of 11 designated surgeries during the study period and were compared with 18,011,137 opioid-naive patients who did not undergo surgery. Chronic opioid use was defined as filling 10 or more prescriptions or receiving more than a 120-day supply between 90 and 365 days postoperatively (or after an assigned faux surgical date in those not having surgery). This occurred in a small proportion of surgical patients (less than 0.5%). However, several procedures were associated with increased odds of postoperative chronic opioid use, including simple mastectomy (odds ratio [OR] 2.65), cesarean delivery (OR 1.28), open appendectomy (OR 1.69), open and laparoscopic cholecystectomy (ORs 3.60 and 1.62, respectively), and total hip and total knee arthroplasty (ORs 2.52 and 5.10, respectively). Male sex, age greater than 50 years, preoperative use of benzodiazepines or antidepressants, and a history of drug abuse were also associated with increased odds.

Cautions

This study was limited by its reliance on claims data and by the fact that the nonsurgical population was inherently different from the surgical population in ways that could lead to confounding.

Implications

In perioperative care, there is a need to focus on multimodal approaches to pain and to implement opioid-reducing and opioid-sparing strategies, which might include acetaminophen, NSAIDs, neuropathic pain medications, and lidocaine patches. Moreover, at discharge, careful consideration should be given to the quantity and duration of postoperative opioids prescribed.

Rapid Rule-out of Acute Myocardial Infarction with a Single High-Sensitivity Cardiac Troponin T Measurement below the Limit of Detection: A Collaborative Meta-Analysis. Pickering JW et al. Annals of Internal Medicine, 2017;166:715-24.5

Background

High-sensitivity cardiac troponin T (hs-cTnT) testing is now available in the United States. Studies have found that these assays can play a significant role in the rapid rule-out of acute myocardial infarction (AMI).

Findings

In this meta-analysis, the authors identified 11 studies with 9241 participants that prospectively evaluated patients who presented to the emergency department (ED) with chest pain, underwent an ECG, and had hs-cTnT drawn. A total of 30% of the patients were classified as low risk, with a negative hs-cTnT and a negative ECG (defined as no ST changes or T-wave inversions indicative of ischemia). Among the low-risk patients, only 14 of 2825 (0.5%) had AMI according to the Global Task Force definition.6 Seven of these occurred in patients with hs-cTnT drawn within 3 hours of chest pain onset. The pooled negative predictive value was 99.0% (CI 93.8%–99.8%).
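
A quick arithmetic check of the miss rate reported above (a sketch in Python): note that the pooled 99.0% NPV is a meta-analytic estimate that weights individual studies, so it differs somewhat from this simple raw aggregate.

```python
# Raw aggregate among the low-risk (negative ECG, undetectable hs-cTnT) group.
low_risk = 2825
missed_ami = 14

miss_rate = missed_ami / low_risk   # ~0.005, i.e., 0.5%
raw_npv = 1 - miss_rate             # ~99.5% from the raw counts
print(f"miss rate: {miss_rate:.2%}, raw NPV: {raw_npv:.1%}")
```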

Cautions

Heterogeneity between the studies in this meta-analysis, especially in the exclusion criteria, warrants careful consideration before this approach is implemented in new settings. A more sensitive test will result in more positive troponins because of different limits of detection, so medical teams and institutions need to plan accordingly. Caution should be taken with any patient presenting within 3 hours of chest pain onset.

Implications

Rapid rule-out protocols—which include clinical evaluation, a negative ECG, and a negative high-sensitivity cardiac troponin—identify a large proportion of low-risk patients who are unlikely to have a true AMI.

Prevalence and Localization of Pulmonary Embolism in Unexplained Acute Exacerbations of COPD: A Systematic Review and Meta-analysis. Aleva FE et al. Chest, 2017;151(3):544-54.7

Background

Acute exacerbations of chronic obstructive pulmonary disease (AE-COPD) are frequent, and in up to 30% no clear trigger is found. Previous studies suggested that 1 in 4 of these patients may have a pulmonary embolus (PE).8 This systematic review and meta-analysis describes the prevalence, location, and clinical predictors of PE among patients with unexplained AE-COPD.

Findings

A systematic review of the literature and meta-analysis identified 7 studies with 880 patients. In the pooled analysis, 16% had PE (range, 3%–29%). Of the 120 PEs identified, two-thirds were in lobar or larger arteries and one-third were in segmental or smaller arteries. Pleuritic chest pain and signs of cardiac compromise (hypotension, syncope, and right-sided heart failure) were associated with PE.

Cautions

The included studies were heterogeneous, leading to a broad confidence interval for prevalence (8%–25%). Given how often AE-COPD has no identified trigger, physicians need to attend to the risks of repeated radiation exposure when considering an evaluation for PE.

Implications

One in 6 patients with unexplained AE-COPD was found to have PE; the odds were greater in those with pleuritic chest pain or signs of cardiac compromise. In patients with AE-COPD and an unclear trigger, providers should consider an evaluation for PE using a clinical prediction rule and/or a D-dimer.

Sitting at Patients’ Bedsides May Improve Patients’ Perceptions of Physician Communication Skills. Merel SE et al. Journal of Hospital Medicine, 2016;11(12):865-8.9

Background

Sitting at a patient’s bedside in the inpatient setting is considered a best practice, yet it has not been widely adopted. The authors conducted a cluster-randomized trial of physicians on a single 28-bed hospitalist-run unit in which physicians were assigned to sitting or standing for the first 3 days of a 7-day workweek assignment. New admissions or transfers to the unit were eligible for the study.

Findings

Sixteen hospitalists saw an average of 13 patients daily during the study (a total of 159 patients were included in the analysis after 52 patients were excluded or declined to participate). The hospitalists were 69% female, and 81% had been in practice 3 years or less. The average time spent in the patient’s room was 12 minutes 0 seconds while seated and 12 minutes 10 seconds while standing. There was no difference in the patients’ perception of the amount of time spent; patients overestimated it by 4 minutes in both groups. Sitting was associated with higher ratings for “listening carefully” and “explaining things in a way that was easy to understand.” There was no difference in ratings of whether physicians interrupted the patient when talking or treated patients with courtesy and respect.

Cautions

The study had a small sample size, was limited to English-speaking patients, and was a single-site study. It involved only attending-level physicians and did not involve nonphysician team members. The physicians were not blinded and were aware that the interactions were monitored, perhaps creating a Hawthorne effect. The analysis did not control for other factors such as the severity of the illness, the number of consultants used, or the degree of health literacy.

Implications

This study supports an important best practice highlighted in etiquette-based medicine10: sitting at the bedside improved patients’ perception of physician communication without a negative effect on physician workflow.

The Duration of Antibiotic Treatment in Community-Acquired Pneumonia: A Multi-Center Randomized Clinical Trial. Uranga A et al. JAMA Intern Medicine, 2016;176(9):1257-65.11

Background

The optimal duration of treatment for community-acquired pneumonia (CAP) is unclear; a growing body of evidence suggests shorter and longer durations may be equivalent.

Findings

At 4 hospitals in Spain, 312 adults with a mean age of 65 years and a diagnosis of CAP (non-ICU) were randomized to a short (5 days) versus a long (provider discretion) course of antibiotics. In the short-course group, the antibiotics were stopped after 5 days if the body temperature had been 37.8 °C or less for 48 hours and no more than 1 sign of clinical instability was present (SBP < 90 mmHg, HR > 100/min, RR > 24/min, O2 sat < 90%). The median number of antibiotic days was 5 for the short-course group and 10 for the long-course group (P < .01). There was no difference in the resolution of pneumonia symptoms at 10 days or 30 days or in 30-day mortality. There were no differences in in-hospital side effects. However, 30-day readmissions were higher in the long-course group compared with the short-course group (6.6% vs 1.4%; P = .02). The results were similar across all of the Pneumonia Severity Index (PSI) classes.
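
The short-course stopping rule lends itself to a small decision check. Below is a minimal sketch (Python; the function name and example vitals are illustrative, not from the trial) of the criteria described above.

```python
def can_stop_antibiotics_day5(max_temp_c_last_48h: float, sbp: int, hr: int,
                              rr: int, o2_sat: int) -> bool:
    """Short-course stopping rule as described above: afebrile (<= 37.8 C for
    48 hours) and no more than 1 sign of clinical instability."""
    instability_signs = sum([
        sbp < 90,      # systolic blood pressure < 90 mmHg
        hr > 100,      # heart rate > 100/min
        rr > 24,       # respiratory rate > 24/min
        o2_sat < 90,   # oxygen saturation < 90%
    ])
    return max_temp_c_last_48h <= 37.8 and instability_signs <= 1


# Hypothetical example: afebrile with isolated mild tachycardia -> stop at day 5.
print(can_stop_antibiotics_day5(37.2, sbp=118, hr=104, rr=18, o2_sat=95))  # True
```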

Cautions

Most of the patients were not severely ill (~60% PSI class I-III), the level of comorbid disease was low, and nearly 80% of the patients received a fluoroquinolone. There was significant crossover, with 30% of patients assigned to the short-course group receiving antibiotics for more than 5 days.

Implications

Inpatient providers should aim to treat patients with community-acquired pneumonia (regardless of the severity of the illness) for 5 days. At day 5, if the patient is afebrile and has no signs of clinical instability, clinicians should be comfortable stopping antibiotics.

Is the Era of Intravenous Proton Pump Inhibitors Coming to an End in Patients with Bleeding Peptic Ulcers? A Meta-Analysis of the Published Literature. Jian Z et al. British Journal of Clinical Pharmacology, 2016;82(3):880-9.12

Background

Guidelines recommend intravenous proton pump inhibitors (PPIs) after endoscopy for patients with a bleeding peptic ulcer, yet acid suppression with oral PPIs may be equivalent to the intravenous route.

Findings

This systematic review and meta-analysis identified 7 randomized controlled trials involving 859 patients. After endoscopy, patients were randomized to receive either oral or intravenous PPI. Most of the patients had “high-risk” peptic ulcers (active bleeding, a visible vessel, or an adherent clot). The PPI dose and frequency varied between studies. Re-bleeding rates did not differ between the oral and intravenous routes at 72 hours (2.4% vs 5.1%; P = .26), 7 days (5.6% vs 6.8%; P = .68), or 30 days (7.9% vs 8.8%; P = .62). There was also no difference in 30-day mortality (2.1% vs 2.4%; P = .88), and the length of stay was the same in both groups. Side effects were not reported.

Cautions

This systematic review and meta-analysis included multiple small, heterogeneous studies of moderate quality. A large number of patients were excluded, increasing the risk of selection bias.

Implications

There is no clear indication for intravenous PPIs in the treatment of bleeding peptic ulcers following endoscopy. Oral PPI therapy is equivalent to intravenous therapy and is a safe, effective, and cost-saving option for patients with bleeding peptic ulcers.


References

1. Prandoni P, Lensing AW, Prins MH, et al. Prevalence of pulmonary embolism among patients hospitalized for syncope. N Engl J Med. 2016;375(16):1524-1531.
2. Russo RJ, Costa HS, Silva PD, et al. Assessing the risks associated with MRI in patients with a pacemaker or defibrillator. N Engl J Med. 2017;376(8):755-764.
3. Linsenmeyer K, Gupta K, Strymish JM, Dhanani M, Brecher SM, Breu AC. Culture if spikes? Indications and yield of blood cultures in hospitalized medical patients. J Hosp Med. 2016;11(5):336-340.
4. Sun EC, Darnall BD, Baker LC, Mackey S. Incidence of and risk factors for chronic opioid use among opioid-naive patients in the postoperative period. JAMA Intern Med. 2016;176(9):1286-1293.
5. Pickering JW, Than MP, Cullen L, et al. Rapid rule-out of acute myocardial infarction with a single high-sensitivity cardiac troponin T measurement below the limit of detection: A collaborative meta-analysis. Ann Intern Med. 2017;166(10):715-724.
6. Thygesen K, Alpert JS, White HD, Jaffe AS, Apple FS, Galvani M, et al; Joint ESC/ACCF/AHA/WHF Task Force for the Redefinition of Myocardial Infarction. Universal definition of myocardial infarction. Circulation. 2007;116:2634-2653.
7. Aleva FE, Voets LWLM, Simons SO, de Mast Q, van der Ven AJAM, Heijdra YF. Prevalence and localization of pulmonary embolism in unexplained acute exacerbations of COPD: A systematic review and meta-analysis. Chest. 2017;151(3):544-554.
8. Rizkallah J, Man SFP, Sin DD. Prevalence of pulmonary embolism in acute exacerbations of COPD: A systematic review and meta-analysis. Chest. 2009;135(3):786-793.
9. Merel SE, McKinney CM, Ufkes P, Kwan AC, White AA. Sitting at patients’ bedsides may improve patients’ perceptions of physician communication skills. J Hosp Med. 2016;11(12):865-868.
10. Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358(19):1988-1989.
11. Uranga A, España PP, Bilbao A, et al. Duration of antibiotic treatment in community-acquired pneumonia: A multicenter randomized clinical trial. JAMA Intern Med. 2016;176(9):1257-1265.
12. Jian Z, Li H, Race NS, Ma T, Jin H, Yin Z. Is the era of intravenous proton pump inhibitors coming to an end in patients with bleeding peptic ulcers? Meta-analysis of the published literature. Br J Clin Pharmacol. 2016;82(3):880-889.


Issue
Journal of Hospital Medicine 13(9)
Page Number
626-630. Published online first February 27, 2018
Article Source

© 2018 Society of Hospital Medicine

Correspondence Location
Alfred Burger, MD, FACP, SFHM, Senior Associate Program Director, Internal Medicine Residency, Mount Sinai Beth Israel, Icahn School of Medicine at Mount Sinai, 350 East 17th Street, Baird Hall, 20th Floor, New York, NY 10003; Telephone: 212-420-2690; Fax: 212-420-4615; Email: Alfred.burger@mountsinai.org

Standardized attending rounds to improve the patient experience: A pragmatic cluster randomized controlled trial

Article Type
Changed
Fri, 12/14/2018 - 08:29

Patient experience has recently received heightened attention given evidence supporting an association between patient experience and quality of care,1 and the coupling of patient satisfaction to reimbursement rates for Medicare patients.2 Patient experience is often assessed through surveys of patient satisfaction, which correlates with patient perceptions of nurse and physician communication.3 Teaching hospitals introduce variables that may impact communication, including the involvement of multiple levels of care providers and competing patient care vs. educational priorities. Patients admitted to teaching services express decreased satisfaction with coordination and overall care compared with patients on nonteaching services.4

Clinical supervision of trainees on teaching services is primarily achieved through attending rounds (AR), where patients’ clinical presentations and management are discussed with an attending physician. Poor communication during AR may negatively affect the patient experience through inefficient care coordination among the inter-professional care team or through implementation of interventions without patients’ knowledge or input.5-11 Although patient engagement in rounds has been associated with higher patient satisfaction with rounds,12-19 AR and case presentations often occur at a distance from the patient’s bedside.20,21 Furthermore, AR vary in the time allotted per patient and the extent of participation of nurses and other allied health professionals. Standardized bedside rounding processes have been shown to improve efficiency, decrease daily resident work hours,22 and improve nurse-physician teamwork.23

Despite these benefits, recent prospective studies of bedside AR interventions have not improved patient satisfaction with rounds. One involved implementation of interprofessional patient-centered bedside rounds on a nonteaching service,24 while the other evaluated the impact of integrating athletic principles into multidisciplinary work rounds.25 Prior work at our institution sought to develop AR practice recommendations that foster an optimal patient experience while maintaining provider workflow efficiency, facilitating interdisciplinary communication, and advancing trainee education.26 Using these recommendations, we conducted a prospective randomized controlled trial to evaluate the impact of a standardized bedside AR model on patient satisfaction with rounds. We also assessed attending physician and trainee satisfaction with rounds and perceived and actual AR duration.

METHODS

Setting and Participants

This trial was conducted on the internal medicine teaching service of the University of California San Francisco Medical Center from September 3, 2013 to November 27, 2013. The service comprises 8 teams, with a total average daily census of 80 to 90 patients. Each team comprises an attending physician, a senior resident (in the second or third year of residency training), 2 interns, and a third- and/or fourth-year medical student.

This trial, which was approved by the University of California, San Francisco Committee on Human Research (UCSF CHR) and was registered with ClinicalTrials.gov (NCT01931553), was classified under Quality Improvement and did not require informed consent of patients or providers.

Intervention Description

We conducted a cluster randomized trial to evaluate the impact of a bundled set of 5 AR practice recommendations, adapted from published work,26 on patient experience, as well as on attending and trainee satisfaction: 1) huddling to establish the rounding schedule and priorities; 2) conducting bedside rounds; 3) integrating bedside nurses; 4) completing real-time order entry using bedside computers; 5) updating the whiteboard in each patient’s room with care plan information.

At the beginning of each month, study investigators (Nader Najafi and Bradley Monash) led a 1.5-hour workshop to train attending physicians and trainees allocated to the intervention arm on the recommended AR practices. Participants also received informational handouts to be referenced during AR. Attending physicians and trainees randomized to the control arm continued usual rounding practices. Control teams were notified that there would be observers on rounds but were not informed of the study aims.

Randomization and Team Assignments

The medicine service was divided into 2 arms, each comprised of 4 teams. Using a coin flip, Cluster 1 (Teams A, B, C and D) was randomized to the intervention, and Cluster 2 (Teams E, F, G and H) was randomized to the control. This design was pragmatically chosen to ensure that 1 team from each arm would admit patients daily. Allocation concealment of attending physicians and trainees was not possible given the nature of the intervention. Patients were blinded to study arm allocation.

MEASURES AND OUTCOMES

Adherence to Practice Recommendations

Thirty premedical students served as volunteer AR auditors. Each auditor received orientation and training in data collection techniques during a single 2-hour workshop. The auditors, blinded to study arm allocation, independently observed morning AR during weekdays and recorded the completion of the following elements as a dichotomous (yes/no) outcome: pre-rounds huddle, participation of nurse in AR, real-time order entry, and whiteboard use. They recorded the duration of AR per day for each team (minutes) and the rounding model for each patient rounding encounter during AR (bedside, hallway, or card flip).23 Bedside rounds were defined as presentation and discussion of the patient care plan in the presence of the patient. Hallway rounds were defined as presentation and discussion of the patient care plan partially outside the patient’s room and partially in the presence of the patient. Card-flip rounds were defined as presentation and discussion of the patient care plan entirely outside of the patient’s room without the team seeing the patient together. Two auditors simultaneously observed a random subset of patient-rounding encounters to evaluate inter-rater reliability, and the concordance between auditor observations was good (Pearson correlation = 0.66).27
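
For readers who want to reproduce this kind of concordance check, the sketch below (Python, with hypothetical paired observations; the study's actual ratings are not shown here) computes a Pearson correlation between two auditors' dichotomous observations.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired dichotomous ratings (1 = practice observed, 0 = not observed)
# from two auditors watching the same rounding encounters.
auditor_a = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0])
auditor_b = np.array([1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0])

r, p_value = pearsonr(auditor_a, auditor_b)
print(f"inter-rater Pearson correlation: r = {r:.2f} (P = {p_value:.3f})")
```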

Patient-Related Outcomes

The primary outcome was patient satisfaction with AR, assessed using a survey adapted from published work.12,14,28,29 Patients were approached to complete the questionnaire after they had experienced at least 1 AR. Patients were excluded if they were non-English-speaking, unavailable (eg, off the unit for testing or treatment), in isolation, or had impaired mental status. For patients admitted multiple times during the study period, only the first questionnaire was used. Survey questions included patient involvement in decision-making, quality of communication between patient and medicine team, and the perception that the medicine team cared about the patient. Patients were asked to state their level of agreement with each item on a 5-point Likert scale. We obtained data on patient demographics from administrative datasets.

Healthcare Provider Outcomes

Attending physicians and trainees on service for at least 7 consecutive days were sent an electronic survey, adapted from published work.25,30 Questions assessed satisfaction with AR, perceived value of bedside rounds, and extent of patient and nursing involvement. Level of agreement with each item was captured on a continuous scale: 0 = strongly disagree to 100 = strongly agree, or 0 (far too little) to 100 (far too much), with 50 equating to “about right.” Attending physicians and trainees were also asked to estimate the average duration of AR (in minutes).

Statistical Analyses

Analyses were blinded to study arm allocation and followed intention-to-treat principles. One attending physician crossed over from intervention to control arm; patient surveys associated with this attending (n = 4) were excluded to avoid contamination. No trainees crossed over.

Demographic and clinical characteristics of patients who completed the survey are reported (Appendix). To compare patient satisfaction scores, we used a random-effects regression model to account for correlation among responses within teams within randomized clusters, defining teams by attending physician. As this correlation was negligible and not statistically significant, we did not adjust ordinary linear regression models for clustering. Given observed differences in patient characteristics, we adjusted for a number of covariates (eg, age, gender, insurance payer, race, marital status, trial group arm).
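
As an illustration of the modeling approach described above, the following sketch fits a random-intercept model by team and a covariate-adjusted ordinary linear regression using statsmodels in Python. The simulated data frame, variable names, and effect sizes are ours (the study's analyses were run in SAS), so treat this as an analogue rather than the actual analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated, illustrative data: column names and values are assumptions, not the study's.
rng = np.random.default_rng(0)
n = 240
team = rng.choice(list("ABCDEFGH"), size=n)
arm = np.where(np.isin(team, list("ABCD")), "intervention", "control")
age = rng.integers(30, 90, size=n)
satisfaction = 3.8 + 0.2 * (arm == "intervention") + rng.normal(0, 0.8, size=n)
df = pd.DataFrame({"satisfaction": satisfaction, "arm": arm, "age": age, "team": team})

# Random intercept for team (defined by attending), mirroring the random-effects model.
mixed = smf.mixedlm("satisfaction ~ arm + age", data=df, groups=df["team"]).fit()
print(mixed.summary())

# With negligible within-team correlation, fall back on covariate-adjusted OLS.
ols = smf.ols("satisfaction ~ arm + age", data=df).fit()
print(ols.summary().tables[1])
```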

We conducted simple linear regression for attending and trainee satisfaction comparisons between arms, adjusting only for trainee type (eg, resident, intern, and medical student).

We compared the frequency with which intervention and control teams adhered to the 5 recommended AR practices using chi-square tests. We used independent Student’s t tests to compare total duration of AR by teams within each arm, as well as mean time spent per patient.
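
A brief illustration of the adherence comparison (Python; the cell counts are reconstructed from the bedside-rounding percentages reported in the Results and are approximate, not the study's raw table):

```python
from scipy.stats import chi2_contingency

# Approximate 2x2 table for bedside rounding, reconstructed from the reported
# rates (52.9% of 1855 intervention encounters vs 5.4% of 1903 control encounters).
intervention = [round(0.529 * 1855), 1855 - round(0.529 * 1855)]  # [bedside, not bedside]
control = [round(0.054 * 1903), 1903 - round(0.054 * 1903)]

chi2, p, dof, expected = chi2_contingency([intervention, control])
print(f"chi-square = {chi2:.1f}, dof = {dof}, P = {p:.2e}")
```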

This trial had a fixed number of arms (n = 2), each of fixed size (n = 600), based on the average monthly inpatient census on the medicine service. With this fixed sample size, 80% power, and α = 0.05, the trial could detect a 0.16 difference in patient satisfaction scores between groups.
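
The quoted minimum detectable difference can be checked with a standard two-sample power calculation. The sketch below (Python, statsmodels; the study's own analyses were run in SAS) treats 0.16 as a standardized effect size (Cohen's d), which is our assumption, since the paper reports it simply as a difference in patient satisfaction scores.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the minimum detectable effect size given n = 600 per arm,
# alpha = 0.05, and 80% power (two-sided, two-sample t test).
detectable = TTestIndPower().solve_power(effect_size=None, nobs1=600,
                                         alpha=0.05, power=0.80, ratio=1.0)
print(f"minimum detectable standardized difference: {detectable:.3f}")  # ~0.16
```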

All analyses were conducted using SAS® v 9.4 (SAS Institute, Inc., Cary, NC).

RESULTS

We observed 241 AR involving 1855 patient rounding encounters in the intervention arm and 264 AR involving 1903 patient rounding encounters in the control arm (response rates shown in Figure 1).

Figure 1. Study flow diagram

Intervention teams adopted each of the recommended AR practices at significantly higher rates than control teams, with the largest difference observed for bedside AR (52.9% vs 5.4%; Figure 2). Teams in the intervention arm demonstrated the highest adherence to the pre-rounds huddle (78.1%) and the lowest adherence to whiteboard use (29.9%).

Figure 2. Prevalence of recommended rounding practices

Patient Satisfaction and Clinical Outcomes

Five hundred ninety-five patients were allocated to the intervention arm and 605 were allocated to the control arm (Figure 1). Mean age, gender, race, marital status, primary language, and insurance provider did not differ between intervention and control arms (Table 1).

Table 1. Hospitalized Patient Characteristics by Intervention and Control Arms
One hundred forty-six (24.5%) and 141 (23.3%) patients completed surveys in the intervention and control arms, respectively. Patients who completed surveys in each arm were younger and more likely to have commercial insurance (Appendix).

Patients in the intervention arm reported significantly higher satisfaction with AR and felt more cared for by their medicine team (Table 2).
Table 2. Patient, Attending, and Trainee Satisfaction by Randomized Arm
Patient-perceived quality of communication and shared decision-making did not differ between arms.

Actual and Perceived Duration of Attending Rounds

The intervention shortened the total duration of AR by 8 minutes on average (143 vs. 151 minutes, P = 0.052) and the time spent per patient by 4 minutes on average (19 vs. 23 minutes, P < 0.001). Despite this, trainees in the intervention arm perceived AR to last longer (mean estimated time: 167 min vs. 152 min, P < 0.001).

Healthcare Provider Outcomes

We observed 79 attending physicians and trainees in the intervention arm and 78 in the control arm, with survey response rates shown in Figure 1. Attending physicians in the intervention and the control arms reported high levels of satisfaction with the quality of AR (Table 2). Attending physicians in the intervention arm were more likely to report an appropriate level of patient involvement and nurse involvement.

Although trainees in the intervention and control arms reported high levels of satisfaction with the quality of AR, trainees in the intervention arm reported lower satisfaction with AR compared with control arm trainees (Table 2). Trainees in the intervention arm reported that AR involved less autonomy, efficiency, and teaching. Trainees in the intervention arm also scored patient involvement more towards the “far too much” end of the scale compared with “about right” in the control arm. However, trainees in the intervention arm perceived nurse involvement closer to “about right,” as opposed to “far too little” in the control arm.

CONCLUSION/DISCUSSION

Training internal medicine teams to adhere to 5 recommended AR practices increased patient satisfaction with AR and the perception that patients were more cared for by their medicine team. Despite the intervention potentially shortening the duration of AR, attending physicians and trainees perceived AR to last longer, and trainee satisfaction with AR decreased.

Teams in the intervention arm adhered to all recommended rounding practices at higher rates than the control teams. Although intervention teams rounded at the bedside 53% of the time, they were encouraged to round at the bedside only for patients who wanted to participate in rounds, who did not have altered mental status, and for whom the clinical discussion was not too sensitive to occur at the bedside. Of the recommended rounding behaviors, whiteboard use showed the lowest adherence.

A major component of the intervention was to move the clinical presentation to the patient’s bedside. Most patients prefer being included in rounds and partaking in trainee education.12-19,28,29,31-33 Patients may also perceive that more time is spent with them during bedside case presentations,14,28 and exposure to providers conferring on their care may enhance patient confidence in the care being delivered.12 Although a recent study of patient-centered bedside rounding on a nonteaching service did not result in increased patient satisfaction,24 teaching services may offer more opportunities for improvement in care coordination and communication.4

Other aspects of the intervention may have contributed to increased patient satisfaction with AR. The pre-rounds huddle may have helped teams prioritize which patients required more time or would benefit most from bedside rounds. The involvement of nurses in AR may have bolstered communication and team dynamics, enhancing the patient’s perception of interprofessional collaboration. Real-time order entry might have led to more efficient implementation of the care plan, and whiteboard use may have helped to keep patients abreast of the care plan.

Patients in the intervention arm felt more cared for by their medicine teams but did not report improvements in communication or in shared decision-making. Prior work highlights that limited patient engagement, activation, and shared decision-making may occur during AR.24,34 Patient-physician communication during AR is challenged by time pressures and competing priorities, including the “need” for trainees to demonstrate their medical knowledge and clinical skills. Efforts that encourage bedside rounding should include communication training with respect to patient engagement and shared decision-making.

Attending physicians reported positive attitudes toward bedside rounding, consistent with prior studies.13,21,31 However, trainees in the intervention arm expressed decreased satisfaction with AR, estimating that AR took longer and reporting too much patient involvement. Prior studies reflect similar bedside-rounding concerns, including perceived workflow inefficiencies, infringement on teaching opportunities, and time constraints.12,20,35 Trainees are under intense time pressures to complete their work, attend educational conferences, and leave the hospital to attend afternoon clinic or to comply with duty-hour restrictions. Trainees value succinctness,12,35,36 so the perception that intervention AR lasted longer likely contributed to trainee dissatisfaction.

Reduced trainee satisfaction with intervention AR may have also stemmed from the perception of decreased autonomy and less teaching, both valued by trainees.20,35,36 The intervention itself reduced trainee autonomy because usual practice at our hospital involves residents deciding where and how to round. Attending physician presence at the bedside during rounds may have further infringed on trainee autonomy if the patient looked to the attending for answers, or if the attending was seen as the AR leader. Attending physicians may mitigate the risk of compromising trainee autonomy by allowing the trainee to speak first, ensuring the trainee is positioned closer to, and at eye level with, the patient, and redirecting patient questions to the trainee as appropriate. Optimizing trainee experience with bedside AR requires preparation and training of attending physicians, who may feel inadequately prepared to lead bedside rounds and conduct bedside teaching.37 Faculty must learn how to preserve team efficiency, create a safe, nonpunitive bedside environment that fosters the trainee-patient relationship, and ensure rounds remain educational.36,38,39

The intervention reduced the average time spent on AR and time spent per patient. Studies examining the relationship between bedside rounding and duration of rounds have yielded mixed results: some have demonstrated no effect of bedside rounds on rounding time,28,40 while others report longer rounding times.37 The pre-rounds huddle and real-time order writing may have enhanced workflow efficiency.

Our study has several limitations. These results reflect the experience of a single large academic medical center and may not be generalizable to other settings. The overall patient survey response rate was low, and respondents may not be representative of the entire patient population, although response rates were equivalent in the intervention and control arms. Non-English-speaking patients may have preferences that were not reflected in our survey results, and we did not otherwise quantify individual reasons for survey noncompletion. The presence of auditors on AR may have introduced observer bias. There may have been a crossover effect; however, the observed prevalence of individual practices remained low in the control arm. The 1.5-hour workshop may have inadequately equipped trainees with the complex skills required to lead and participate in bedside rounding, and more training, experience, and feedback may have yielded different results. For instance, residents with more exposure to bedside rounding express greater appreciation of its role in education and patient care.20 Although adherence to some of the recommended practices remained low, we did not employ a full range of change-management techniques; instead, we opted for a “low intensity” intervention (eg, single workshop, handouts) that relied on voluntary adoption by medicine teams and that we hoped other institutions could reproduce. Finally, we did not assess the relative impact of individual rounding behaviors on the measured outcomes.

In conclusion, training medicine teams to adhere to a standardized bedside AR model increased patient satisfaction with rounds. Concomitant trainee dissatisfaction may require further experience and training of attending physicians and trainees to ensure successful adoption.

Acknowledgements

We would like to thank all patients, providers, and trainees who participated in this study. We would also like to acknowledge the following volunteer auditors who observed teams daily: Arianna Abundo, Elahhe Afkhamnejad, Yolanda Banuelos, Laila Fozoun, Soe Yupar Khin, Tam Thien Le, Wing Sum Li, Yaqiao Li, Mengyao Liu, Tzyy-Harn Lo, Shynh-Herng Lo, David Lowe, Danoush Paborji, Sa Nan Park, Urmila Powale, Redha Fouad Qabazard, Monique Quiroz, John-Luke Marcelo Rivera, Manfred Roy Luna Salvador, Tobias Gowen Squier-Roper, Flora Yan Ting, Francesca Natasha T. Tizon, Emily Claire Trautner, Stephen Weiner, Alice Wilson, Kimberly Woo, Bingling J Wu, Johnny Wu, Brenda Yee. Statistical expertise was provided by Joan Hilton from the UCSF Clinical and Translational Science Institute (CTSI), which is supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through UCSF-CTSI Grant Number UL1 TR000004. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. Thanks also to Oralia Schatzman, Andrea Mazzini, and Erika Huie for their administrative support, and John Hillman for data-related support. Special thanks to Kirsten Kangelaris and Andrew Auerbach for their valuable feedback throughout, and to Maria Novelero and Robert Wachter for their divisional support of this project. 

Disclosure

The authors report no financial conflicts of interest.

References

1. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1):1-18.
2. Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) Fact Sheet. August 2013. Centers for Medicare and Medicaid Services (CMS). Baltimore, MD. http://www.hcahpsonline.org/files/August_2013_HCAHPS_Fact_Sheet3.pdf. Accessed December 1, 2015.
3. Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17:41-48.
4. Wray CM, Flores A, Padula WV, Prochaska MT, Meltzer DO, Arora VM. Measuring patient experiences on hospitalist and teaching services: Patient responses to a 30-day postdischarge questionnaire. J Hosp Med. 2016;11(2):99-104.
5. Bharwani AM, Harris GC, Southwick FS. Perspective: A business school view of medical interprofessional rounds: transforming rounding groups into rounding teams. Acad Med. 2012;87(12):1768-1771.
6. Chand DV. Observational study using the tools of lean six sigma to improve the efficiency of the resident rounding process. J Grad Med Educ. 2011;3(2):144-150.
7. Stickrath C, Noble M, Prochazka A, et al. Attending rounds in the current era: what is and is not happening. JAMA Intern Med. 2013;173(12):1084-1089.
8. Weber H, Stöckli M, Nübling M, Langewitz WA. Communication during ward rounds in internal medicine. An analysis of patient-nurse-physician interactions using RIAS. Patient Educ Couns. 2007;67(3):343-348.
9. McMahon GT, Katz JT, Thorndike ME, Levy BD, Loscalzo J. Evaluation of a redesign initiative in an internal-medicine residency. N Engl J Med. 2010;362(14):1304-1311.
10. Amoss J. Attending rounds: where do we go from here?: comment on “Attending rounds in the current era”. JAMA Intern Med. 2013;173(12):1089-1090.
11. Curley C, McEachern JE, Speroff T. A firm trial of interdisciplinary rounds on the inpatient medical wards: an intervention designed using continuous quality improvement. Med Care. 1998;36(suppl 8):AS4-A12.
12. Wang-Cheng RM, Barnas GP, Sigmann P, Riendl PA, Young MJ. Bedside case presentations: why patients like them but learners don’t. J Gen Intern Med. 1989;4(4):284-287.
13. Chauke HL, Pattinson RC. Ward rounds—bedside or conference room? S Afr Med J. 2006;96(5):398-400.
14. Lehmann LS, Brancati FL, Chen MC, Roter D, Dobs AS. The effect of bedside case presentations on patients’ perceptions of their medical care. N Engl J Med. 1997;336(16):1150-1155.
15. Simons RJ, Baily RG, Zelis R, Zwillich CW. The physiologic and psychological effects of the bedside presentation. N Engl J Med. 1989;321(18):1273-1275.
16. Wise TN, Feldheim D, Mann LS, Boyle E, Rustgi VK. Patients’ reactions to house staff work rounds. Psychosomatics. 1985;26(8):669-672.
17. Linfors EW, Neelon FA. Sounding Boards. The case of bedside rounds. N Engl J Med. 1980;303(21):1230-1233.
18. Nair BR, Coughlan JL, Hensley MJ. Student and patient perspectives on bedside teaching. Med Educ. 1997;31(5):341-346.
19. Romano J. Patients’ attitudes and behavior in ward round teaching. JAMA. 1941;117(9):664-667.
20. Gonzalo JD, Masters PA, Simons RJ, Chuang CH. Attending rounds and bedside case presentations: medical student and medicine resident experiences and attitudes. Teach Learn Med. 2009;21(2):105-110.
21. Shoeb M, Khanna R, Fang M, et al. Internal medicine rounding practices and the Accreditation Council for Graduate Medical Education core competencies. J Hosp Med. 2014;9(4):239-243.
22. Calderon AS, Blackmore CC, Williams BL, et al. Transforming ward rounds through rounding-in-flow. J Grad Med Educ. 2014;6(4):750-755.
23. Henkin S, Chon TY, Christopherson ML, Halvorsen AJ, Worden LM, Ratelle JT. Improving nurse-physician teamwork through interprofessional bedside rounding. J Multidiscip Healthc. 2016;9:201-205.
24. O’Leary KJ, Killarney A, Hansen LO, et al. Effect of patient-centred bedside rounds on hospitalised patients’ decision control, activation and satisfaction with care. BMJ Qual Saf. 2016;25:921-928.
25. Southwick F, Lewis M, Treloar D, et al. Applying athletic principles to medical rounds to improve teaching and patient care. Acad Med. 2014;89(7):1018-1023.
26. Najafi N, Monash B, Mourad M, et al. Improving attending rounds: Qualitative reflections from multidisciplinary providers. Hosp Pract (1995). 2015;43(3):186-190.
27. Altman DG. Practical Statistics For Medical Research. Boca Raton, FL: Chapman & Hall/CRC; 2006.
28. Gonzalo JD, Chuang CH, Huang G, Smith C. The return of bedside rounds: an educational intervention. J Gen Intern Med. 2010;25(8):792-798.
29. Fletcher KE, Rankey DS, Stern DT. Bedside interactions from the other side of the bedrail. J Gen Intern Med. 2005;20(1):58-61.
30. Gatorounds: Applying Championship Athletic Principles to Healthcare. University of Florida Health. http://gatorounds.med.ufl.edu/surveys/. Accessed March 1, 2013.
31. Gonzalo JD, Heist BS, Duffy BL, et al. The value of bedside rounds: a multicenter qualitative study. Teach Learn Med. 2013;25(4):326-333.
32. Rogers HD, Carline JD, Paauw DS. Examination room presentations in general internal medicine clinic: patients’ and students’ perceptions. Acad Med. 2003;78(9):945-949.
33. Fletcher KE, Furney SL, Stern DT. Patients speak: what’s really important about bedside interactions with physician teams. Teach Learn Med. 2007;19(2):120-127.
34. Satterfield JM, Bereknyei S, Hilton JF, et al. The prevalence of social and behavioral topics and related educational opportunities during attending rounds. Acad Med. 2014;89(11):1548-1557.
35. Kroenke K, Simmons JO, Copley JB, Smith C. Attending rounds: a survey of physician attitudes. J Gen Intern Med. 1990;5(3):229-233.
36. Castiglioni A, Shewchuk RM, Willett LL, Heudebert GR, Centor RM. A pilot study using nominal group technique to assess residents’ perceptions of successful attending rounds. J Gen Intern Med. 2008;23(7):1060-1065.
37. Crumlish CM, Yialamas MA, McMahon GT. Quantification of bedside teaching by an academic hospitalist group. J Hosp Med. 2009;4(5):304-307.
38. Gonzalo JD, Wolpaw DR, Lehman E, Chuang CH. Patient-centered interprofessional collaborative care: factors associated with bedside interprofessional rounds. J Gen Intern Med. 2014;29(7):1040-1047.
39. Roy B, Castiglioni A, Kraemer RR, et al. Using cognitive mapping to define key domains for successful attending rounds. J Gen Intern Med. 2012;27(11):1492-1498.
40. Bhansali P, Birch S, Campbell JK, et al. A time-motion study of inpatient rounds using a family-centered rounds model. Hosp Pediatr. 2013;3(1):31-38.


This trial had a fixed number of arms (n = 2), each of fixed size (n = 600), based on the average monthly inpatient census on the medicine service. This fixed sample size, with 80% power and α = 0.05, will be able to detect a 0.16 difference in patient satisfaction scores between groups.

All analyses were conducted using SAS® v 9.4 (SAS Institute, Inc., Cary, NC).

 

 

RESULTS

We observed 241 AR involving 1855 patient rounding encounters in the intervention arm and 264 AR involving 1903 patient rounding encounters in the control arm (response rates shown in Figure 1).

Study flow diagram
Figure 1
Intervention teams adopted each of the recommended AR practices at significantly higher rates compared to control teams, with the largest difference occurring for AR occurring at the bedside (52.9% vs. 5.4%; Figure 2).
Prevalence of recommended rounding practices
Figure 2
Teams in the intervention arm demonstrated highest adherence to the pre-rounds huddle (78.1%) and lowest adherence to whiteboard use (29.9%).

Patient Satisfaction and Clinical Outcomes

Five hundred ninety-five patients were allocated to the intervention arm and 605 were allocated to the control arm (Figure 1). Mean age, gender, race, marital status, primary language, and insurance provider did not differ between intervention and control arms (Table 1).

Hospitalized Patient Characteristics by Intervention and Control Arms
Table 1
One hundred forty-six (24.5%) and 141 (23.3%) patients completed surveys in the intervention and control arms, respectively. Patients who completed surveys in each arm were younger and more likely to have commercial insurance (Appendix).

Patients in the intervention arm reported significantly higher satisfaction with AR and felt more cared for by their medicine team (Table 2).
Patient, Attending, and Trainee Satisfaction by Randomized Arm
Table 2
Patient-perceived quality of communication and shared decision-making did not differ between arms.

Actual and Perceived Duration of Attending Rounds

The intervention shortened the total duration of AR by 8 minutes on average (143 vs. 151 minutes, P = 0.052) and the time spent per patient by 4 minutes on average (19 vs. 23 minutes, P < 0.001). Despite this, trainees in the intervention arm perceived AR to last longer (mean estimated time: 167 min vs. 152 min, P < 0.001).

Healthcare Provider Outcomes

We observed 79 attending physicians and trainees in the intervention arm and 78 in the control arm, with survey response rates shown in Figure 1. Attending physicians in the intervention and the control arms reported high levels of satisfaction with the quality of AR (Table 2). Attending physicians in the intervention arm were more likely to report an appropriate level of patient involvement and nurse involvement.

Although trainees in the intervention and control arms reported high levels of satisfaction with the quality of AR, trainees in the intervention arm reported lower satisfaction with AR compared with control arm trainees (Table 2). Trainees in the intervention arm reported that AR involved less autonomy, efficiency, and teaching. Trainees in the intervention arm also scored patient involvement more towards the “far too much” end of the scale compared with “about right” in the control arm. However, trainees in the intervention arm perceived nurse involvement closer to “about right,” as opposed to “far too little” in the control arm.

CONCLUSION/DISCUSSION

Training internal medicine teams to adhere to 5 recommended AR practices increased patient satisfaction with AR and the perception that patients were more cared for by their medicine team. Despite the intervention potentially shortening the duration of AR, attending physicians and trainees perceived AR to last longer, and trainee satisfaction with AR decreased.

Teams in the intervention arm adhered to all recommended rounding practices at higher rates than the control teams. Although intervention teams rounded at the bedside 53% of the time, they were encouraged to bedside round only on patients who desired to participate in rounds, were not altered, and for whom the clinical discussion was not too sensitive to occur at the bedside. Of the recommended rounding behaviors, the lowest adherence was seen with whiteboard use.

A major component of the intervention was to move the clinical presentation to the patient’s bedside. Most patients prefer being included in rounds and partaking in trainee education.12-19,28,29,31-33 Patients may also perceive that more time is spent with them during bedside case presentations,14,28 and exposure to providers conferring on their care may enhance patient confidence in the care being delivered.12 Although a recent study of patient-centered bedside rounding on a nonteaching service did not result in increased patient satisfaction,24 teaching services may offer more opportunities for improvement in care coordination and communication.4

Other aspects of the intervention may have contributed to increased patient satisfaction with AR. The pre-rounds huddle may have helped teams prioritize which patients required more time or would benefit most from bedside rounds. The involvement of nurses in AR may have bolstered communication and team dynamics, enhancing the patient’s perception of interprofessional collaboration. Real-time order entry might have led to more efficient implementation of the care plan, and whiteboard use may have helped to keep patients abreast of the care plan.

Patients in the intervention arm felt more cared for by their medicine teams but did not report improvements in communication or in shared decision-making. Prior work highlights that limited patient engagement, activation, and shared decision-making may occur during AR.24,34 Patient-physician communication during AR is challenged by time pressures and competing priorities, including the “need” for trainees to demonstrate their medical knowledge and clinical skills. Efforts that encourage bedside rounding should include communication training with respect to patient engagement and shared decision-making.

Attending physicians reported positive attitudes toward bedside rounding, consistent with prior studies.13,21,31 However, trainees in the intervention arm expressed decreased satisfaction with AR, estimating that AR took longer and reporting too much patient involvement. Prior studies reflect similar bedside-rounding concerns, including perceived workflow inefficiencies, infringement on teaching opportunities, and time constraints.12,20,35 Trainees are under intense time pressures to complete their work, attend educational conferences, and leave the hospital to attend afternoon clinic or to comply with duty-hour restrictions. Trainees value succinctness,12,35,36 so the perception that intervention AR lasted longer likely contributed to trainee dissatisfaction.

Reduced trainee satisfaction with intervention AR may have also stemmed from the perception of decreased autonomy and less teaching, both valued by trainees.20,35,36 The intervention itself reduced trainee autonomy because usual practice at our hospital involves residents deciding where and how to round. Attending physician presence at the bedside during rounds may have further infringed on trainee autonomy if the patient looked to the attending for answers, or if the attending was seen as the AR leader. Attending physicians may mitigate the risk of compromising trainee autonomy by allowing the trainee to speak first, ensuring the trainee is positioned closer to, and at eye level with, the patient, and redirecting patient questions to the trainee as appropriate. Optimizing trainee experience with bedside AR requires preparation and training of attending physicians, who may feel inadequately prepared to lead bedside rounds and conduct bedside teaching.37 Faculty must learn how to preserve team efficiency, create a safe, nonpunitive bedside environment that fosters the trainee-patient relationship, and ensure rounds remain educational.36,38,39

The intervention reduced the average time spent on AR and time spent per patient. Studies examining the relationship between bedside rounding and duration of rounds have yielded mixed results: some have demonstrated no effect of bedside rounds on rounding time,28,40 while others report longer rounding times.37 The pre-rounds huddle and real-time order writing may have enhanced workflow efficiency.

Our study has several limitations. These results reflect the experience of a single large academic medical center and may not be generalizable to other settings. Although overall patient response to the survey was low and may not be representative of the entire patient population, response rates in the intervention and control arms were equivalent. Non-English speaking patients may have preferences that were not reflected in our survey results, and we did not otherwise quantify individual reasons for survey noncompletion. The presence of auditors on AR may have introduced observer bias. There may have been crossover effect; however, observed prevalence of individual practices remained low in the control arm. The 1.5-hour workshop may have inadequately equipped trainees with the complex skills required to lead and participate in bedside rounding, and more training, experience, and feedback may have yielded different results. For instance, residents with more exposure to bedside rounding express greater appreciation of its role in education and patient care.20 While adherence to some of the recommended practices remained low, we did not employ a full range of change-management techniques. Instead, we opted for a “low intensity” intervention (eg, single workshop, handouts) that relied on voluntary adoption by medicine teams and that we hoped other institutions could reproduce. Finally, we did not assess the relative impact of individual rounding behaviors on the measured outcomes.

In conclusion, training medicine teams to adhere to a standardized bedside AR model increased patient satisfaction with rounds. Concomitant trainee dissatisfaction may require further experience and training of attending physicians and trainees to ensure successful adoption.

Acknowledgements

 

 

We would like to thank all patients, providers, and trainees who participated in this study. We would also like to acknowledge the following volunteer auditors who observed teams daily: Arianna Abundo, Elahhe Afkhamnejad, Yolanda Banuelos, Laila Fozoun, Soe Yupar Khin, Tam Thien Le, Wing Sum Li, Yaqiao Li, Mengyao Liu, Tzyy-Harn Lo, Shynh-Herng Lo, David Lowe, Danoush Paborji, Sa Nan Park, Urmila Powale, Redha Fouad Qabazard, Monique Quiroz, John-Luke Marcelo Rivera, Manfred Roy Luna Salvador, Tobias Gowen Squier-Roper, Flora Yan Ting, Francesca Natasha T. Tizon, Emily Claire Trautner, Stephen Weiner, Alice Wilson, Kimberly Woo, Bingling J Wu, Johnny Wu, Brenda Yee. Statistical expertise was provided by Joan Hilton from the UCSF Clinical and Translational Science Institute (CTSI), which is supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through UCSF-CTSI Grant Number UL1 TR000004. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. Thanks also to Oralia Schatzman, Andrea Mazzini, and Erika Huie for their administrative support, and John Hillman for data-related support. Special thanks to Kirsten Kangelaris and Andrew Auerbach for their valuable feedback throughout, and to Maria Novelero and Robert Wachter for their divisional support of this project. 

Disclosure

The authors report no financial conflicts of interest.

Patient experience has recently received heightened attention given evidence supporting an association between patient experience and quality of care,1 and the coupling of patient satisfaction to reimbursement rates for Medicare patients.2 Patient experience is often assessed through surveys of patient satisfaction, which correlates with patient perceptions of nurse and physician communication.3 Teaching hospitals introduce variables that may impact communication, including the involvement of multiple levels of care providers and competing patient care vs. educational priorities. Patients admitted to teaching services express decreased satisfaction with coordination and overall care compared with patients on nonteaching services.4

Clinical supervision of trainees on teaching services is primarily achieved through attending rounds (AR), where patients’ clinical presentations and management are discussed with an attending physician. Poor communication during AR may negatively affect the patient experience through inefficient care coordination among the inter-professional care team or through implementation of interventions without patients’ knowledge or input.5-11 Although patient engagement in rounds has been associated with higher patient satisfaction with rounds,12-19 AR and case presentations often occur at a distance from the patient’s bedside.20,21 Furthermore, AR vary in the time allotted per patient and the extent of participation of nurses and other allied health professionals. Standardized bedside rounding processes have been shown to improve efficiency, decrease daily resident work hours,22 and improve nurse-physician teamwork.23

Despite these benefits, recent prospective studies of bedside AR interventions have not improved patient satisfaction with rounds. One involved the implementation of interprofessional patient-centered bedside rounds on a nonteaching service,24 while the other evaluated the impact of integrating athletic principles into multidisciplinary work rounds.25 Work at our institution had sought to develop AR practice recommendations to foster an optimal patient experience, while maintaining provider workflow efficiency, facilitating interdisciplinary communication, and advancing trainee education.26 Using these AR recommendations, we conducted a prospective randomized controlled trial to evaluate the impact of implementing a standardized bedside AR model on patient satisfaction with rounds. We also assessed attending physician and trainee satisfaction with rounds, and perceived and actual AR duration.

METHODS

Setting and Participants

This trial was conducted on the internal medicine teaching service of the University of California San Francisco Medical Center from September 3, 2013 to November 27, 2013. The service is comprised of 8 teams, with a total average daily census of 80 to 90 patients. Teams are comprised of an attending physician, a senior resident (in the second or third year of residency training), 2 interns, and a third- and/or fourth-year medical student.

 

 

This trial, which was approved by the University of California, San Francisco Committee on Human Research (UCSF CHR) and was registered with ClinicalTrials.gov (NCT01931553), was classified under Quality Improvement and did not require informed consent of patients or providers.

Intervention Description

We conducted a cluster randomized trial to evaluate the impact of a bundled set of 5 AR practice recommendations, adapted from published work,26 on patient experience, as well as on attending and trainee satisfaction: 1) huddling to establish the rounding schedule and priorities; 2) conducting bedside rounds; 3) integrating bedside nurses; 4) completing real-time order entry using bedside computers; 5) updating the whiteboard in each patient’s room with care plan information.

At the beginning of each month, study investigators (Nader Najafi and Bradley Monash) led a 1.5-hour workshop to train attending physicians and trainees allocated to the intervention arm on the recommended AR practices. Participants also received informational handouts to be referenced during AR. Attending physicians and trainees randomized to the control arm continued usual rounding practices. Control teams were notified that there would be observers on rounds but were not informed of the study aims.

Randomization and Team Assignments

The medicine service was divided into 2 arms, each comprised of 4 teams. Using a coin flip, Cluster 1 (Teams A, B, C and D) was randomized to the intervention, and Cluster 2 (Teams E, F, G and H) was randomized to the control. This design was pragmatically chosen to ensure that 1 team from each arm would admit patients daily. Allocation concealment of attending physicians and trainees was not possible given the nature of the intervention. Patients were blinded to study arm allocation.

MEASURES AND OUTCOMES

Adherence to Practice Recommendations

Thirty premedical students served as volunteer AR auditors. Each auditor received orientation and training in data collection techniques during a single 2-hour workshop. The auditors, blinded to study arm allocation, independently observed morning AR during weekdays and recorded the completion of the following elements as a dichotomous (yes/no) outcome: pre-rounds huddle, participation of nurse in AR, real-time order entry, and whiteboard use. They recorded the duration of AR per day for each team (minutes) and the rounding model for each patient rounding encounter during AR (bedside, hallway, or card flip).23 Bedside rounds were defined as presentation and discussion of the patient care plan in the presence of the patient. Hallway rounds were defined as presentation and discussion of the patient care plan partially outside the patient’s room and partially in the presence of the patient. Card-flip rounds were defined as presentation and discussion of the patient care plan entirely outside of the patient’s room without the team seeing the patient together. Two auditors simultaneously observed a random subset of patient-rounding encounters to evaluate inter-rater reliability, and the concordance between auditor observations was good (Pearson correlation = 0.66).27

Patient-Related Outcomes

The primary outcome was patient satisfaction with AR, assessed using a survey adapted from published work.12,14,28,29 Patients were approached to complete the questionnaire after they had experienced at least 1 AR. Patients were excluded if they were non-English-speaking, unavailable (eg, off the unit for testing or treatment), in isolation, or had impaired mental status. For patients admitted multiple times during the study period, only the first questionnaire was used. Survey questions included patient involvement in decision-making, quality of communication between patient and medicine team, and the perception that the medicine team cared about the patient. Patients were asked to state their level of agreement with each item on a 5-point Likert scale. We obtained data on patient demographics from administrative datasets.

Healthcare Provider Outcomes

Attending physicians and trainees on service for at least 7 consecutive days were sent an electronic survey, adapted from published work.25,30 Questions assessed satisfaction with AR, perceived value of bedside rounds, and extent of patient and nursing involvement.Level of agreement with each item was captured on a continuous scale; 0 = strongly disagree to 100 = strongly agree, or from 0 (far too little) to 100 (far too much), with 50 equating to “about right.” Attending physicians and trainees were also asked to estimate the average duration of AR (in minutes).

Statistical Analyses

Analyses were blinded to study arm allocation and followed intention-to-treat principles. One attending physician crossed over from intervention to control arm; patient surveys associated with this attending (n = 4) were excluded to avoid contamination. No trainees crossed over.

Demographic and clinical characteristics of patients who completed the survey are reported (Appendix). To compare patient satisfaction scores, we used a random-effects regression model to account for correlation among responses within teams within randomized clusters, defining teams by attending physician. As this correlation was negligible and not statistically significant, we did not adjust ordinary linear regression models for clustering. Given observed differences in patient characteristics, we adjusted for a number of covariates (eg, age, gender, insurance payer, race, marital status, trial group arm).

We conducted simple linear regression for attending and trainee satisfaction comparisons between arms, adjusting only for trainee type (eg, resident, intern, and medical student).

We compared the frequency with which intervention and control teams adhered to the 5 recommended AR practices using chi-square tests. We used independent Student’s t tests to compare total duration of AR by teams within each arm, as well as mean time spent per patient.

This trial had a fixed number of arms (n = 2), each of fixed size (n = 600), based on the average monthly inpatient census on the medicine service. This fixed sample size, with 80% power and α = 0.05, will be able to detect a 0.16 difference in patient satisfaction scores between groups.

All analyses were conducted using SAS® v 9.4 (SAS Institute, Inc., Cary, NC).

 

 

RESULTS

We observed 241 AR involving 1855 patient rounding encounters in the intervention arm and 264 AR involving 1903 patient rounding encounters in the control arm (response rates shown in Figure 1).

Study flow diagram
Figure 1
Intervention teams adopted each of the recommended AR practices at significantly higher rates compared to control teams, with the largest difference occurring for AR occurring at the bedside (52.9% vs. 5.4%; Figure 2).
Prevalence of recommended rounding practices
Figure 2
Teams in the intervention arm demonstrated highest adherence to the pre-rounds huddle (78.1%) and lowest adherence to whiteboard use (29.9%).

Patient Satisfaction and Clinical Outcomes

Five hundred ninety-five patients were allocated to the intervention arm and 605 were allocated to the control arm (Figure 1). Mean age, gender, race, marital status, primary language, and insurance provider did not differ between intervention and control arms (Table 1).

Hospitalized Patient Characteristics by Intervention and Control Arms
Table 1
One hundred forty-six (24.5%) and 141 (23.3%) patients completed surveys in the intervention and control arms, respectively. Patients who completed surveys in each arm were younger and more likely to have commercial insurance (Appendix).

Patients in the intervention arm reported significantly higher satisfaction with AR and felt more cared for by their medicine team (Table 2).
Patient, Attending, and Trainee Satisfaction by Randomized Arm
Table 2
Patient-perceived quality of communication and shared decision-making did not differ between arms.

Actual and Perceived Duration of Attending Rounds

The intervention shortened the total duration of AR by 8 minutes on average (143 vs. 151 minutes, P = 0.052) and the time spent per patient by 4 minutes on average (19 vs. 23 minutes, P < 0.001). Despite this, trainees in the intervention arm perceived AR to last longer (mean estimated time: 167 min vs. 152 min, P < 0.001).

Healthcare Provider Outcomes

We observed 79 attending physicians and trainees in the intervention arm and 78 in the control arm, with survey response rates shown in Figure 1. Attending physicians in the intervention and the control arms reported high levels of satisfaction with the quality of AR (Table 2). Attending physicians in the intervention arm were more likely to report an appropriate level of patient involvement and nurse involvement.

Although trainees in the intervention and control arms reported high levels of satisfaction with the quality of AR, trainees in the intervention arm reported lower satisfaction with AR compared with control arm trainees (Table 2). Trainees in the intervention arm reported that AR involved less autonomy, efficiency, and teaching. Trainees in the intervention arm also scored patient involvement more towards the “far too much” end of the scale compared with “about right” in the control arm. However, trainees in the intervention arm perceived nurse involvement closer to “about right,” as opposed to “far too little” in the control arm.

CONCLUSION/DISCUSSION

Training internal medicine teams to adhere to 5 recommended AR practices increased patient satisfaction with AR and the perception that patients were more cared for by their medicine team. Despite the intervention potentially shortening the duration of AR, attending physicians and trainees perceived AR to last longer, and trainee satisfaction with AR decreased.

Teams in the intervention arm adhered to all recommended rounding practices at higher rates than the control teams. Although intervention teams rounded at the bedside 53% of the time, they were encouraged to bedside round only on patients who desired to participate in rounds, were not altered, and for whom the clinical discussion was not too sensitive to occur at the bedside. Of the recommended rounding behaviors, the lowest adherence was seen with whiteboard use.

A major component of the intervention was to move the clinical presentation to the patient’s bedside. Most patients prefer being included in rounds and partaking in trainee education.12-19,28,29,31-33 Patients may also perceive that more time is spent with them during bedside case presentations,14,28 and exposure to providers conferring on their care may enhance patient confidence in the care being delivered.12 Although a recent study of patient-centered bedside rounding on a nonteaching service did not result in increased patient satisfaction,24 teaching services may offer more opportunities for improvement in care coordination and communication.4

Other aspects of the intervention may have contributed to increased patient satisfaction with AR. The pre-rounds huddle may have helped teams prioritize which patients required more time or would benefit most from bedside rounds. The involvement of nurses in AR may have bolstered communication and team dynamics, enhancing the patient’s perception of interprofessional collaboration. Real-time order entry might have led to more efficient implementation of the care plan, and whiteboard use may have helped to keep patients abreast of the care plan.

Patients in the intervention arm felt more cared for by their medicine teams but did not report improvements in communication or in shared decision-making. Prior work highlights that limited patient engagement, activation, and shared decision-making may occur during AR.24,34 Patient-physician communication during AR is challenged by time pressures and competing priorities, including the “need” for trainees to demonstrate their medical knowledge and clinical skills. Efforts that encourage bedside rounding should include communication training with respect to patient engagement and shared decision-making.

Attending physicians reported positive attitudes toward bedside rounding, consistent with prior studies.13,21,31 However, trainees in the intervention arm expressed decreased satisfaction with AR, estimating that AR took longer and reporting too much patient involvement. Prior studies reflect similar bedside-rounding concerns, including perceived workflow inefficiencies, infringement on teaching opportunities, and time constraints.12,20,35 Trainees are under intense time pressures to complete their work, attend educational conferences, and leave the hospital to attend afternoon clinic or to comply with duty-hour restrictions. Trainees value succinctness,12,35,36 so the perception that intervention AR lasted longer likely contributed to trainee dissatisfaction.

Reduced trainee satisfaction with intervention AR may have also stemmed from the perception of decreased autonomy and less teaching, both valued by trainees.20,35,36 The intervention itself reduced trainee autonomy because usual practice at our hospital involves residents deciding where and how to round. Attending physician presence at the bedside during rounds may have further infringed on trainee autonomy if the patient looked to the attending for answers, or if the attending was seen as the AR leader. Attending physicians may mitigate the risk of compromising trainee autonomy by allowing the trainee to speak first, ensuring the trainee is positioned closer to, and at eye level with, the patient, and redirecting patient questions to the trainee as appropriate. Optimizing trainee experience with bedside AR requires preparation and training of attending physicians, who may feel inadequately prepared to lead bedside rounds and conduct bedside teaching.37 Faculty must learn how to preserve team efficiency, create a safe, nonpunitive bedside environment that fosters the trainee-patient relationship, and ensure rounds remain educational.36,38,39

The intervention reduced the average time spent on AR and time spent per patient. Studies examining the relationship between bedside rounding and duration of rounds have yielded mixed results: some have demonstrated no effect of bedside rounds on rounding time,28,40 while others report longer rounding times.37 The pre-rounds huddle and real-time order writing may have enhanced workflow efficiency.

Our study has several limitations. These results reflect the experience of a single large academic medical center and may not be generalizable to other settings. Although overall patient response to the survey was low and may not be representative of the entire patient population, response rates in the intervention and control arms were equivalent. Non-English speaking patients may have preferences that were not reflected in our survey results, and we did not otherwise quantify individual reasons for survey noncompletion. The presence of auditors on AR may have introduced observer bias. There may have been crossover effect; however, observed prevalence of individual practices remained low in the control arm. The 1.5-hour workshop may have inadequately equipped trainees with the complex skills required to lead and participate in bedside rounding, and more training, experience, and feedback may have yielded different results. For instance, residents with more exposure to bedside rounding express greater appreciation of its role in education and patient care.20 While adherence to some of the recommended practices remained low, we did not employ a full range of change-management techniques. Instead, we opted for a “low intensity” intervention (eg, single workshop, handouts) that relied on voluntary adoption by medicine teams and that we hoped other institutions could reproduce. Finally, we did not assess the relative impact of individual rounding behaviors on the measured outcomes.

In conclusion, training medicine teams to adhere to a standardized bedside AR model increased patient satisfaction with rounds. Concomitant trainee dissatisfaction may require further experience and training of attending physicians and trainees to ensure successful adoption.

Acknowledgements

 

 

We would like to thank all patients, providers, and trainees who participated in this study. We would also like to acknowledge the following volunteer auditors who observed teams daily: Arianna Abundo, Elahhe Afkhamnejad, Yolanda Banuelos, Laila Fozoun, Soe Yupar Khin, Tam Thien Le, Wing Sum Li, Yaqiao Li, Mengyao Liu, Tzyy-Harn Lo, Shynh-Herng Lo, David Lowe, Danoush Paborji, Sa Nan Park, Urmila Powale, Redha Fouad Qabazard, Monique Quiroz, John-Luke Marcelo Rivera, Manfred Roy Luna Salvador, Tobias Gowen Squier-Roper, Flora Yan Ting, Francesca Natasha T. Tizon, Emily Claire Trautner, Stephen Weiner, Alice Wilson, Kimberly Woo, Bingling J Wu, Johnny Wu, Brenda Yee. Statistical expertise was provided by Joan Hilton from the UCSF Clinical and Translational Science Institute (CTSI), which is supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through UCSF-CTSI Grant Number UL1 TR000004. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. Thanks also to Oralia Schatzman, Andrea Mazzini, and Erika Huie for their administrative support, and John Hillman for data-related support. Special thanks to Kirsten Kangelaris and Andrew Auerbach for their valuable feedback throughout, and to Maria Novelero and Robert Wachter for their divisional support of this project. 

Disclosure

The authors report no financial conflicts of interest.

References

1. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1):1-18. PubMed
2. Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) Fact Sheet. August 2013. Centers for Medicare and Medicaid Services (CMS). Baltimore, MD. http://www.hcahpsonline.org/files/August_2013_HCAHPS_Fact_Sheet3.pdf. Accessed December 1, 2015.
3. Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17:41-48. PubMed
4. Wray CM, Flores A, Padula WV, Prochaska MT, Meltzer DO, Arora VM. Measuring patient experiences on hospitalist and teaching services: Patient responses to a 30-day postdischarge questionnaire. J Hosp Med. 2016;11(2):99-104. PubMed
5. Bharwani AM, Harris GC, Southwick FS. Perspective: A business school view of medical interprofessional rounds: transforming rounding groups into rounding teams. Acad Med. 2012;87(12):1768-1771. PubMed
6. Chand DV. Observational study using the tools of lean six sigma to improve the efficiency of the resident rounding process. J Grad Med Educ. 2011;3(2):144-150. PubMed
7. Stickrath C, Noble M, Prochazka A, et al. Attending rounds in the current era: what is and is not happening. JAMA Intern Med. 2013;173(12):1084-1089. PubMed
8. Weber H, Stöckli M, Nübling M, Langewitz WA. Communication during ward rounds in internal medicine. An analysis of patient-nurse-physician interactions using RIAS. Patient Educ Couns. 2007;67(3):343-348. PubMed
9. McMahon GT, Katz JT, Thorndike ME, Levy BD, Loscalzo J. Evaluation of a redesign initiative in an internal-medicine residency. N Engl J Med. 2010;362(14):1304-1311. PubMed
10. Amoss J. Attending rounds: where do we go from here?: comment on “Attending rounds in the current era”. JAMA Intern Med. 2013;173(12):1089-1090. PubMed
11. Curley C, McEachern JE, Speroff T. A firm trial of interdisciplinary rounds on the inpatient medical wards: an intervention designed using continuous quality improvement. Med Care. 1998;36(suppl 8):AS4-A12. PubMed
12. Wang-Cheng RM, Barnas GP, Sigmann P, Riendl PA, Young MJ. Bedside case presentations: why patients like them but learners don’t. J Gen Intern Med. 1989;4(4):284-287. PubMed
13. Chauke HL, Pattinson RC. Ward rounds—bedside or conference room? S Afr Med J. 2006;96(5):398-400. PubMed
14. Lehmann LS, Brancati FL, Chen MC, Roter D, Dobs AS. The effect of bedside case presentations on patients’ perceptions of their medical care. N Engl J Med. 1997;336(16):1150-1155. PubMed
15. Simons RJ, Baily RG, Zelis R, Zwillich CW. The physiologic and psychological effects of the bedside presentation. N Engl J Med. 1989;321(18):1273-1275. PubMed
16. Wise TN, Feldheim D, Mann LS, Boyle E, Rustgi VK. Patients’ reactions to house staff work rounds. Psychosomatics. 1985;26(8):669-672. PubMed
17. Linfors EW, Neelon FA. Sounding Boards. The case for bedside rounds. N Engl J Med. 1980;303(21):1230-1233. PubMed
18. Nair BR, Coughlan JL, Hensley MJ. Student and patient perspectives on bedside teaching. Med Educ. 1997;31(5):341-346. PubMed
19. Romano J. Patients’ attitudes and behavior in ward round teaching. JAMA. 1941;117(9):664-667.
20. Gonzalo JD, Masters PA, Simons RJ, Chuang CH. Attending rounds and bedside case presentations: medical student and medicine resident experiences and attitudes. Teach Learn Med. 2009;21(2):105-110. PubMed
21. Shoeb M, Khanna R, Fang M, et al. Internal medicine rounding practices and the Accreditation Council for Graduate Medical Education core competencies. J Hosp Med. 2014;9(4):239-243. PubMed
22. Calderon AS, Blackmore CC, Williams BL, et al. Transforming ward rounds through rounding-in-flow. J Grad Med Educ. 2014;6(4):750-755. PubMed
23. Henkin S, Chon TY, Christopherson ML, Halvorsen AJ, Worden LM, Ratelle JT. Improving nurse-physician teamwork through interprofessional bedside rounding. J Multidiscip Healthc. 2016;9:201-205. PubMed
24. O’Leary KJ, Killarney A, Hansen LO, et al. Effect of patient-centred bedside rounds on hospitalised patients’ decision control, activation and satisfaction with care. BMJ Qual Saf. 2016;25:921-928. PubMed
25. Southwick F, Lewis M, Treloar D, et al. Applying athletic principles to medical rounds to improve teaching and patient care. Acad Med. 2014;89(7):1018-1023. PubMed
26. Najafi N, Monash B, Mourad M, et al. Improving attending rounds: Qualitative reflections from multidisciplinary providers. Hosp Pract (1995). 2015;43(3):186-190. PubMed
27. Altman DG. Practical Statistics For Medical Research. Boca Raton, FL: Chapman & Hall/CRC; 2006.
28. Gonzalo JD, Chuang CH, Huang G, Smith C. The return of bedside rounds: an educational intervention. J Gen Intern Med. 2010;25(8):792-798. PubMed
29. Fletcher KE, Rankey DS, Stern DT. Bedside interactions from the other side of the bedrail. J Gen Intern Med. 2005;20(1):58-61. PubMed
30. Gatorounds: Applying Championship Athletic Principles to Healthcare. University of Florida Health. http://gatorounds.med.ufl.edu/surveys/. Accessed March 1, 2013.
31. Gonzalo JD, Heist BS, Duffy BL, et al. The value of bedside rounds: a multicenter qualitative study. Teach Learn Med. 2013;25(4):326-333. PubMed
32. Rogers HD, Carline JD, Paauw DS. Examination room presentations in general internal medicine clinic: patients’ and students’ perceptions. Acad Med. 2003;78(9):945-949. PubMed
33. Fletcher KE, Furney SL, Stern DT. Patients speak: what’s really important about bedside interactions with physician teams. Teach Learn Med. 2007;19(2):120-127. PubMed
34. Satterfield JM, Bereknyei S, Hilton JF, et al. The prevalence of social and behavioral topics and related educational opportunities during attending rounds. Acad Med. 2014;89(11):1548-1557. PubMed
35. Kroenke K, Simmons JO, Copley JB, Smith C. Attending rounds: a survey of physician attitudes. J Gen Intern Med. 1990;5(3):229-233. PubMed
36. Castiglioni A, Shewchuk RM, Willett LL, Heudebert GR, Centor RM. A pilot study using nominal group technique to assess residents’ perceptions of successful attending rounds. J Gen Intern Med. 2008;23(7):1060-1065. PubMed
37. Crumlish CM, Yialamas MA, McMahon GT. Quantification of bedside teaching by an academic hospitalist group. J Hosp Med. 2009;4(5):304-307. PubMed
38. Gonzalo JD, Wolpaw DR, Lehman E, Chuang CH. Patient-centered interprofessional collaborative care: factors associated with bedside interprofessional rounds. J Gen Intern Med. 2014;29(7):1040-1047. PubMed
39. Roy B, Castiglioni A, Kraemer RR, et al. Using cognitive mapping to define key domains for successful attending rounds. J Gen Intern Med. 2012;27(11):1492-1498. PubMed
40. Bhansali P, Birch S, Campbell JK, et al. A time-motion study of inpatient rounds using a family-centered rounds model. Hosp Pediatr. 2013;3(1):31-38. PubMed

Issue
Journal of Hospital Medicine - 12(3)
Page Number
143-149
Display Headline
Standardized attending rounds to improve the patient experience: A pragmatic cluster randomized controlled trial
Article Source
© 2017 Society of Hospital Medicine
Correspondence Location
Address for correspondence: 533 Parnassus Avenue, Box 0131, San Francisco, CA 94143; Telephone: 415-476-5928; Fax: 415-502-1963; E-mail: bradley.monash@ucsf.edu

SCHOLAR Project

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Features of successful academic hospitalist programs: Insights from the SCHOLAR (SuCcessful HOspitaLists in academics and research) project

The structure and function of academic hospital medicine programs (AHPs) have evolved significantly with the growth of hospital medicine.[1, 2, 3, 4] Many AHPs formed in response to regulatory and financial changes, which drove demand for increased trainee oversight, improved clinical efficiency, and growth in nonteaching services staffed by hospitalists. Differences in local organizational contexts and needs have contributed to great variability in AHP program design and operations. As AHPs have become more established, the need to engage academic hospitalists in scholarship and activities that support professional development and promotion has been recognized. Defining sustainable and successful positions for academic hospitalists is a priority called for by leaders in the field.[5, 6]

In this rapidly evolving context, AHPs have employed a variety of approaches to organizing clinical and academic faculty roles, without guiding evidence or consensus‐based performance benchmarks. A number of AHPs have achieved success along traditional academic metrics of research, scholarship, and education. Currently, it is not known whether specific approaches to AHP organization, structure, or definition of faculty roles are associated with achievement of more traditional markers of academic success.

The Academic Committee of the Society of Hospital Medicine (SHM), and the Academic Hospitalist Task Force of the Society of General Internal Medicine (SGIM) had separately initiated projects to explore characteristics associated with success in AHPs. In 2012, these organizations combined efforts to jointly develop and implement the SCHOLAR (SuCcessful HOspitaLists in Academics and Research) project. The goals were to identify successful AHPs using objective criteria, and to then study those groups in greater detail to generate insights that would be broadly relevant to the field. Efforts to clarify the factors within AHPs linked to success by traditional academic metrics will benefit hospitalists, their leaders, and key stakeholders striving to achieve optimal balance between clinical and academic roles. We describe the initial work of the SCHOLAR project, our definitions of academic success in AHPs, and the characteristics of a cohort of exemplary AHPs who achieved the highest levels on these metrics.

METHODS

Defining Success

The 11 members of the SCHOLAR project held a variety of clinical and academic roles within a geographically diverse group of AHPs. We sought to create a functional definition of success applicable to AHPs. As no gold standard currently exists, we used a consensus process among task force members to arrive at a definition that was quantifiable, feasible, and meaningful. The first step was brainstorming on conference calls held 1 to 2 times monthly over 4 months. Potential defining characteristics that emerged from these discussions related to research, teaching, and administrative activities. When potential characteristics were proposed, we considered how to operationalize each one. Each characteristic was discussed until there was consensus from the entire group. Those around education and administration were the most complex, as many roles are locally driven and defined, and challenging to quantify. For this reason, we focused on promotion as a more global approach to assessing academic hospitalist success in these areas. Although criteria for academic advancement also vary across institutions, we felt that promotion generally reflected having met some threshold of academic success. We also wanted to recognize that scholarship occurs outside the context of funded research. Ultimately, 3 key domains emerged: research grant funding, faculty promotion, and scholarship.

After these 3 domains were identified, the group sought to define quantitative metrics to assess performance. These discussions occurred on subsequent calls over a 4‐month period. Between calls, group members gathered additional information to facilitate assessment of the feasibility of proposed metrics, reporting on progress via email. Again, group consensus was sought for each metric considered. Data on grant funding and successful promotions were available from a previous survey conducted through the SHM in 2011. Leaders from 170 AHPs were contacted, with 50 providing complete responses to the 21‐item questionnaire (see Supporting Information, Appendix 1, in the online version of this article). Results of the survey, heretofore referred to as the Leaders of Academic Hospitalist Programs survey (LAHP‐50), have been described elsewhere.[7] For the purposes of this study, we used the self‐reported data about grant funding and promotions contained in the survey to reflect the current state of the field. Although the survey response rate was approximately 30%, the survey was not anonymous, and many reputationally prominent academic hospitalist programs were represented. For these reasons, the group members felt that the survey results were relevant for the purposes of assessing academic success.

In the LAHP‐50, funding was defined as principal investigator or coinvestigator roles on federally and nonfederally funded research, clinical trials, internal grants, and any other extramurally funded projects. Mean and median funding for the overall sample was calculated. Through a separate question, each program's total faculty full‐time equivalent (FTE) count was reported, allowing us to adjust for group size by assessing both total funding per group and funding/FTE for each responding AHP.

Promotions were defined by the self‐reported number of faculty at each of the following ranks: instructor, assistant professor, associate professor, full professor, and professor above scale/emeritus. In addition, a category of nonacademic track (eg, adjunct faculty, clinical associate) was included to capture hospitalists that did not fit into the traditional promotions categories. We did not distinguish between tenure‐track and nontenure‐track academic ranks. LAHP‐50 survey respondents reported the number of faculty in their group at each academic rank. Given that the majority of academic hospitalists hold a rank of assistant professor or lower,[6, 8, 9] and that the number of full professors was only 3% in the LAHP‐50 cohort, we combined the faculty at the associate and full professor ranks, defining successfully promoted faculty as the percent of hospitalists above the rank of assistant professor.
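
As a concrete illustration of the two LAHP-50-derived metrics described above (funding per FTE and the percent of faculty above the rank of assistant professor), the following Python sketch computes both for a single program. The record layout and field names (total_grant_dollars, fte, rank_counts) are hypothetical and are shown only for illustration; this is not the authors' analysis code.

```python
# Minimal sketch (not the authors' code): the two survey-derived metrics.
# Field names and rank labels are illustrative assumptions.
SENIOR_RANKS = {"associate professor", "full professor", "professor above scale/emeritus"}

def funding_per_fte(program: dict) -> float:
    """Total grant dollars divided by total faculty FTE for one program."""
    return program["total_grant_dollars"] / program["fte"]

def percent_senior_faculty(program: dict) -> float:
    """Percent of faculty above the rank of assistant professor."""
    counts = program["rank_counts"]  # e.g. {"assistant professor": 20, ...}
    total = sum(counts.values())
    senior = sum(n for rank, n in counts.items() if rank in SENIOR_RANKS)
    return 100.0 * senior / total if total else 0.0

example = {
    "total_grant_dollars": 3_000_000,
    "fte": 30,
    "rank_counts": {"instructor": 2, "assistant professor": 20,
                    "associate professor": 6, "full professor": 2},
}
print(funding_per_fte(example))        # 100000.0 dollars per FTE
print(percent_senior_faculty(example)) # 26.7 percent senior faculty
```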

We created a new metric to assess scholarly output. We had considerable discussion of ways to assess the numbers of peer‐reviewed manuscripts generated by AHPs. However, the group had concerns about the feasibility of identification and attribution of authors to specific AHPs through literature searches. We considered examining only publications in the Journal of Hospital Medicine and the Journal of General Internal Medicine, but felt that this would exclude significant work published by hospitalists in fields of medical education or health services research that would more likely appear in alternate journals. Instead, we quantified scholarship based on the number of abstracts presented at national meetings. We focused on meetings of the SHM and SGIM as the primary professional societies representing hospital medicine. The group felt that even work published outside of the journals of our professional societies would likely be presented at those meetings. We used the following strategy: We reviewed research abstracts accepted for presentation as posters or oral abstracts at the 2010 and 2011 SHM national meetings, and research abstracts with a primary or secondary category of hospital medicine at the 2010 and 2011 SGIM national meetings. By including submissions at both SGIM and SHM meetings, we accounted for the fact that some programs may gravitate more to one society meeting or another. We did not include abstracts in the clinical vignettes or innovations categories. We tallied the number of abstracts by group affiliation of the authors for each of the 4 meetings above and created a cumulative total per group for the 2‐year period. Abstracts with authors from different AHPs were counted once for each individual group. Members of the study group reviewed abstracts from each of the meetings in pairs. Reviewers worked separately and compared tallies of results to ensure consistent tabulations. Internet searches were conducted to identify or confirm author affiliations if it was not apparent in the abstract author list. Abstract tallies were compiled without regard to whether programs had completed the LAHP‐50 survey; thus, we collected data on programs that did not respond to the LAHP‐50 survey.
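
The tallying rule described above (count each abstract once for every distinct affiliated program, summed across the 4 meetings) can be sketched as follows. The data shape and the "programs" key are assumptions for illustration, not the SCHOLAR project's actual procedure, which relied on manual paired review.

```python
# Minimal sketch under assumed data shapes: tally research abstracts per
# program across meetings, counting an abstract once for each distinct
# program represented among its authors.
from collections import Counter

def tally_abstracts(meetings: list[list[dict]]) -> Counter:
    """meetings: one list of abstracts per meeting; each abstract carries a
    set of affiliated program names under the hypothetical key 'programs'."""
    totals: Counter = Counter()
    for abstracts in meetings:
        for abstract in abstracts:
            for program in set(abstract["programs"]):  # once per program per abstract
                totals[program] += 1
    return totals

shm_2010 = [{"programs": {"Program A", "Program B"}}, {"programs": {"Program A"}}]
sgim_2010 = [{"programs": {"Program B"}}]
print(tally_abstracts([shm_2010, sgim_2010]))  # Program A: 2, Program B: 2
```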

Identification of the SCHOLAR Cohort

To identify our cohort of top‐performing AHPs, we combined the funding and promotions data from the LAHP‐50 sample with the abstract data. We limited our sample to adult hospital medicine groups to reduce heterogeneity. We created rank lists of programs in each category (grant funding, successful promotions, and scholarship), using data from the LAHP‐50 survey to rank programs on funding and promotions, and data from our abstract counts to rank on scholarship. We limited the top‐performing list in each category to 10 institutions as a cutoff. Because we set a threshold of at least $1 million in total funding, we identified only 9 top performing AHPs with regard to grant funding. We also calculated mean funding/FTE. We chose to rank programs only by funding/FTE rather than total funding per program to better account for group size. For successful promotions, we ranked programs by the percentage of senior faculty. For abstract counts, we included programs whose faculty presented abstracts at a minimum of 2 separate meetings, and ranked programs based on the total number of abstracts per group.
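
The selection logic described in this section can be summarized in a short sketch: build three top-10 rank lists (funding per FTE among programs with at least $1 million in total funding, percent of senior faculty, and abstract counts among programs presenting at 2 or more meetings) and take the union of programs appearing on any list. Field names here are illustrative assumptions rather than the study's actual data structures.

```python
# Minimal sketch of the cohort-selection logic (illustrative field names;
# not the authors' code). Each program is a dict with metrics precomputed.
def top_by_funding(programs, n=10):
    eligible = [p for p in programs if p["total_grant_dollars"] >= 1_000_000]
    return sorted(eligible, key=lambda p: p["funding_per_fte"], reverse=True)[:n]

def top_by_promotion(programs, n=10):
    return sorted(programs, key=lambda p: p["pct_senior_faculty"], reverse=True)[:n]

def top_by_scholarship(programs, n=10):
    eligible = [p for p in programs if p["n_meetings_with_abstracts"] >= 2]
    return sorted(eligible, key=lambda p: p["abstract_count"], reverse=True)[:n]

def scholar_cohort(programs):
    """Union of program names across the three top-10 lists."""
    names = set()
    for rank_list in (top_by_funding(programs),
                      top_by_promotion(programs),
                      top_by_scholarship(programs)):
        names.update(p["name"] for p in rank_list)
    return names
```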

This process resulted in separate lists of top performing programs in each of the 3 domains we associated with academic success, arranged in descending order by grant dollars/FTE, percent of senior faculty, and abstract counts (Table 1). Seventeen different programs were represented across these 3 top 10 lists. One program appeared on all 3 lists, 8 programs appeared on 2 lists, and the remainder appeared on a single list (Table 2). Seven of these programs were identified solely based on abstract presentations, diversifying our top groups beyond only those who completed the LAHP‐50 survey. We considered all of these programs to represent high performance in academic hospital medicine. The group selected this inclusive approach because we recognized that any 1 metric was potentially limited, and we sought to identify diverse pathways to success.

Performance Among the Top Programs on Each of the Domains of Academic Success
Funding Promotions Scholarship
Grant $/FTE Total Grant $ Senior Faculty, No. (%) Total Abstract Count
  • NOTE: Funding is defined as mean grant dollars per FTE and total grant dollars per program; only programs with at least $1 million in total funding were included. Senior faculty are defined as all faculty above the rank of assistant professor. Abstract counts are the total number of research abstracts by members affiliated with the individual academic hospital medicine program accepted at the Society of Hospital Medicine and Society of General Internal Medicine national meetings in 2010 and 2011. Each column represents a separate ranked list; values across rows are independent and do not necessarily represent the same programs horizontally. Abbreviations: FTE = full-time equivalent.

$1,409,090 $15,500,000 3 (60%) 23
$1,000,000 $9,000,000 3 (60%) 21
$750,000 $8,000,000 4 (57%) 20
$478,609 $6,700,535 9 (53%) 15
$347,826 $3,000,000 8 (44%) 11
$86,956 $3,000,000 14 (41%) 11
$66,666 $2,000,000 17 (36%) 10
$46,153 $1,500,000 9 (33%) 10
$38,461 $1,000,000 2 (33%) 9
4 (31%) 9
Qualifying Characteristics for Programs Represented in the SCHOLAR Cohort
Selection Criteria for SCHOLAR Cohort No. of Programs
  • NOTE: Programs were selected by appearing on 1 or more rank lists of top performing academic hospital medicine programs with regard to the number of abstracts presented at 4 different national meetings, the percent of senior faculty, or the amount of grant funding. Further details appear in the text. Abbreviations: SCHOLAR, SuCcessful HOspitaLists in Academics and Research.

Abstracts, funding, and promotions 1
Abstracts plus promotions 4
Abstracts plus funding 3
Funding plus promotion 1
Funding only 1
Abstract only 7
Total 17
Top 10 abstract count
4 meetings 2
3 meetings 2
2 meetings 6

The 17 unique adult AHPs appearing on at least 1 of the top 10 lists comprised the SCHOLAR cohort of programs that we studied in greater detail. Data reflecting program demographics were solicited directly from leaders of the AHPs identified in the SCHOLAR cohort, including size and age of program, reporting structure, number of faculty at various academic ranks (for programs that did not complete the LAHP‐50 survey), and number of faculty with fellowship training (defined as any postresidency fellowship program).

Subsequently, we performed comparative analyses between the programs in the SCHOLAR cohort to the general population of AHPs reflected by the LAHP‐50 sample. Because abstract presentations were not recorded in the original LAHP‐50 survey instrument, it was not possible to perform a benchmarking comparison for the scholarship domain.

Data Analysis

To measure the success of the SCHOLAR cohort, we compared the grant funding and proportion of successfully promoted faculty at the SCHOLAR programs to those in the overall LAHP-50 sample. Differences in mean and median grant funding were compared using t tests and Mann-Whitney rank sum tests. Proportions of promoted faculty were compared using χ2 tests. A 2-tailed α of 0.05 was used to test significance of differences.
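
For readers who want to see the form of these comparisons, a minimal sketch using SciPy is shown below with hypothetical numbers; it is not the authors' analysis code, and the results reported in Table 3 and Figure 1 were not generated from it.

```python
# Minimal sketch of the statistical comparisons described above (SciPy,
# hypothetical data only).
from scipy import stats

scholar_funding = [1.5, 3.0, 9.0]   # grant dollars (millions) per program, hypothetical
lahp50_funding = [0.0, 0.06, 1.1]   # hypothetical overall-sample values

# Difference in means: two-sample t test; difference in distributions/medians:
# Mann-Whitney rank sum test.
t_stat, t_p = stats.ttest_ind(scholar_funding, lahp50_funding)
u_stat, u_p = stats.mannwhitneyu(scholar_funding, lahp50_funding, alternative="two-sided")

# Proportion of senior (promoted) faculty: chi-squared test on a 2x2 table of
# [senior, junior] counts for SCHOLAR vs the overall LAHP-50 sample (hypothetical counts).
table = [[110, 505],    # SCHOLAR: senior, junior
         [230, 1570]]   # LAHP-50: senior, junior
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

alpha = 0.05  # 2-tailed significance threshold
print(t_p < alpha, u_p < alpha, chi_p < alpha)
```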

RESULTS

Demographics

Among the AHPs in the SCHOLAR cohort, the mean program age was 13.2 years (range, 6-18 years), and the mean program size was 36 faculty (range, 18-95; median, 28). On average, 15% of faculty members at SCHOLAR programs were fellowship trained (range, 0%-37%). Reporting structure among the SCHOLAR programs was as follows: 53% were an independent division or section of the department of medicine; 29% were a section within general internal medicine; and 18% were an independent clinical group.

Grant Funding

Table 3 compares grant funding in the SCHOLAR programs to programs in the overall LAHP‐50 sample. Mean funding per group and mean funding per FTE were significantly higher in the SCHOLAR group than in the overall sample.

Funding From Grants and Contracts Among Academic Hospitalist Programs in the Overall LAHP‐50 Sample and the SCHOLAR Cohort
Funding (Millions)
LAHP‐50 Overall Sample SCHOLAR
  • NOTE: Abbreviations: AHP = academic hospital medicine program; FTE = full‐time equivalent; LAHP‐50, Leaders of Academic Hospitalist Programs (defined further in the text); SCHOLAR, SuCcessful HOspitaLists in Academics and Research. *P < 0.01.

Median grant funding/AHP 0.060 1.500*
Mean grant funding/AHP 1.147 (0-15) 3.984* (0-15)
Median grant funding/FTE 0.004 0.038*
Mean grant funding/FTE 0.095 (0-1.4) 0.364* (0-1.4)

Thirteen of the SCHOLAR programs were represented in the initial LAHP‐50, but 2 did not report a dollar amount for grants and contracts. Therefore, data for total grant funding were available for only 65% (11 of 17) of the programs in the SCHOLAR cohort. Of note, 28% of AHPs in the overall LAHP‐50 sample reported no external funding sources.

Faculty Promotion

Figure 1 demonstrates the proportion of faculty at various academic ranks. The percent of faculty above the rank of assistant professor in the SCHOLAR programs exceeded that in the overall LAHP-50 sample by approximately 5 percentage points (17.9% vs 12.8%, P = 0.01). Of note, 6% of the hospitalists at AHPs in the SCHOLAR programs were on nonfaculty tracks.

Figure 1
Distribution of faculty academic ranking at academic hospitalist programs in the LAHP‐50 and SCHOLAR cohorts. The percent of senior faculty (defined as associate and full professor) in the SCHOLAR cohort was significantly higher than the LAHP‐50 (P = 0.01). Abbreviations: LAHP‐50, Leaders of Academic Hospitalist Programs; SCHOLAR, SuCcessful HOspitaLists in Academics and Research.

Scholarship

Mean abstract output over the 2-year period measured was 10.8 (range, 3-23) in the SCHOLAR cohort. Because we did not collect these data for the LAHP-50 group, comparative analyses were not possible.

DISCUSSION

Using a definition of academic success that incorporated metrics of grant funding, faculty promotion, and scholarly output, we identified a unique subset of successful AHPs: the SCHOLAR cohort. The programs represented in the SCHOLAR cohort were generally large and relatively mature. Despite this, the cohort consisted of mostly junior faculty, had a paucity of fellowship-trained hospitalists, and not all programs reported grant funding.

Prior published work reported complementary findings.[6, 8, 9] A survey of 20 large, well‐established academic hospitalist programs in 2008 found that the majority of hospitalists were junior faculty with a limited publication portfolio. Of the 266 respondents in that study, 86% reported an academic rank at or below assistant professor; funding was not explored.[9] Our similar findings 4 years later add to this work by demonstrating trends over time, and suggest that progress toward creating successful pathways for academic advancement has been slow. In a 2012 survey of the SHM membership, 28% of hospitalists with academic appointments reported no current or future plans to engage in research.[8] These findings suggest that faculty in AHPs may define scholarship through nontraditional pathways, or in some cases choose not to pursue or prioritize scholarship altogether.

Our findings also add to the literature with regard to our assessment of funding, which was variable across the SCHOLAR group. The broad range of funding in the SCHOLAR programs for which we have data (grant dollars of $0 to $15 million per program) suggests that opportunities to improve supported scholarship remain, even among a selected cohort of successful AHPs. The predominance of junior faculty in the SCHOLAR programs may be a reason for this variation. Junior faculty may be engaged in research with funding directed to senior mentors outside their AHP. Alternatively, they may pursue meaningful local hospital quality improvement or educational innovations not supported by external grants, or hold leadership roles in education, quality, or information technology that allow for advancement and promotion without external grant funding. As the scope and impact of these roles increase, senior leaders with alternate sources of support may rely less on research funds; this too may explain some of the differences. Our findings are congruent with results of a study that reviewed original research published by hospitalists and concluded that the majority of hospitalist research was not externally funded.[8] Our approach for assessing grant funding by adjusting for FTE had the potential to inadvertently favor smaller well-funded groups over larger ones; however, programs in our sample were similarly represented when ranked by funding/FTE or total grant dollars. As many successful AHPs do concentrate their research funding among a core of focused hospitalist researchers, our definition may not be the ideal metric for some programs.

We chose to define scholarship based on abstract output, rather than peer‐reviewed publications. Although this choice was necessary from a feasibility perspective, it may have excluded programs that prioritize peer‐reviewed publications over abstracts. Although we were unable to incorporate a search strategy to accurately and comprehensively track the publication output attributed specifically to hospitalist researchers and quantify it by program, others have since defined such an approach.[8] However, tracking abstracts theoretically allowed insights into a larger volume of innovative and creative work generated by top AHPs by potentially including work in the earlier stages of development.

We used a consensus‐based definition of success to define our SCHOLAR cohort. There are other ways to measure academic success, which if applied, may have yielded a different sample of programs. For example, over half of the original research articles published in the Journal of Hospital Medicine over a 7‐year span were generated from 5 academic centers.[8] This definition of success may be equally credible, though we note that 4 of these 5 programs were also included in the SCHOLAR cohort. We feel our broader approach was more reflective of the variety of pathways to success available to academic hospitalists. Before our metrics are applied as a benchmarking tool, however, they should ideally be combined with factors not measured in our study to ensure a more comprehensive or balanced reflection of academic success. Factors such as mentorship, level of hospitalist engagement,[10] prevalence of leadership opportunities, operational and fiscal infrastructure, and the impact of local quality, safety, and value efforts should be considered.

Comparison of successfully promoted faculty at AHPs across the country is inherently limited by the wide variation in promotion standards across different institutions; controlling for such differences was not possible with our methodology. For example, it appears that several programs with relatively few senior faculty may have met metrics leading to their inclusion in the SCHOLAR group because of their small program size. Future benchmarking efforts for promotion at AHPs should take scaling into account and consider both total number as well as percentage of senior faculty when evaluating success.

Our methodology has several limitations. Survey data were self‐reported and not independently validated, and as such are subject to recall and reporting biases. Response bias inherently excluded some AHPs that may have met our grant funding or promotions criteria had they participated in the initial LAHP‐50 survey, though we identified and included additional programs through our scholarship metric, increasing the representativeness of the SCHOLAR cohort. Given the dynamic nature of the field, the age of the data we relied upon for analysis limits the generalizability of our specific benchmarks to current practice. However, the development of academic success occurs over the long‐term, and published data on academic hospitalist productivity are consistent with this slower time course.[8] Despite these limitations, our data inform the general topic of gauging performance of AHPs, underscoring the challenges of developing and applying metrics of success, and highlight the variability of performance on selected metrics even among a relatively small group of 17 programs.

In conclusion, we have created a method to quantify academic success that may be useful to academic hospitalists and their group leaders as they set targets for improvement in the field. Even among our SCHOLAR cohort, room for ongoing improvement in development of funded scholarship and a core of senior faculty exists. Further investigation into the unique features of successful groups will offer insight to leaders in academic hospital medicine regarding infrastructure and processes that should be embraced to raise the bar for all AHPs. In addition, efforts to further define and validate nontraditional approaches to scholarship that allow for successful promotion at AHPs would be informative. We view our work less as a singular approach to benchmarking standards for AHPs, and more a call to action to continue efforts to balance scholarly activity and broad professional development of academic hospitalists with increasing clinical demands.

Acknowledgements

The authors thank all of the AHP leaders who participated in the SCHOLAR project. They also thank the Society of Hospital Medicine and Society of General Internal Medicine and the SHM Academic Committee and SGIM Academic Hospitalist Task Force for their support of this work.

Disclosures

The work reported here was supported by the Department of Veterans Affairs, Veterans Health Administration, South Texas Veterans Health Care System. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs. The authors report no conflicts of interest.

References
  1. Boonyasai RT, Lin Y-L, Brotman DJ, Kuo Y-F, Goodwin JS. Characteristics of primary care providers who adopted the hospitalist model from 2001 to 2009. J Hosp Med. 2015;10(2):75-82.
  2. Kuo Y-F, Sharma G, Freeman JL, Goodwin JS. Growth in the care of older patients by hospitalists in the United States. N Engl J Med. 2009;360(11):1102-1112.
  3. Lapps J, Flansbaum B, Leykum L, Boswell J, Haines L. Updating threshold-based identification of hospitalists in 2012 Medicare pay data. J Hosp Med. 2016;11(1):45-47.
  4. Pete Welch W, Stearns SC, Cuellar AE, Bindman AB. Use of hospitalists by Medicare beneficiaries: a national picture. Medicare Medicaid Res Rev. 2014;4(2).
  5. Flanders SA, Centor B, Weber V, McGinn T, DeSalvo K, Auerbach A. Challenges and opportunities in Academic Hospital Medicine: report from the Academic Hospital Medicine Summit. J Hosp Med. 2009;4(4):240-246.
  6. Harrison R, Hunter AJ, Sharpe B, Auerbach AD. Survey of US academic hospitalist leaders about mentorship and academic activities in hospitalist groups. J Hosp Med. 2011;6(1):5-9.
  7. Seymann G, Brotman D, Lee B, Jaffer A, Amin A, Glasheen J. The structure of hospital medicine programs at academic medical centers [abstract]. J Hosp Med. 2012;7(suppl 2):s92.
  8. Dang Do AN, Munchhof AM, Terry C, Emmett T, Kara A. Research and publication trends in hospital medicine. J Hosp Med. 2014;9(3):148-154.
  9. Reid M, Misky G, Harrison R, Sharpe B, Auerbach A, Glasheen J. Mentorship, productivity, and promotion among academic hospitalists. J Gen Intern Med. 2012;27(1):23-27.
  10. Cawley P, Deitelzweig S, Flores L, et al. The key principles and characteristics of an effective hospital medicine group: an assessment guide for hospitals and hospitalists. J Hosp Med. 2014;9(2):123-128.
Issue
Journal of Hospital Medicine - 11(10)
Page Number
708-713


We chose to define scholarship based on abstract output, rather than peer‐reviewed publications. Although this choice was necessary from a feasibility perspective, it may have excluded programs that prioritize peer‐reviewed publications over abstracts. Although we were unable to incorporate a search strategy to accurately and comprehensively track the publication output attributed specifically to hospitalist researchers and quantify it by program, others have since defined such an approach.[8] However, tracking abstracts theoretically allowed insights into a larger volume of innovative and creative work generated by top AHPs by potentially including work in the earlier stages of development.

We used a consensus‐based definition of success to define our SCHOLAR cohort. There are other ways to measure academic success, which if applied, may have yielded a different sample of programs. For example, over half of the original research articles published in the Journal of Hospital Medicine over a 7‐year span were generated from 5 academic centers.[8] This definition of success may be equally credible, though we note that 4 of these 5 programs were also included in the SCHOLAR cohort. We feel our broader approach was more reflective of the variety of pathways to success available to academic hospitalists. Before our metrics are applied as a benchmarking tool, however, they should ideally be combined with factors not measured in our study to ensure a more comprehensive or balanced reflection of academic success. Factors such as mentorship, level of hospitalist engagement,[10] prevalence of leadership opportunities, operational and fiscal infrastructure, and the impact of local quality, safety, and value efforts should be considered.

Comparison of successfully promoted faculty at AHPs across the country is inherently limited by the wide variation in promotion standards across different institutions; controlling for such differences was not possible with our methodology. For example, it appears that several programs with relatively few senior faculty may have met metrics leading to their inclusion in the SCHOLAR group because of their small program size. Future benchmarking efforts for promotion at AHPs should take scaling into account and consider both total number as well as percentage of senior faculty when evaluating success.

Our methodology has several limitations. Survey data were self‐reported and not independently validated, and as such are subject to recall and reporting biases. Response bias inherently excluded some AHPs that may have met our grant funding or promotions criteria had they participated in the initial LAHP‐50 survey, though we identified and included additional programs through our scholarship metric, increasing the representativeness of the SCHOLAR cohort. Given the dynamic nature of the field, the age of the data we relied upon for analysis limits the generalizability of our specific benchmarks to current practice. However, the development of academic success occurs over the long‐term, and published data on academic hospitalist productivity are consistent with this slower time course.[8] Despite these limitations, our data inform the general topic of gauging performance of AHPs, underscoring the challenges of developing and applying metrics of success, and highlight the variability of performance on selected metrics even among a relatively small group of 17 programs.

In conclusion, we have created a method to quantify academic success that may be useful to academic hospitalists and their group leaders as they set targets for improvement in the field. Even among our SCHOLAR cohort, room for ongoing improvement in development of funded scholarship and a core of senior faculty exists. Further investigation into the unique features of successful groups will offer insight to leaders in academic hospital medicine regarding infrastructure and processes that should be embraced to raise the bar for all AHPs. In addition, efforts to further define and validate nontraditional approaches to scholarship that allow for successful promotion at AHPs would be informative. We view our work less as a singular approach to benchmarking standards for AHPs, and more a call to action to continue efforts to balance scholarly activity and broad professional development of academic hospitalists with increasing clinical demands.

Acknowledgements

The authors thank all of the AHP leaders who participated in the SCHOLAR project. They also thank the Society of Hospital Medicine and Society of General Internal Medicine and the SHM Academic Committee and SGIM Academic Hospitalist Task Force for their support of this work.

Disclosures

The work reported here was supported by the Department of Veterans Affairs, Veterans Health Administration, South Texas Veterans Health Care System. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs. The authors report no conflicts of interest.

The structure and function of academic hospital medicine programs (AHPs) has evolved significantly with the growth of hospital medicine.[1, 2, 3, 4] Many AHPs formed in response to regulatory and financial changes, which drove demand for increased trainee oversight, improved clinical efficiency, and growth in nonteaching services staffed by hospitalists. Differences in local organizational contexts and needs have contributed to great variability in AHP program design and operations. As AHPs have become more established, the need to engage academic hospitalists in scholarship and activities that support professional development and promotion has been recognized. Defining sustainable and successful positions for academic hospitalists is a priority called for by leaders in the field.[5, 6]

In this rapidly evolving context, AHPs have employed a variety of approaches to organizing clinical and academic faculty roles, without guiding evidence or consensus‐based performance benchmarks. A number of AHPs have achieved success along traditional academic metrics of research, scholarship, and education. Currently, it is not known whether specific approaches to AHP organization, structure, or definition of faculty roles are associated with achievement of more traditional markers of academic success.

The Academic Committee of the Society of Hospital Medicine (SHM), and the Academic Hospitalist Task Force of the Society of General Internal Medicine (SGIM) had separately initiated projects to explore characteristics associated with success in AHPs. In 2012, these organizations combined efforts to jointly develop and implement the SCHOLAR (SuCcessful HOspitaLists in Academics and Research) project. The goals were to identify successful AHPs using objective criteria, and to then study those groups in greater detail to generate insights that would be broadly relevant to the field. Efforts to clarify the factors within AHPs linked to success by traditional academic metrics will benefit hospitalists, their leaders, and key stakeholders striving to achieve optimal balance between clinical and academic roles. We describe the initial work of the SCHOLAR project, our definitions of academic success in AHPs, and the characteristics of a cohort of exemplary AHPs who achieved the highest levels on these metrics.

METHODS

Defining Success

The 11 members of the SCHOLAR project held a variety of clinical and academic roles within a geographically diverse group of AHPs. We sought to create a functional definition of success applicable to AHPs. As no gold standard currently exists, we used a consensus process among task force members to arrive at a definition that was quantifiable, feasible, and meaningful. The first step was brainstorming on conference calls held 1 to 2 times monthly over 4 months. Potential defining characteristics that emerged from these discussions related to research, teaching, and administrative activities. When potential characteristics were proposed, we considered how to operationalize each one. Each characteristic was discussed until there was consensus from the entire group. Those around education and administration were the most complex, as many roles are locally driven and defined, and challenging to quantify. For this reason, we focused on promotion as a more global approach to assessing academic hospitalist success in these areas. Although criteria for academic advancement also vary across institutions, we felt that promotion generally reflected having met some threshold of academic success. We also wanted to recognize that scholarship occurs outside the context of funded research. Ultimately, 3 key domains emerged: research grant funding, faculty promotion, and scholarship.

After these 3 domains were identified, the group sought to define quantitative metrics to assess performance. These discussions occurred on subsequent calls over a 4‐month period. Between calls, group members gathered additional information to facilitate assessment of the feasibility of proposed metrics, reporting on progress via email. Again, group consensus was sought for each metric considered. Data on grant funding and successful promotions were available from a previous survey conducted through the SHM in 2011. Leaders from 170 AHPs were contacted, with 50 providing complete responses to the 21‐item questionnaire (see Supporting Information, Appendix 1, in the online version of this article). Results of the survey, heretofore referred to as the Leaders of Academic Hospitalist Programs survey (LAHP‐50), have been described elsewhere.[7] For the purposes of this study, we used the self‐reported data about grant funding and promotions contained in the survey to reflect the current state of the field. Although the survey response rate was approximately 30%, the survey was not anonymous, and many reputationally prominent academic hospitalist programs were represented. For these reasons, the group members felt that the survey results were relevant for the purposes of assessing academic success.

In the LAHP‐50, funding was defined as principal investigator or coinvestigator roles on federally and nonfederally funded research, clinical trials, internal grants, and any other extramurally funded projects. Mean and median funding for the overall sample was calculated. Through a separate question, each program's total faculty full‐time equivalent (FTE) count was reported, allowing us to adjust for group size by assessing both total funding per group and funding/FTE for each responding AHP.

Promotions were defined by the self‐reported number of faculty at each of the following ranks: instructor, assistant professor, associate professor, full professor, and professor above scale/emeritus. In addition, a category of nonacademic track (eg, adjunct faculty, clinical associate) was included to capture hospitalists who did not fit into the traditional promotions categories. We did not distinguish between tenure‐track and nontenure‐track academic ranks. LAHP‐50 survey respondents reported the number of faculty in their group at each academic rank. Given that the majority of academic hospitalists hold a rank of assistant professor or lower,[6, 8, 9] and that full professors made up only 3% of the LAHP‐50 cohort, we combined the faculty at the associate and full professor ranks, defining successfully promoted faculty as the percent of hospitalists above the rank of assistant professor.
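Both survey-derived metrics reduce to simple arithmetic on each program's reported numbers. As a purely illustrative sketch (the study does not describe analysis code, and the program records and field names below are hypothetical), the per-program calculations could look like this in Python:

```python
# Hypothetical program records mirroring the LAHP-50 fields described above;
# names and values are invented for illustration only.
programs = [
    {"name": "AHP A", "grant_dollars": 3_000_000, "faculty_fte": 30.0,
     "associate_profs": 6, "full_profs": 2, "total_faculty": 30},
    {"name": "AHP B", "grant_dollars": 250_000, "faculty_fte": 18.0,
     "associate_profs": 1, "full_profs": 0, "total_faculty": 18},
]

for p in programs:
    # Funding adjusted for group size: total grant and contract dollars per FTE.
    funding_per_fte = p["grant_dollars"] / p["faculty_fte"]
    # Successfully promoted faculty: percent above the rank of assistant
    # professor (associate plus full professors, combined across tracks).
    pct_senior = 100 * (p["associate_profs"] + p["full_profs"]) / p["total_faculty"]
    print(f"{p['name']}: ${funding_per_fte:,.0f}/FTE, {pct_senior:.0f}% senior faculty")
```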

We created a new metric to assess scholarly output. We had considerable discussion of ways to assess the numbers of peer‐reviewed manuscripts generated by AHPs. However, the group had concerns about the feasibility of identification and attribution of authors to specific AHPs through literature searches. We considered examining only publications in the Journal of Hospital Medicine and the Journal of General Internal Medicine, but felt that this would exclude significant work published by hospitalists in fields of medical education or health services research that would more likely appear in alternate journals. Instead, we quantified scholarship based on the number of abstracts presented at national meetings. We focused on meetings of the SHM and SGIM as the primary professional societies representing hospital medicine. The group felt that even work published outside of the journals of our professional societies would likely be presented at those meetings. We used the following strategy: We reviewed research abstracts accepted for presentation as posters or oral abstracts at the 2010 and 2011 SHM national meetings, and research abstracts with a primary or secondary category of hospital medicine at the 2010 and 2011 SGIM national meetings. By including submissions at both SGIM and SHM meetings, we accounted for the fact that some programs may gravitate more to one society meeting or another. We did not include abstracts in the clinical vignettes or innovations categories. We tallied the number of abstracts by group affiliation of the authors for each of the 4 meetings above and created a cumulative total per group for the 2‐year period. Abstracts with authors from different AHPs were counted once for each individual group. Members of the study group reviewed abstracts from each of the meetings in pairs. Reviewers worked separately and compared tallies of results to ensure consistent tabulations. Internet searches were conducted to identify or confirm author affiliations if it was not apparent in the abstract author list. Abstract tallies were compiled without regard to whether programs had completed the LAHP‐50 survey; thus, we collected data on programs that did not respond to the LAHP‐50 survey.
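In computational terms, the abstract metric is a tally keyed on program affiliation, with abstracts authored across several AHPs credited once to each group. The sketch below only illustrates that bookkeeping; the actual review was done manually in pairs, and the abstract entries here are invented:

```python
from collections import Counter

# Hypothetical abstracts pooled across the 2010 and 2011 SHM and SGIM meetings;
# each entry lists the AHP affiliations of its authors.
abstracts = [
    {"title": "Example QI abstract", "programs": {"AHP A", "AHP B"}},
    {"title": "Example education abstract", "programs": {"AHP A"}},
    {"title": "Example outcomes abstract", "programs": {"AHP C"}},
]

tally = Counter()
for abstract in abstracts:
    # An abstract with authors from different AHPs counts once for each group.
    for program in abstract["programs"]:
        tally[program] += 1

print(tally.most_common())  # e.g., [('AHP A', 2), ('AHP B', 1), ('AHP C', 1)]
```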

Identification of the SCHOLAR Cohort

To identify our cohort of top‐performing AHPs, we combined the funding and promotions data from the LAHP‐50 sample with the abstract data. We limited our sample to adult hospital medicine groups to reduce heterogeneity. We created rank lists of programs in each category (grant funding, successful promotions, and scholarship), using data from the LAHP‐50 survey to rank programs on funding and promotions, and data from our abstract counts to rank on scholarship. We limited the top‐performing list in each category to 10 institutions as a cutoff. Because we set a threshold of at least $1 million in total funding, we identified only 9 top performing AHPs with regard to grant funding. We also calculated mean funding/FTE. We chose to rank programs only by funding/FTE rather than total funding per program to better account for group size. For successful promotions, we ranked programs by the percentage of senior faculty. For abstract counts, we included programs whose faculty presented abstracts at a minimum of 2 separate meetings, and ranked programs based on the total number of abstracts per group.
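Conceptually, this selection step amounts to ranking programs on each domain, truncating each ranked list at 10, and pooling the lists. The following sketch shows that logic with invented values; it omits the additional screens described above (the $1 million total funding threshold and the 2-meeting minimum for abstracts), which the actual selection applied:

```python
def top_n(metric_by_program: dict, n: int = 10) -> set:
    """Return the programs with the n highest values for one metric."""
    ranked = sorted(metric_by_program, key=metric_by_program.get, reverse=True)
    return set(ranked[:n])

# Invented example values; the real inputs were funding/FTE and percent senior
# faculty from the LAHP-50 survey plus the abstract tally described earlier.
funding_per_fte = {"AHP A": 1_409_090, "AHP B": 38_461, "AHP C": 0}
percent_senior = {"AHP A": 33, "AHP B": 60, "AHP C": 10}
abstract_count = {"AHP A": 23, "AHP B": 2, "AHP C": 11}

# The cohort is the union of the three top-10 lists; programs appearing on
# more than one list are counted once.
scholar_cohort = top_n(funding_per_fte) | top_n(percent_senior) | top_n(abstract_count)
print(sorted(scholar_cohort))
```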

This process resulted in separate lists of top performing programs in each of the 3 domains we associated with academic success, arranged in descending order by grant dollars/FTE, percent of senior faculty, and abstract counts (Table 1). Seventeen different programs were represented across these 3 top 10 lists. One program appeared on all 3 lists, 8 programs appeared on 2 lists, and the remainder appeared on a single list (Table 2). Seven of these programs were identified solely based on abstract presentations, diversifying our top groups beyond only those who completed the LAHP‐50 survey. We considered all of these programs to represent high performance in academic hospital medicine. The group selected this inclusive approach because we recognized that any 1 metric was potentially limited, and we sought to identify diverse pathways to success.

Performance Among the Top Programs on Each of the Domains of Academic Success
Funding: Grant $/FTE | Funding: Total Grant $ | Promotions: Senior Faculty, No. (%) | Scholarship: Total Abstract Count
  • NOTE: Funding is defined as mean grant dollars per FTE and total grant dollars per program; only programs with at least $1 million in total funding were included. Senior faculty are defined as all faculty above the rank of assistant professor. Abstract counts are the total number of research abstracts by members affiliated with the individual academic hospital medicine program accepted at the Society of Hospital Medicine and Society of General Internal Medicine national meetings in 2010 and 2011. Each column represents a separate ranked list; values across rows are independent and do not necessarily represent the same programs horizontally. Abbreviations: FTE = full‐time equivalent.

$1,409,090 | $15,500,000 | 3 (60%) | 23
$1,000,000 | $9,000,000 | 3 (60%) | 21
$750,000 | $8,000,000 | 4 (57%) | 20
$478,609 | $6,700,535 | 9 (53%) | 15
$347,826 | $3,000,000 | 8 (44%) | 11
$86,956 | $3,000,000 | 14 (41%) | 11
$66,666 | $2,000,000 | 17 (36%) | 10
$46,153 | $1,500,000 | 9 (33%) | 10
$38,461 | $1,000,000 | 2 (33%) | 9
 | | 4 (31%) | 9
Qualifying Characteristics for Programs Represented in the SCHOLAR Cohort
Selection Criteria for SCHOLAR Cohort | No. of Programs
  • NOTE: Programs were selected by appearing on 1 or more rank lists of top performing academic hospital medicine programs with regard to the number of abstracts presented at 4 different national meetings, the percent of senior faculty, or the amount of grant funding. Further details appear in the text. Abbreviations: SCHOLAR, SuCcessful HOspitaLists in Academics and Research.

Abstracts, funding, and promotions | 1
Abstracts plus promotions | 4
Abstracts plus funding | 3
Funding plus promotion | 1
Funding only | 1
Abstracts only | 7
Total | 17
Top 10 abstract count (by number of meetings with accepted abstracts)
4 meetings | 2
3 meetings | 2
2 meetings | 6

The 17 unique adult AHPs appearing on at least 1 of the top 10 lists comprised the SCHOLAR cohort of programs that we studied in greater detail. Data reflecting program demographics were solicited directly from leaders of the AHPs identified in the SCHOLAR cohort, including size and age of program, reporting structure, number of faculty at various academic ranks (for programs that did not complete the LAHP‐50 survey), and number of faculty with fellowship training (defined as any postresidency fellowship program).

Subsequently, we performed comparative analyses between the programs in the SCHOLAR cohort and the general population of AHPs reflected by the LAHP‐50 sample. Because abstract presentations were not recorded in the original LAHP‐50 survey instrument, it was not possible to perform a benchmarking comparison for the scholarship domain.

Data Analysis

To measure the success of the SCHOLAR cohort, we compared the grant funding and the proportion of successfully promoted faculty at the SCHOLAR programs to those in the overall LAHP‐50 sample. Differences in mean and median grant funding were compared using t tests and Mann‐Whitney rank sum tests, respectively. Proportions of promoted faculty were compared using chi‐square tests. A 2‐tailed alpha of 0.05 was used to test the significance of differences.
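These comparisons map onto standard statistical routines. The article does not state which software was used for this analysis, so the SciPy calls below are only an illustrative sketch with made-up numbers, not the authors' code:

```python
from scipy import stats

# Invented per-program grant funding (in millions of dollars), for illustration.
scholar_funding = [15.5, 9.0, 8.0, 6.7, 3.0, 3.0, 2.0, 1.5, 1.0, 0.5, 0.0]
lahp50_funding = [4.0, 2.5, 1.0, 0.5, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0]

# Means compared with a two-sample t test; medians with a Mann-Whitney rank sum test.
t_stat, p_means = stats.ttest_ind(scholar_funding, lahp50_funding)
u_stat, p_medians = stats.mannwhitneyu(scholar_funding, lahp50_funding,
                                       alternative="two-sided")

# Proportions of promoted (senior vs. junior) faculty compared with a chi-square
# test on a 2x2 table of counts; the counts here are also invented.
chi2, p_promotion, dof, expected = stats.chi2_contingency([[98, 450], [120, 820]])

alpha = 0.05  # 2-tailed significance threshold
print(p_means < alpha, p_medians < alpha, p_promotion < alpha)
```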

RESULTS

Demographics

Among the AHPs in the SCHOLAR cohort, the mean program age was 13.2 years (range, 6–18 years), and the mean program size was 36 faculty (range, 18–95; median, 28). On average, 15% of faculty members at SCHOLAR programs were fellowship trained (range, 0%–37%). Reporting structure among the SCHOLAR programs was as follows: 53% were an independent division or section of the department of medicine; 29% were a section within general internal medicine; and 18% were an independent clinical group.

Grant Funding

Table 3 compares grant funding in the SCHOLAR programs to programs in the overall LAHP‐50 sample. Mean funding per group and mean funding per FTE were significantly higher in the SCHOLAR group than in the overall sample.

Funding From Grants and Contracts Among Academic Hospitalist Programs in the Overall LAHP‐50 Sample and the SCHOLAR Cohort
Funding (Millions) | LAHP‐50 Overall Sample | SCHOLAR
  • NOTE: Abbreviations: AHP = academic hospital medicine program; FTE = full‐time equivalent; LAHP‐50, Leaders of Academic Hospitalist Programs (defined further in the text); SCHOLAR, SuCcessful HOspitaLists in Academics and Research. *P < 0.01.

Median grant funding/AHP | 0.060 | 1.500*
Mean grant funding/AHP (range) | 1.147 (0–15) | 3.984* (0–15)
Median grant funding/FTE | 0.004 | 0.038*
Mean grant funding/FTE (range) | 0.095 (0–1.4) | 0.364* (0–1.4)

Thirteen of the SCHOLAR programs were represented in the initial LAHP‐50, but 2 did not report a dollar amount for grants and contracts. Therefore, data for total grant funding were available for only 65% (11 of 17) of the programs in the SCHOLAR cohort. Of note, 28% of AHPs in the overall LAHP‐50 sample reported no external funding sources.

Faculty Promotion

Figure 1 demonstrates the proportion of faculty at various academic ranks. The percent of faculty above the rank of assistant professor in the SCHOLAR programs exceeded that in the overall LAHP‐50 sample by approximately 5 percentage points (17.9% vs 12.8%, P = 0.01). Of note, 6% of the hospitalists at AHPs in the SCHOLAR programs were on nonfaculty tracks.

Figure 1
Distribution of faculty academic ranking at academic hospitalist programs in the LAHP‐50 and SCHOLAR cohorts. The percent of senior faculty (defined as associate and full professor) in the SCHOLAR cohort was significantly higher than the LAHP‐50 (P = 0.01). Abbreviations: LAHP‐50, Leaders of Academic Hospitalist Programs; SCHOLAR, SuCcessful HOspitaLists in Academics and Research.

Scholarship

Mean abstract output over the 2‐year period measured was 10.8 (range, 3–23) in the SCHOLAR cohort. Because we did not collect these data for the LAHP‐50 group, comparative analyses were not possible.

DISCUSSION

Using a definition of academic success that incorporated metrics of grant funding, faculty promotion, and scholarly output, we identified a unique subset of successful AHPs: the SCHOLAR cohort. The programs represented in the SCHOLAR cohort were generally large and relatively mature. Despite this, the cohort consisted mostly of junior faculty, had a paucity of fellowship‐trained hospitalists, and did not uniformly report grant funding.

Prior published work reported complementary findings.[6, 8, 9] A survey of 20 large, well‐established academic hospitalist programs in 2008 found that the majority of hospitalists were junior faculty with a limited publication portfolio. Of the 266 respondents in that study, 86% reported an academic rank at or below assistant professor; funding was not explored.[9] Our similar findings 4 years later add to this work by demonstrating trends over time, and suggest that progress toward creating successful pathways for academic advancement has been slow. In a 2012 survey of the SHM membership, 28% of hospitalists with academic appointments reported no current or future plans to engage in research.[8] These findings suggest that faculty in AHPs may define scholarship through nontraditional pathways, or in some cases choose not to pursue or prioritize scholarship altogether.

Our findings also add to the literature with regard to our assessment of funding, which was variable across the SCHOLAR group. The broad range of funding in the SCHOLAR programs for which we have data ($0 to $15 million in grant dollars per program) suggests that opportunities to improve supported scholarship remain, even among a selected cohort of successful AHPs. The predominance of junior faculty in the SCHOLAR programs may be one reason for this variation. Junior faculty may be engaged in research with funding directed to senior mentors outside their AHP. Alternatively, they may pursue meaningful local hospital quality improvement or educational innovations not supported by external grants, or hold leadership roles in education, quality, or information technology that allow for advancement and promotion without external grant funding. As the scope and impact of these roles increase, senior leaders with alternate sources of support may rely less on research funds; this too may explain some of the differences. Our findings are congruent with results of a study that reviewed original research published by hospitalists and concluded that the majority of hospitalist research was not externally funded.[8] Our approach of assessing grant funding adjusted for FTE had the potential to inadvertently favor smaller, well‐funded groups over larger ones; however, programs in our sample were similarly represented when ranked by funding/FTE or by total grant dollars. Because many successful AHPs concentrate their research funding among a core of focused hospitalist researchers, our definition may not be the ideal metric for some programs.

We chose to define scholarship based on abstract output, rather than peer‐reviewed publications. Although this choice was necessary from a feasibility perspective, it may have excluded programs that prioritize peer‐reviewed publications over abstracts. Although we were unable to incorporate a search strategy to accurately and comprehensively track the publication output attributed specifically to hospitalist researchers and quantify it by program, others have since defined such an approach.[8] However, tracking abstracts theoretically allowed insights into a larger volume of innovative and creative work generated by top AHPs by potentially including work in the earlier stages of development.

We used a consensus‐based definition of success to define our SCHOLAR cohort. There are other ways to measure academic success, which if applied, may have yielded a different sample of programs. For example, over half of the original research articles published in the Journal of Hospital Medicine over a 7‐year span were generated from 5 academic centers.[8] This definition of success may be equally credible, though we note that 4 of these 5 programs were also included in the SCHOLAR cohort. We feel our broader approach was more reflective of the variety of pathways to success available to academic hospitalists. Before our metrics are applied as a benchmarking tool, however, they should ideally be combined with factors not measured in our study to ensure a more comprehensive or balanced reflection of academic success. Factors such as mentorship, level of hospitalist engagement,[10] prevalence of leadership opportunities, operational and fiscal infrastructure, and the impact of local quality, safety, and value efforts should be considered.

Comparison of successfully promoted faculty at AHPs across the country is inherently limited by the wide variation in promotion standards across different institutions; controlling for such differences was not possible with our methodology. For example, it appears that several programs with relatively few senior faculty may have met metrics leading to their inclusion in the SCHOLAR group because of their small program size. Future benchmarking efforts for promotion at AHPs should take scaling into account and consider both total number as well as percentage of senior faculty when evaluating success.

Our methodology has several limitations. Survey data were self‐reported and not independently validated, and as such are subject to recall and reporting biases. Response bias inherently excluded some AHPs that may have met our grant funding or promotions criteria had they participated in the initial LAHP‐50 survey, though we identified and included additional programs through our scholarship metric, increasing the representativeness of the SCHOLAR cohort. Given the dynamic nature of the field, the age of the data we relied upon for analysis limits the generalizability of our specific benchmarks to current practice. However, the development of academic success occurs over the long‐term, and published data on academic hospitalist productivity are consistent with this slower time course.[8] Despite these limitations, our data inform the general topic of gauging performance of AHPs, underscoring the challenges of developing and applying metrics of success, and highlight the variability of performance on selected metrics even among a relatively small group of 17 programs.

In conclusion, we have created a method to quantify academic success that may be useful to academic hospitalists and their group leaders as they set targets for improvement in the field. Even among our SCHOLAR cohort, room for ongoing improvement in development of funded scholarship and a core of senior faculty exists. Further investigation into the unique features of successful groups will offer insight to leaders in academic hospital medicine regarding infrastructure and processes that should be embraced to raise the bar for all AHPs. In addition, efforts to further define and validate nontraditional approaches to scholarship that allow for successful promotion at AHPs would be informative. We view our work less as a singular approach to benchmarking standards for AHPs, and more a call to action to continue efforts to balance scholarly activity and broad professional development of academic hospitalists with increasing clinical demands.

Acknowledgements

The authors thank all of the AHP leaders who participated in the SCHOLAR project. They also thank the Society of Hospital Medicine and Society of General Internal Medicine and the SHM Academic Committee and SGIM Academic Hospitalist Task Force for their support of this work.

Disclosures

The work reported here was supported by the Department of Veterans Affairs, Veterans Health Administration, South Texas Veterans Health Care System. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs. The authors report no conflicts of interest.

References
  1. Boonyasai RT, Lin Y‐L, Brotman DJ, Kuo Y‐F, Goodwin JS. Characteristics of primary care providers who adopted the hospitalist model from 2001 to 2009. J Hosp Med. 2015;10(2):75–82.
  2. Kuo Y‐F, Sharma G, Freeman JL, Goodwin JS. Growth in the care of older patients by hospitalists in the United States. N Engl J Med. 2009;360(11):1102–1112.
  3. Lapps J, Flansbaum B, Leykum L, Boswell J, Haines L. Updating threshold‐based identification of hospitalists in 2012 Medicare pay data. J Hosp Med. 2016;11(1):45–47.
  4. Pete Welch W, Stearns SC, Cuellar AE, Bindman AB. Use of hospitalists by Medicare beneficiaries: a national picture. Medicare Medicaid Res Rev. 2014;4(2).
  5. Flanders SA, Centor B, Weber V, McGinn T, DeSalvo K, Auerbach A. Challenges and opportunities in Academic Hospital Medicine: report from the Academic Hospital Medicine Summit. J Hosp Med. 2009;4(4):240–246.
  6. Harrison R, Hunter AJ, Sharpe B, Auerbach AD. Survey of US academic hospitalist leaders about mentorship and academic activities in hospitalist groups. J Hosp Med. 2011;6(1):5–9.
  7. Seymann G, Brotman D, Lee B, Jaffer A, Amin A, Glasheen J. The structure of hospital medicine programs at academic medical centers [abstract]. J Hosp Med. 2012;7(suppl 2):s92.
  8. Dang Do AN, Munchhof AM, Terry C, Emmett T, Kara A. Research and publication trends in hospital medicine. J Hosp Med. 2014;9(3):148–154.
  9. Reid M, Misky G, Harrison R, Sharpe B, Auerbach A, Glasheen J. Mentorship, productivity, and promotion among academic hospitalists. J Gen Intern Med. 2012;27(1):23–27.
  10. Cawley P, Deitelzweig S, Flores L, et al. The key principles and characteristics of an effective hospital medicine group: an assessment guide for hospitals and hospitalists. J Hosp Med. 2014;9(2):123–128.
Issue
Journal of Hospital Medicine - 11(10)
Page Number
708-713
Display Headline
Features of successful academic hospitalist programs: Insights from the SCHOLAR (SuCcessful HOspitaLists in academics and research) project
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Gregory B. Seymann, MD, University of California, San Diego, 200 W Arbor Drive, San Diego, CA 92103‐8485; Telephone: 619‐471‐9186; Fax: 619‐543‐8255; E‐mail: gseymann@ucsd.edu

Structured Peer Observation of Teaching

Article Type
Changed
Sun, 05/21/2017 - 14:26
Display Headline
Faculty development for hospitalists: Structured peer observation of teaching

Hospitalists are increasingly responsible for educating students and housestaff in internal medicine.[1] Because the quality of teaching is an important factor in learning,[2, 3, 4] leaders in medical education have expressed concern over the rapid shift of teaching responsibilities to this new group of educators.[5, 6, 7, 8] Moreover, recent changes in duty hour restrictions have strained both student and resident education,[9, 10] necessitating the optimization of inpatient teaching.[11, 12] Many hospitalists have recently finished residency and have not had formal training in clinical teaching. Collectively, most hospital medicine groups are early in their careers, have significant clinical obligations,[13] and may not have the bandwidth or expertise to provide faculty development for improving clinical teaching.

Rationally designed and theoretically sound faculty development to improve inpatient clinical teaching is required to meet this challenge. There are a limited number of reports describing faculty development focused on strengthening the teaching of hospitalists, and only 3 utilized direct observation and feedback, 1 of which involved peer observation in the clinical setting.[14, 15, 16] This 2011 report described a narrative method of peer observation and feedback but did not assess for efficacy of the program.[16] To our knowledge, there have been no studies of structured peer observation and feedback to optimize hospitalist attendings' teaching which have evaluated the efficacy of the intervention.

We developed a faculty development program built on peer observation of actual teaching practices, with structured feedback anchored in validated, observable measures of effective teaching. We hypothesized that participation in the program would increase confidence in key teaching skills, increase confidence in the ability to give and receive peer feedback, and strengthen attitudes toward peer observation and feedback.

METHODS

Subjects and Setting

The study was conducted at a 570‐bed academic, tertiary care medical center affiliated with an internal medicine residency program of 180 housestaff. Internal medicine ward attendings rotate during 2‐week blocks, and are asked to give formal teaching rounds 3 or 4 times a week (these sessions are distinct from teaching which may happen while rounding on patients). Ward teams are composed of 1 senior resident, 2 interns, and 1 to 2 medical students. The majority of internal medicine ward attendings are hospitalist faculty, hospital medicine fellows, or medicine chief residents. Because outpatient general internists and subspecialists only occasionally attend on the wards, we refer to ward attendings as attending hospitalists in this article. All attending hospitalists were eligible to participate if they attended on the wards at least twice during the academic year. The institutional review board at the University of California, San Francisco approved this study.

Theoretical Framework

We reviewed the literature to optimize our program in 3 conceptual domains: (1) the overall structure of the program, (2) the definition of effective teaching, and (3) the effective delivery of feedback.

Over‐reliance on didactics that are disconnected from the work environment is a weakness of traditional faculty development. Individuals may attempt to apply what they have learned, but receiving feedback on their actual workplace practices may be difficult. A recent perspective responds to this fragmentation by conceptualizing faculty development as embedded in both a faculty development community and a workplace community. This model emphasizes translating what faculty have learned in the classroom into practice, and highlights the importance of coaching in the workplace.[17] In accordance with this framework, we designed our program to reach beyond isolated workshops to effectively penetrate the workplace community.

We selected the Stanford Faculty Development Program (SFDP) framework for optimal clinical teaching as our model for recognizing and improving teaching skills. The SFDP was developed as a theory‐based intensive feedback method to improve teaching skills,[18, 19] and has been shown to improve teaching in the ambulatory[20] and inpatient settings.[21, 22] In this widely disseminated framework,[23, 24] excellent clinical teaching is grounded in optimizing observable behaviors organized around 7 domains.[18] A 26‐item instrument to evaluate clinical teaching (SFDP‐26) has been developed based on this framework[25] and has been validated in multiple settings.[26, 27] High‐quality teaching, as defined by the SFDP framework, has been correlated with improved educational outcomes in internal medicine clerkship students.[4]

Feedback is crucial to optimizing teaching,[28, 29, 30] particularly when it incorporates consultation[31] and narrative comments.[32] Peer feedback has several advantages over feedback from learners or from other non‐peer observers (such as supervisors or other evaluators). First, the observers benefit by gaining insight into their own weaknesses and potential areas for growth as teachers.[33, 34] Additionally, collegial observation and feedback may promote supportive teaching relationships between faculty.[35] Furthermore, peer review overcomes the biases that may be present in learner evaluations.[36] We established a 3‐stage feedback technique based on a previously described method.[37] In the first step, the observer elicits self‐appraisal from the speaker. Next, the observer provides specific, behaviorally anchored feedback in the form of 3 reinforcing comments and 2 constructive comments. Finally, the observer elicits a reflection on the feedback and helps develop a plan to improve teaching in future opportunities. We used a dyad model (paired participants repeatedly observe and give feedback to each other) to support mutual benefit and reciprocity between attendings.

Intervention

Using a modified Delphi approach, 5 medical education experts selected the 10 items that are most easily observable and salient to formal attending teaching rounds from the SFDP‐26 teaching assessment tool. A structured observation form was created, which included a checklist of the 10 selected items, space for note taking, and a template for narrative feedback (Figure 1).

Figure 1
Structured observation form, side 1. See “Intervention” for discussion.
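For readers who want a concrete picture of what the form captures, the sketch below models it as a simple data structure. The 10 behavior labels are paraphrased from the SFDP-derived items listed in Table 2 of this article, and the targets of 3 reinforcing and 2 constructive comments come from the feedback template described above; the field names and the completeness check are illustrative assumptions, not a specification of the actual paper form.

```python
from dataclasses import dataclass, field

# The 10 teaching behaviors selected from the SFDP-26 (paraphrased from the
# items listed in Table 2 of this article).
SELECTED_BEHAVIORS = [
    "Listens to learners",
    "Encourages learners to participate actively in the discussion",
    "Calls attention to time",
    "States goals clearly and concisely",
    "States relevance of goals to learners",
    "Presents well-organized material",
    "Uses blackboard or other visual aids",
    "Evaluates learners' ability to apply knowledge to specific patients",
    "Explains to learners why they were correct or incorrect",
    "Motivates learners to learn on their own",
]


@dataclass
class ObservationForm:
    """Hypothetical in-memory analogue of the structured observation form."""
    observer: str
    presenter: str
    # Checklist of the 10 selected behaviors, marked as observed or not.
    behaviors_observed: dict = field(
        default_factory=lambda: {b: False for b in SELECTED_BEHAVIORS})
    notes: str = ""
    reinforcing_comments: list = field(default_factory=list)   # template target: 3
    constructive_comments: list = field(default_factory=list)  # template target: 2

    def feedback_complete(self) -> bool:
        # Mirrors the feedback template: 3 reinforcing and 2 constructive comments.
        return (len(self.reinforcing_comments) >= 3
                and len(self.constructive_comments) >= 2)
```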

We introduced the SFDP framework during a 2‐hour initial training session. Participants watched videos of teaching, learned to identify the 10 selected teaching behaviors, developed appropriate constructive and reinforcing comments, and practiced giving and receiving peer feedback.

Dyads were created on the basis of predetermined attending schedules. Participants were asked to observe and be observed twice during attending teaching rounds over the course of the academic year. Attending teaching rounds were defined as any preplanned didactic activity for ward teams. The structured observation forms were returned to the study coordinators after the observer had given feedback to the presenter. A copy of the feedback without the observer's notes was also given to each speaker. At the midpoint of the academic year, a refresher session was offered to reinforce those teaching behaviors that were the least frequently performed to date. All participants received a $50.00 Amazon.com gift card, and additional gift card incentives were offered to the dyads that first completed both observations.

Measurements and Data Collection

Participants were given a pre‐ and post‐program survey. The surveys included questions assessing confidence in ability to give feedback, receive feedback without feeling defensive, and teach effectively, as well as attitudes toward peer observation. The postprogram survey was administered at the end of the year and additionally assessed the self‐rated performance of the 10 selected teaching behaviors. A retrospective pre‐ and post‐program assessment was used for this outcome, because this method can be more reliable when participants initially may not have sufficient insight to accurately assess their own competence in specific measures.[21] The post‐program survey also included 4 questions assessing satisfaction with aspects of the program. All questions were structured as statements to which the respondent indicated degree of agreement using a 5‐point Likert scale, where 1=strongly disagree and 5=strongly agree. Structured observation forms used by participants were collected throughout the year to assess frequency of performance of the 10 selected teaching behaviors.

Statistical Analysis

We only analyzed the pre‐ and post‐program surveys that could be matched using anonymous identifiers provided by participants. For both prospective and retrospective measures, mean values and standard deviations were calculated. Wilcoxon signed rank tests for nonparametric data were performed to obtain P values. For all comparisons, a P value of <0.05 was considered significant. All comparisons were performed using Stata version 10 (StataCorp, College Station, TX).
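The authors report using Stata for these tests; as a language-neutral illustration of the same paired, nonparametric comparison, a single survey item could be analyzed with SciPy's Wilcoxon signed-rank test. The responses below are invented and serve only to show the shape of the calculation.

```python
from scipy import stats

# Invented paired Likert responses (1-5) for one statement, matched by
# anonymous identifier across the pre- and post-program surveys (n = 15).
pre = [3, 3, 4, 2, 3, 4, 3, 3, 4, 3, 2, 4, 3, 3, 4]
post = [4, 5, 5, 3, 4, 5, 4, 4, 5, 4, 4, 5, 4, 5, 5]

# Wilcoxon signed-rank test for paired, nonparametric data; a 2-tailed
# P value < 0.05 is treated as significant.
statistic, p_value = stats.wilcoxon(pre, post)
print(f"W = {statistic}, P = {p_value:.3f}")
```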

RESULTS

Participant Characteristics and Participation in Program

Of the 37 eligible attending hospitalists, 22 (59%) enrolled. Fourteen were hospital medicine faculty, 6 were hospital medicine fellows, and 2 were internal medicine chief residents. The average ± standard deviation (SD) number of years as a ward attending was 2.2 ± 2.1. Seventeen (77%) reported previously having been observed and given feedback by a colleague, and 9 (41%) reported previously observing a colleague for the purpose of giving feedback.

All 22 participants attended 1 of two 2‐hour training sessions. Ten participants attended an hour‐long midyear refresher session. A total of 19 observation and feedback sessions took place; 15 of them occurred in the first half of the academic year. Fifteen attending hospitalists participated in at least 1 observed teaching session. Of the 11 dyads, 6 completed at least 1 observation of each other. Two dyads performed 2 observations of each other.

Fifteen participants (68% of those enrolled) completed both the pre‐ and post‐program surveys. Among these respondents, the average number of years attending was 2.9 ± 2.2 years. Eight (53%) reported previously having been observed and given feedback by a colleague, and 7 (47%) reported previously observing a colleague for the purpose of giving feedback. For this subset of participants, the average ± SD frequency of being observed during the program was 1.3 ± 0.7, and of observing was 1.1 ± 0.8.

Confidence in Ability to Give Feedback, Receive Feedback, and Teach Effectively

In comparison of pre‐ and post‐intervention measures, participants indicated increased confidence in their ability to evaluate their colleagues and provide feedback in all domains queried. Participants also indicated increased confidence in the efficacy of their feedback to improve their colleagues' teaching skills. Participating in the program did not significantly change pre‐intervention levels of confidence in ability to receive feedback without being defensive or confidence in ability to use feedback to improve teaching skills (Table 1).

Confidence in Ability to Give Feedback, Receive Feedback, and Teach Effectively Pre‐ and Post‐intervention.
Statement | Mean Pre (SD) | Mean Post (SD) | P
  • NOTE: 1 = strongly disagree, 3 = neutral, 5 = strongly agree. N = 15 except where noted. Abbreviations: Post, post‐intervention; Pre, pre‐intervention; SD, standard deviation.
  • a: N = 14.

I can accurately assess my colleagues' teaching skills. | 3.20 (0.86) | 4.07 (0.59) | 0.004
I can give accurate feedback to my colleagues regarding their teaching skills. | 3.40 (0.63) | 4.20 (0.56) | 0.002
I can give feedback in a way that my colleague will not feel defensive about their teaching skills. | 3.60 (0.63) | 4.20 (0.56) | 0.046
My feedback will improve my colleagues' teaching skills. | 3.40 (0.51) | 3.93 (0.59) | 0.011
I can receive feedback from a colleague without being defensive about my teaching skills. | 3.87 (0.92) | 4.27 (0.59) | 0.156
I can use feedback from a colleague to improve my teaching skills. | 4.33 (0.82) | 4.47 (0.64) | 0.607
I am confident in my ability to teach students and residents during attending rounds.(a) | 3.21 (0.89) | 3.71 (0.83) | 0.026
I am confident in my knowledge of components of effective teaching.(a) | 3.21 (0.89) | 3.71 (0.99) | 0.035
Learners regard me as an effective teacher.(a) | 3.14 (0.66) | 3.64 (0.74) | 0.033

Self‐Rated Performance of 10 Selected Teaching Behaviors

In retrospective assessment, participants felt that their performance had improved in all 10 teaching behaviors after the intervention. This perceived improvement reached statistical significance in 8 of the 10 selected behaviors (Table 2).

Retrospective Self‐Appraisal of Competence in Selected Teaching Behaviors Pre‐ and Post‐intervention.
SFDP Framework Category (from Skeff et al.[18]) | When I Give Attending Rounds, I Generally... | Mean Pre (SD) | Mean Post (SD) | P
  • NOTE: 1 = strongly disagree and 5 = strongly agree. N = 15. Abbreviations: Post, post‐intervention; Pre, pre‐intervention; SD, standard deviation; SFDP, Stanford Faculty Development Program.

1. Establishing a positive learning climate | Listen to learners | 4.27 (0.59) | 4.53 (0.52) | 0.046
1. Establishing a positive learning climate | Encourage learners to participate actively in the discussion | 4.07 (0.70) | 4.60 (0.51) | 0.009
2. Controlling the teaching session | Call attention to time | 3.33 (0.98) | 4.27 (0.59) | 0.004
3. Communicating goals | State goals clearly and concisely | 3.40 (0.63) | 4.27 (0.59) | 0.001
3. Communicating goals | State relevance of goals to learners | 3.40 (0.74) | 4.20 (0.68) | 0.002
4. Promoting understanding and retention | Present well‐organized material | 3.87 (0.64) | 4.07 (0.70) | 0.083
4. Promoting understanding and retention | Use blackboard or other visual aids | 4.27 (0.88) | 4.47 (0.74) | 0.158
5. Evaluating the learners | Evaluate learners' ability to apply medical knowledge to specific patients | 3.33 (0.98) | 4.00 (0.76) | 0.005
6. Providing feedback to the learners | Explain to learners why he/she was correct or incorrect | 3.47 (1.13) | 4.13 (0.64) | 0.009
7. Promoting self‐directed learning | Motivate learners to learn on their own | 3.20 (0.86) | 3.73 (0.70) | 0.005

Attitudes Toward Peer Observation and Feedback

There were no significant changes in attitudes toward observation and feedback on teaching. A strong preprogram belief that observation and feedback can improve teaching skills increased slightly, but not significantly, after the program. Participants remained largely neutral in expectation of discomfort with giving or receiving peer feedback. Prior to the program, there was a slight tendency to believe that observation and feedback is more effective when done by more skilled and experienced colleagues; this belief diminished, but not significantly (Table 3).

Attitudes Toward Peer Observation and Feedback Pre‐ and Post‐intervention.
Statement | Mean Pre (SD) | Mean Post (SD) | P
  • NOTE: 1 = strongly disagree, 3 = neutral, 5 = strongly agree. N = 15. Abbreviations: Post, post‐intervention; Pre, pre‐intervention; SD, standard deviation.

Being observed and receiving feedback can improve my teaching skills. | 4.47 (1.06) | 4.60 (0.51) | 0.941
My teaching skills cannot improve without observation with feedback. | 2.93 (1.39) | 3.47 (1.30) | 0.188
Observation with feedback is most effective when done by colleagues who are expert educators. | 3.53 (0.83) | 3.33 (0.98) | 0.180
Observation with feedback is most effective when done by colleagues who have been teaching many years. | 3.40 (0.91) | 3.07 (1.03) | 0.143
The thought of observing and giving feedback to my colleagues makes me uncomfortable. | 3.13 (0.92) | 3.00 (1.13) | 0.565
The thought of being observed by a colleague and receiving feedback makes me uncomfortable. | 3.20 (0.94) | 3.27 (1.22) | 0.747

Program Evaluation

The number of responses to the program evaluation questions varied. The majority of participants found the program to be very beneficial (1 = strongly disagree, 5 = strongly agree [n, mean ± SD]): "My teaching has improved as a result of this program" (n=14, 4.9 ± 0.3). Both giving (n=11, 4.2 ± 1.6) and receiving (n=13, 4.6 ± 1.1) feedback were felt to have improved teaching skills. There was strong agreement from respondents that they would participate in the program in the future: "I am likely to participate in this program in the future" (n=12, 4.6 ± 0.9).

DISCUSSION

Previous studies have shown that teaching skills are unlikely to improve without feedback,[28, 29, 30] yet feedback for hospitalists is usually limited to summative, end‐rotation evaluations from learners, disconnected from the teaching encounter. Our theory‐based, rationally designed peer observation and feedback program resulted in increased confidence in the ability to give feedback, receive feedback, and teach effectively. Participation did not result in negative attitudes toward giving and receiving feedback from colleagues. Participants self‐reported increased performance of important teaching behaviors. Most participants rated the program very highly, and endorsed improved teaching skills as a result of the program.

Our experience provides several lessons for other groups considering the implementation of peer feedback to strengthen teaching. First, we suggest that hospitalist groups may expect variable degrees of participation in a voluntary peer feedback program. In our program, 41% of eligible attendings did not participate. We did not specifically investigate why; we speculate that they may not have had the time, may have believed that their teaching skills were already strong, or may have been daunted by the idea of peer review. It is also possible that participants were a self‐selected group who were the most motivated to strengthen their teaching. Second, we note the steep decline in the number of observations in the second half of the year. Informal assessment of the reasons for the drop‐off suggested that, after initial enthusiasm for the program, navigating the logistics of observing the same peer in the second half of the year proved prohibitive for many participants. Therefore, future versions of peer feedback programs may benefit from removing the dyad requirement and encouraging all participants to observe one another whenever possible.

With these lessons in mind, we believe that a peer observation program could be implemented by other hospital medicine groups. The program does not require extensive content expertise or senior faculty but does require engaged leadership and interested and motivated faculty. Groups could identify an individual in their group with an interest in clinical teaching who could then be responsible for creating the training session (materials available upon request). We believe that with only a small upfront investment, most hospital medicine groups could use this as a model to build a peer observation program aimed at improving clinical teaching.

Our study has several limitations. As noted above, our participation rate was 59%, and the number of participating attendings declined through the year. We did not examine whether our program resulted in advances in the knowledge, skills, or attitudes of the learners; because each attending teaching session was unique, it was not possible to measure changes in learner knowledge. Our primary outcome measures relied on self‐assessment rather than higher order and more objective measures of teaching efficacy. Furthermore, our results may not be generalizable to other programs, given the heterogeneity in service structures and teaching practices across the country. This was an uncontrolled study; some of the observed changes may have occurred independent of the intervention through the natural evolution of clinical teaching. As with any educational intervention that integrates multiple strategies, we are not able to discern whether the improved outcomes were the result of the initial didactic sessions, the refresher sessions, or the peer feedback itself. Serial assessments of the frequency of teaching behaviors were not done because of the low number of observations in the second half of the program. Finally, our 10‐item tool derived from the validated SFDP‐26 tool is not itself a validated assessment of teaching.

We acknowledge that the increased confidence seen in our participants does not necessarily predict improved performance. Although increased confidence in core skills is a necessary step that can lead to changes in behavior, further studies are needed to determine whether the increase in faculty confidence that results from peer observation and feedback translates into improved educational outcomes.

The pressure on hospitalists to be excellent teachers is here to stay. Resources to train these faculty are scarce, yet we must prioritize faculty development in teaching to optimize the training of future physicians. Our data illustrate the benefits of peer observation and feedback. Hospitalist programs should consider this option in addressing the professional development needs of their faculty.

Acknowledgements

The authors thank Zachary Martin for administrative support for the program; Gurpreet Dhaliwal, MD, and Patricia O'Sullivan, PhD, for aid in program development; and John Amory, MD, MPH, for critical review of the manuscript. The authors thank the University of California, San Francisco Office of Medical Education for funding this work with an Educational Research Grant.

Disclosures: Funding: UCSF Office of Medical Education Educational Research Grant. Ethics approval: approved by UCSF Committee on Human Research. Previous presentations: Previous versions of this work were presented as an oral presentation at the University of California, San Francisco Medical Education Day, San Francisco, California, April 27, 2012, and as a poster presentation at the Society of General Internal Medicine 35th Annual Meeting, Orlando, Florida, May 9-12, 2012. The authors report no conflicts of interest.

References
  1. Beasley BW, McBride J, McDonald FS. Hospitalist involvement in internal medicine residencies. J Hosp Med. 2009;4(8):471-475.
  2. Stern DT, Williams BC, Gill A, Gruppen LD, Woolliscroft JO, Grum CM. Is there a relationship between attending physicians' and residents' teaching skills and students' examination scores? Acad Med. 2000;75(11):1144-1146.
  3. Griffith CH, Georgesen JC, Wilson JF. Six-year documentation of the association between excellent clinical teaching and improved students' examination performances. Acad Med. 2000;75(10 suppl):S62-S64.
  4. Roop SA, Pangaro L. Effect of clinical teaching on student performance during a medicine clerkship. Am J Med. 2001;110(3):205-209.
  5. Hauer KE, Wachter RM. Implications of the hospitalist model for medical students' education. Acad Med. 2001;76(4):324-330.
  6. Benson JA. On educating and being a physician in the hospitalist era. Am J Med. 2001;111(9B):45S-47S.
  7. Whitcomb WF, Nelson JR. The role of hospitalists in medical education. Am J Med. 1999;107(4):305-309.
  8. Flanders SA, Centor B, Weber V, McGinn T, Desalvo K, Auerbach A. Challenges and opportunities in academic hospital medicine: report from the academic hospital medicine summit. J Gen Intern Med. 2009;24(5):636-641.
  9. Reed DA, Levine RB, Miller RG, et al. Impact of duty hour regulations on medical students' education: views of key clinical faculty. J Gen Intern Med. 2008;23(7):1084-1089.
  10. Kogan JR, Pinto-Powell R, Brown LA, Hemmer P, Bellini LM, Peltier D. The impact of resident duty hours reform on the internal medicine core clerkship: results from the clerkship directors in internal medicine survey. Acad Med. 2006;81(12):1038-1044.
  11. Goitein L, Shanafelt TD, Nathens AB, Curtis JR. Effects of resident work hour limitations on faculty professional lives. J Gen Intern Med. 2008;23(7):1077-1083.
  12. Harrison R, Allen E. Teaching internal medicine residents in the new era. Inpatient attending with duty-hour regulations. J Gen Intern Med. 2006;21(5):447-452.
  13. Harrison R, Hunter AJ, Sharpe B, Auerbach AD. Survey of US academic hospitalist leaders about mentorship and academic activities in hospitalist groups. J Hosp Med. 2011;6(1):5-9.
  14. Ottolini M, Wohlberg R, Lewis K, Greenberg L. Using observed structured teaching exercises (OSTE) to enhance hospitalist teaching during family centered rounds. J Hosp Med. 2011;6(7):423-427.
  15. Sehgal NL, Sharpe BA, Auerbach AA, Wachter RM. Investing in the future: building an academic hospitalist faculty development program. J Hosp Med. 2011;6(3):161-166.
  16. Finn K, Chiappa V, Puig A, Hunt DP. How to become a better clinical teacher: a collaborative peer observation process. Med Teach. 2011;33(2):151-155.
  17. O'Sullivan PS, Irby DM. Reframing research on faculty development. Acad Med. 2011;86(4):421-428.
  18. Skeff KM, Stratos GA, Bergen MR, et al. The Stanford faculty development program: a dissemination approach to faculty development for medical teachers. Teach Learn Med. 1992;4(3):180-187.
  19. Skeff KM. Evaluation of a method for improving the teaching performance of attending physicians. Am J Med. 1983;75(3):465-470.
  20. Berbano EP, Browning R, Pangaro L, Jackson JL. The impact of the Stanford Faculty Development Program on ambulatory teaching behavior. J Gen Intern Med. 2006;21(5):430-434.
  21. Skeff KM, Stratos GA, Bergen MR. Evaluation of a medical faculty development program: a comparison of traditional pre/post and retrospective pre/post self-assessment ratings. Eval Health Prof. 1992;15(3):350-366.
  22. Skeff KM, Stratos G, Campbell M, Cooke M, Jones HW. Evaluation of the seminar method to improve clinical teaching. J Gen Intern Med. 1986;1(5):315-322.
  23. Skeff KM, Stratos GA, Bergen MR, Sampson K, Deutsch SL. Regional teaching improvement programs for community-based teachers. Am J Med. 1999;106(1):76-80.
  24. Skeff KM, Stratos GA, Berman J, Bergen MR. Improving clinical teaching. Evaluation of a national dissemination program. Arch Intern Med. 1992;152(6):1156-1161.
  25. Litzelman DK, Stratos GA, Marriott DJ, Skeff KM. Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Acad Med. 1998;73(6):688-695.
  26. Litzelman DK, Westmoreland GR, Skeff KM, Stratos GA. Student and resident evaluations of faculty—how reliable are they? Factorial validation of an educational framework using residents' evaluations of clinician-educators. Acad Med. 1999;74(10):S25-S27.
  27. Marriott DJ, Litzelman DK. Students' global assessments of clinical teachers: a reliable and valid measure of teaching effectiveness. Acad Med. 1998;73(10 suppl):S72-S74.
  28. Brinko KT. The practice of giving feedback to improve teaching: what is effective? J Higher Educ. 1993;64(5):574-593.
  29. Skeff KM, Stratos GA, Mygdal W, et al. Faculty development. A resource for clinical teachers. J Gen Intern Med. 1997;12(suppl 2):S56-S63.
  30. Steinert Y, Mann K, Centeno A, et al. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME guide no. 8. Med Teach. 2006;28(6):497-526.
  31. Wilkerson L, Irby DM. Strategies for improving teaching practices: a comprehensive approach to faculty development. Acad Med. 1998;73(4):387-396.
  32. Schum TR, Yindra KJ. Relationship between systematic feedback to faculty and ratings of clinical teaching. Acad Med. 1996;71(10):1100-1102.
  33. Beckman TJ. Lessons learned from a peer review of bedside teaching. Acad Med. 2004;79(4):343-346.
  34. Beckman TJ, Lee MC, Rohren CH, Pankratz VS. Evaluating an instrument for the peer review of inpatient teaching. Med Teach. 2003;25(2):131-135.
  35. Siddiqui ZS, Jonas-Dwyer D, Carr SE. Twelve tips for peer observation of teaching. Med Teach. 2007;29(4):297-300.
  36. Speer AJ, Elnicki DM. Assessing the quality of teaching. Am J Med. 1999;106(4):381-384.
  37. Bienstock JL, Katz NT, Cox SM, Hueppchen N, Erickson S, Puscheck EE. To the point: medical education reviews—providing feedback. Am J Obstet Gynecol. 2007;196(6):508-513.
  21. Skeff KM, Stratos GA, Bergen MR. Evaluation of a medical faculty development program: a comparison of traditional pre/post and retrospective pre/post self‐assessment ratings. Eval Health Prof. 1992;15(3):350366.
  22. Skeff KM, Stratos G, Campbell M, Cooke M, Jones HW. Evaluation of the seminar method to improve clinical teaching. J Gen Intern Med. 1986;1(5):315322.
  23. Skeff KM, Stratos GA, Bergen MR, Sampson K, Deutsch SL. Regional teaching improvement programs for community‐based teachers. Am J Med. 1999;106(1):7680.
  24. Skeff KM, Stratos GA, Berman J, Bergen MR. Improving clinical teaching. Evaluation of a national dissemination program. Arch Intern Med. 1992;152(6):11561161.
  25. Litzelman DK, Stratos GA, Marriott DJ, Skeff KM. Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Acad Med. 1998;73(6):688695.
  26. Litzelman DK, Westmoreland GR, Skeff KM, Stratos GA. Student and resident evaluations of faculty—how reliable are they? Factorial validation of an educational framework using residents' evaluations of clinician‐educators. Acad Med. 1999;74(10):S25S27.
  27. Marriott DJ, Litzelman DK. Students' global assessments of clinical teachers: a reliable and valid measure of teaching effectiveness. Acad Med. 1998;73(10 suppl):S72S74.
  28. Brinko KT. The practice of giving feedback to improve teaching: what is effective? J Higher Educ. 1993;64(5):574593.
  29. Skeff KM, Stratos GA, Mygdal W, et al. Faculty development. A resource for clinical teachers. J Gen Intern Med. 1997;12(suppl 2):S56S63.
  30. Steinert Y, Mann K, Centeno A, et al. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME guide no. 8. Med Teach. 2006;28(6):497526.
  31. Wilkerson L, Irby DM. Strategies for improving teaching practices: a comprehensive approach to faculty development. Acad Med. 1998;73(4):387396.
  32. Schum TR, Yindra KJ. Relationship between systematic feedback to faculty and ratings of clinical teaching. Acad Med. 1996;71(10):11001102.
  33. Beckman TJ. Lessons learned from a peer review of bedside teaching. Acad Med. 2004;79(4):343346.
  34. Beckman TJ, Lee MC, Rohren CH, Pankratz VS. Evaluating an instrument for the peer review of inpatient teaching. Med Teach. 2003;25(2):131135.
  35. Siddiqui ZS, Jonas‐Dwyer D, Carr SE. Twelve tips for peer observation of teaching. Med Teach. 2007;29(4):297300.
  36. Speer AJ, Elnicki DM. Assessing the quality of teaching. Am J Med. 1999;106(4):381384.
  37. Bienstock JL, Katz NT, Cox SM, Hueppchen N, Erickson S, Puscheck EE. To the point: medical education reviews—providing feedback. Am J Obstet Gynecol. 2007;196(6):508513.
Issue
Journal of Hospital Medicine - 9(4)
Page Number
244-250
Publications
Article Type
Display Headline
Faculty development for hospitalists: Structured peer observation of teaching
Sections
Article Source

© 2014 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Somnath Mookherjee, MD, Department of Medicine, Division of General Internal Medicine, University of Washington, 1959 NE Pacific St, Box 356429, Seattle, WA 98195; Telephone: 206‐744‐3391; Fax: 206‐221‐8732; E‐mail: smookh@u.washington.edu

Update in Hospital Palliative Care

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Update in hospital palliative care

Seriously ill patients frequently receive care in hospitals,[1, 2, 3] and palliative care is a core competency for hospitalists.[4, 5] The goal of this update was to summarize and critique recently published research that has the highest potential to impact the clinical practice of palliative care in the hospital. We reviewed articles published between January 2012 and May 2013. To identify articles, we hand‐searched 22 leading journals (see Appendix) and the Cochrane Database of Systematic Reviews, and performed a PubMed keyword search using the terms hospice and palliative care. We evaluated identified articles based on scientific rigor and relevance to hospital practice. In this review, we summarize 9 articles that were collectively selected as having the highest impact on the clinical practice of hospital palliative care. We summarize each article and its findings and note cautions and implications for practice.

SYMPTOM MANAGEMENT

Indwelling Pleural Catheters and Talc Pleurodesis Provide Similar Dyspnea Relief in Patients With Malignant Pleural Effusions

Davies HE, Mishra EK, Kahan BC, et al. Effect of an indwelling pleural catheter vs chest tube and talc pleurodesis for relieving dyspnea in patients with malignant pleural effusion. JAMA. 2012;307:2383-2389.

Background

Expert guidelines recommend chest‐tube insertion and talc pleurodesis as a first‐line therapy for symptomatic malignant pleural effusions, but indwelling pleural catheters are gaining in popularity.[6] The optimal management is unknown.

Findings

A total of 106 patients with newly diagnosed symptomatic malignant pleural effusion were randomized to undergo talc pleurodesis or placement of an indwelling pleural catheter. Most patients had metastatic breast or lung cancer. Overall, there were no differences in relief of dyspnea at 42 days between patients who received indwelling catheters and pleurodesis; importantly, more than 75% of patients in both groups reported improved shortness of breath. The initial hospitalization was much shorter in the indwelling catheter group (0 days vs 4 days). There was no difference in quality of life, but in surviving patients, dyspnea at 6 months was better with the indwelling catheter. In the talc group, 22% of patients required further pleural procedures compared with 6% in the indwelling catheter group. Patients in the talc group had a higher frequency of adverse events than in the catheter group (40% vs 13%). In the catheter group, the most common adverse events were pleural infection, cellulitis, and catheter obstruction.

Cautions

The study was small and unblinded, and the primary outcome was subjective dyspnea. The study occurred at 7 hospitals, and the impact of institutional or provider experience was not taken into account. Last, overall costs of care, which could impact the choice of intervention, were not calculated.

Implications

This was a small but well‐done study showing that indwelling catheters and talc pleurodesis provide similar relief of dyspnea 42 days postintervention. Given these results, both interventions seem to be acceptable options. Clinicians and patients could select the best option based on local procedural expertise and patient factors such as preference, ability to manage a catheter, and life expectancy.

Most Dying Patients Do Not Experience Increased Respiratory Distress When Oxygen is Withdrawn

Campbell ML, Yarandi H, Dove‐Medows E. Oxygen is nonbeneficial for most patients who are near death. J Pain Symptom Manage. 2013;45(3):517-523.

Background

Oxygen is frequently administered to patients at the end of life, yet there is limited evidence evaluating whether oxygen reduces respiratory distress in dying patients.

Findings

In this double‐blind, repeated‐measure study, patients served as their own controls as the investigators evaluated respiratory distress with and without oxygen therapy. The study included 32 patients who were enrolled in hospice or seen in palliative care consultation and had a diagnosis such as lung cancer or heart failure that might cause dyspnea. Medical air (nasal cannula with air flow), supplemental oxygen, and no flow were randomly alternated every 10 minutes for 1 hour. Blinded research assistants used a validated observation scale to compare respiratory distress under each condition. At baseline, 27 of 32 (84%) patients were on oxygen. Three patients, all of whom were conscious and on oxygen at baseline, experienced increased respiratory distress without oxygen; reapplication of supplemental oxygen relieved their distress. The other 29 patients had no change in respiratory distress under the oxygen, medical air, and no flow conditions.

Cautions

All patients in this study were near death as measured by the Palliative Performance Scale, which assesses prognosis based on functional status and level of consciousness. Patients were excluded if they were receiving high‐flow oxygen by face mask or were experiencing respiratory distress at the time of initial evaluation. Some patients experienced increased discomfort after withdrawal of oxygen. Close observation is needed to determine which patients will experience distress.

Implications

The majority of patients who were receiving oxygen at baseline experienced no change in respiratory comfort when oxygen was withdrawn, supporting previous evidence that oxygen provides little benefit in nonhypoxemic patients. Oxygen may be an unnecessary intervention near death and has the potential to add to discomfort through nasal dryness and decreased mobility.

Sennosides Performed Similarly to Docusate Plus Sennosides in Managing Opioid‐Induced Constipation in Seriously Ill Patients

Tarumi Y, Wilson MP, Szafran O, Spooner GR. Randomized, double‐blind, placebo‐controlled trial of oral docusate in the management of constipation in hospice patients. J Pain Symptom Manage. 2013;45(1):2-13.

Background

Seriously ill patients frequently suffer from constipation, often as a result of opioid analgesics. Hospital clinicians should seek to optimize bowel regimens to prevent opioid‐induced constipation. A combination of the stimulant laxative sennoside and the stool softener docusate is often recommended to treat and prevent constipation. However, docusate may add little benefit beyond sennosides alone and carries real burdens, including interfering with the absorption of other medications, adding to patients' pill burden, and increasing nursing workload.[7]

Findings

In this double‐blinded trial, 74 patients in 3 inpatient hospices in Canada were randomized to receive sennosides plus either docusate 100 mg or placebo tablets twice daily for 10 days. Most patients had cancer as a life‐limiting diagnosis and received opioids during the study period. All were able to tolerate pills and food or sips of fluid. There was no significant difference between the 2 groups in stool frequency, volume, consistency, or patients' perceptions of difficulty with defecation. The percentage of patients who had a bowel movement at least every 3 days was 71% in the docusate plus sennosides group and 81% in the sennosides‐only group (P=0.45). There was also no significant difference between the groups in sennoside dose (which ranged from 1 to 3 of the 8.6 mg tablets daily), mean morphine equivalent daily dosage, or use of other bowel interventions.

Cautions

The trial was small, though it was adequately powered to detect a clinically meaningful difference of 0.5 in the average number of bowel movements per day between the 2 groups. The consent rate was low (26%), and the authors do not detail why eligible patients were not randomized. Patients who did not participate might have had different responses.
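
As a side note on what a power statement like this means in practice, the sketch below shows how a target difference of 0.5 bowel movements per day translates into a per-group sample size. It is only an illustration: the standard deviation, alpha, and power used here are assumptions, not values reported by the trial.

    # Illustrative sample-size calculation for a two-group comparison of mean
    # daily bowel movements. All inputs are assumptions for illustration only.
    from statsmodels.stats.power import TTestIndPower

    target_difference = 0.5   # bowel movements per day (the margin named in the text)
    assumed_sd = 0.7          # hypothetical standard deviation, not reported here
    effect_size = target_difference / assumed_sd  # standardized effect (Cohen's d)

    n_per_group = TTestIndPower().solve_power(
        effect_size=effect_size, alpha=0.05, power=0.80, alternative='two-sided')
    print(f"Approximate patients needed per group: {n_per_group:.0f}")

Under these assumed values the answer comes out in the low 30s per group, which is broadly consistent with the 74 patients enrolled; a larger assumed standard deviation would push the required sample size above what the trial recruited.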

Implications

Consistent with previous work,[7] these results indicate that docusate is probably not needed for routine management of opioid‐induced constipation in seriously ill patients.

Sublingual Atropine Performed Similarly to Placebo in Reducing Noise Associated With Respiratory Rattle Near Death

Heisler M, Hamilton G, Abbott A, et al. Randomized double‐blind trial of sublingual atropine vs. placebo for the management of death rattle. J Pain Symptom Manage. 2012;45(1):14-22.

Background

Increased respiratory tract secretions in patients near death can cause noisy breathing, often referred to as a death rattle. Antimuscarinic medications, such as atropine, are frequently used to decrease audible respirations and family distress, though little evidence exists to support this practice.

Findings

In this double‐blind, placebo‐controlled, parallel group trial at 3 inpatient hospices, 177 terminally ill patients with audible respiratory secretions were randomized to 2 drops of sublingual atropine 1% solution or placebo drops. Bedside nurses rated patients' respiratory secretions at enrollment and at 2 and 4 hours after receiving atropine or placebo. There was no difference in the proportion of patients with improved noise scores between atropine and placebo at 2 hours (37.8% vs 41.3%, P=0.24) or at 4 hours (39.7% vs 51.7%, P=0.21). There was also no difference in the safety end point of change in heart rate (P=0.47).

Cautions

Previous studies comparing different anticholinergic medications and routes of administration to manage audible respiratory secretions had variable response rates but suggested a benefit to antimuscarinic medications. However, these trials had significant methodological limitations including lack of randomization and blinding. The improvement in death rattle over time in other studies may suggest a favorable natural course for respiratory secretions rather than a treatment effect.

Implications

Although generalizability to other antimuscarinic medications and routes of administration is limited, in a randomized, double‐blind, placebo‐controlled trial, sublingual atropine did not reduce the noise from respiratory secretions when compared to placebo.

PATIENT AND FAMILY OUTCOMES AFTER CARDIOPULMONARY RESUSCITATION

Over Half of Older Adult Survivors of In‐Hospital Cardiopulmonary Resuscitation Were Alive At 1 Year

Chan PS, Krumholz HM, Spertus JA, et al. Long‐term outcomes in elderly survivors of in‐hospital cardiac arrest. N Engl J Med. 2013;368:1019-1026.

Background

Studies of cardiopulmonary resuscitation (CPR) outcomes have focused on survival to hospital discharge. Little is known about long‐term outcomes following in‐hospital cardiac arrest in older adults.

Findings

The authors analyzed data from the Get With The Guidelines-Resuscitation registry from 2000 to 2008 and Medicare inpatient files from 2000 to 2010. The cohort included 6972 patients at 401 hospitals who were discharged after surviving in‐hospital arrest. Outcomes were survival and freedom from hospital readmission at 1 year after discharge. At discharge, 48% of patients had either no or mild neurologic disability; the remainder had moderate to severe neurologic disability. Overall, 58% of patients who were discharged were still alive at 1 year. Survival rates were lowest for patients who were discharged in a coma or vegetative state (8% at 1 year) and highest for those discharged with mild or no disability (73% at 1 year). Older patients had lower survival rates than younger patients, as did men compared with women and blacks compared with whites. At 1 year, 34.4% of the patients had not been readmitted. Predictors of readmission were similar to the predictors of lower survival.

Cautions

This study only analyzed survival data from patients who survived to hospital discharge after receiving in‐hospital CPR, not all patients who had a cardiac arrest. Thus, the survival rates reported here do not include patients who died during the original arrest, or who survived the arrest but died during their hospitalization. The 1‐year survival rate for people aged 65 years and above following a cardiac arrest is not reported but is likely to be about 10%, based on data from this registry.[8] Data were not available for health status, neurologic status, or quality of life of the survivors at 1 year.
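
The "about 10%" estimate follows from simple arithmetic, sketched below. The survival-to-discharge range used here is an approximation drawn from the registry report cited as reference 8, not a value reported in this study, so treat the output as a back-of-the-envelope check rather than a study result.

    # Back-of-the-envelope check of the "about 10%" figure quoted above.
    # The survival-to-discharge range is an approximation attributed to the
    # registry data in reference 8; the 58% figure is from this study.
    survival_to_discharge = (0.15, 0.20)    # approximate range over the study period
    one_year_survival_if_discharged = 0.58  # reported among discharged survivors

    for s in survival_to_discharge:
        overall = s * one_year_survival_if_discharged
        print(f"Overall 1-year survival is roughly {overall:.0%}")

Multiplying the two probabilities gives roughly 9% to 12%, consistent with the approximately 10% figure cited above.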

Implications

Older patients who receive in‐hospital CPR and have a good neurologic status at hospital discharge have good long‐term outcomes. In counseling patients about CPR, it is important to note that most patients who receive CPR do not survive to hospital discharge.

Families Who Were Present During CPR Had Decreased Post‐traumatic Stress Symptoms

Jabre P, Belpomme V, Azoulay E, et al. Family presence during cardiopulmonary resuscitation. N Engl J Med. 2013;368:1008-1018.

Background

Family members who watch their loved ones undergo CPR might experience increased emotional distress. Alternatively, observing CPR may allow them to appreciate the efforts made for their loved one and provide comfort at a challenging time. The balance of benefit and harm is unclear.

Findings

Between 2009 and 2011, 15 prehospital emergency medical service units in France were randomized to offer adult family members the opportunity to observe CPR or to follow their usual practice. A total of 570 relatives were enrolled. In the intervention group, 79% of relatives observed CPR, compared to 43% in the control group. There was no difference in the effectiveness of CPR between the 2 groups. At 90 days, post‐traumatic stress symptoms were more common in the control group (adjusted odds ratio [OR]: 1.7; 95% confidence interval [CI]: 1.2-2.5), and relatives who were present for the resuscitation had fewer symptoms of anxiety and depression (P<0.009 for both). Stress among the medical teams performing CPR did not differ between the 2 groups. No malpractice claims were filed in either group.
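
For readers who want to see where a figure like the adjusted odds ratio above comes from, the sketch below computes an unadjusted odds ratio and a Wald 95% confidence interval from a simple 2x2 table. The counts are hypothetical and chosen only for illustration; the published estimate was adjusted for other covariates and cannot be reproduced from this summary.

    # Unadjusted odds ratio with a Wald 95% CI from a hypothetical 2x2 table.
    # These counts are invented for illustration; they are not the trial's data.
    from math import exp, log, sqrt

    # Rows: post-traumatic stress symptoms present / absent
    # Columns: control group / family-present group
    a, b = 80, 60    # symptoms present: control, family-present (hypothetical)
    c, d = 120, 150  # symptoms absent:  control, family-present (hypothetical)

    odds_ratio = (a * d) / (b * c)
    se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    ci_low = exp(log(odds_ratio) - 1.96 * se_log_or)
    ci_high = exp(log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")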

Cautions

The study was conducted in France, so the results may not be generalizable to other countries or health care systems. In addition, the observed resuscitations were for patients who suffered cardiac arrest at home; it is unclear whether the same results would be found in the emergency department or hospital.

Implications

This is the highest‐quality study to date in this area, and it argues for actively inviting family members to be present during resuscitation efforts in the home. Further studies are needed to determine whether hospitals should implement standard protocols. In the meantime, providers who perform CPR should consider inviting families to observe, as doing so may result in less emotional distress for family members.

COMMUNICATION AND DECISION MAKING

Surrogate Decision Makers Interpreted Prognostic Information Optimistically

Zier LS, Sottile PD, Hong SY, et al. Surrogate decision makers' interpretation of prognostic information: a mixed‐methods study. Ann Intern Med. 2012;156:360-366.

Background

Surrogates of critically ill patients often have beliefs about prognosis that are discordant from what is told to them by providers. Little is known about why this is the case.

Findings

Eighty surrogates of patients in intensive care units (ICUs) were given questionnaires with hypothetical prognostic statements and asked to identify a survival probability associated with each statement on a 0% to 100% scale. Interviewers examined the questionnaires to identify responses that were not concordant with the given prognostic statements. They then interviewed participants to determine why the answers were discordant. The researchers found that surrogates were more likely to offer an overoptimistic interpretation of statements communicating a high risk of death, compared to statements communicating a low risk of death. The qualitative interviews revealed that surrogates felt they needed to express ongoing optimism and that patient factors not known to the medical team would lead to better outcomes.

Cautions

The participants were surrogates who were present in the ICU at the time when study investigators were there, and thus the results may not be generalizable to all surrogates. Only a subset of participants completed qualitative interviews. Prognostic statements were hypothetical. Written prognostic statements may be interpreted differently than spoken statements.

Implications

Surrogate decision makers may interpret prognostic statements optimistically, especially when a high risk of death is estimated. Inaccurate interpretation may be related to personal beliefs about the patients' strengths and a need to hold onto hope for a positive outcome. When communicating with surrogates of critically ill patients, providers should be aware that, beyond the actual information shared, many other factors influence surrogates' beliefs about prognosis.

A Majority of Patients With Metastatic Cancer Felt That Chemotherapy Might Cure Their Disease

Weeks JC, Catalano PJ, Cronin A, et al. Patients' expectations about effects of chemotherapy for advanced cancer. N Engl J Med. 2012;367:1616-1625.

Background

Chemotherapy for advanced cancer is not curative, and many cancer patients overestimate their prognosis. Little is known about patients' understanding of the goals of chemotherapy when cancer is advanced.

Findings

Participants were part of the Cancer Care Outcomes Research and Surveillance study. Patients with stage IV lung or colon cancer who opted to receive chemotherapy (n=1193) were asked how likely they thought it was that the chemotherapy would cure their cancer. A majority (69% of lung cancer patients and 81% of colon cancer patients) felt that chemotherapy might cure their disease. Those who rated their physicians very favorably in satisfaction surveys were more likely to believe that chemotherapy might be curative than those who rated their physicians less favorably (OR: 1.90; 95% CI: 1.33-2.72).

Cautions

The study did not include patients who died soon after diagnosis and thus does not provide information about those who opted for chemotherapy but did not survive to the interview. It is possible that responses were influenced by participants' need to express optimism (social desirability bias). It is not clear how or whether prognostic disclosure by physicians caused the lower satisfaction ratings.

Implications

Despite the fact that stage IV lung and colon cancer are not curable with chemotherapy, a majority of patients reported believing that chemotherapy might cure their disease. Hospital clinicians should be aware that many patients who they view as terminally ill believe their illness may be cured.

Older Patients Who Viewed a Goals‐of‐Care Video at Admission to a Skilled Nursing Facility Were More Likely to Prefer Comfort Care

Volandes AE, Brandeis GH, Davis AD, et al. A randomized controlled trial of a goals‐of‐care video for elderly patients admitted to skilled nursing facilities. J Palliat Med. 2012;15:805-811.

Background

Seriously ill older patients are frequently discharged from hospitals to skilled nursing facilities (SNFs). It is important to clarify and document patients' goals for care at the time of admission to SNFs, to ensure that care provided there is consistent with patients' preferences. Previous work has shown promise using videos to assist patients in advance‐care planning, providing realistic and standardized portrayals of different treatment options.[9, 10]

Findings

English‐speaking patients at least 65 years of age who did not have altered mental status were randomized to hear a verbal description (n=51) or view a 6‐minute video (n=50) of 3 possible goals of medical care: life‐prolonging care, limited medical care, and comfort care. The video presented the same information as the verbal description, accompanied by pictures of patients. After the video or narrative, patients were asked what their care preference would be if they became more ill while at the SNF. Patients who viewed the video were more likely to report a preference for comfort care than patients who received the narrative (80% vs 57%, P=0.02). In a review of medical records, only 31% of patients who reported a preference for comfort care had a do‐not‐resuscitate order at the SNF.
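
The preference comparison above is, at its core, a two-sample test of proportions. The sketch below sets up that comparison with counts back-calculated from the reported percentages and group sizes; it is illustrative only, and because the published analysis may have used a different test, the resulting P value should not be expected to match the reported P=0.02 exactly.

    # Approximate two-proportion comparison of comfort-care preference.
    # Counts are back-calculated from the reported 80% of 50 (video) and
    # 57% of 51 (verbal); the trial's own test may differ, so treat this
    # as a sketch rather than a reproduction of the published analysis.
    from statsmodels.stats.proportion import proportions_ztest

    preferred_comfort = [40, 29]  # video group, verbal-description group
    group_sizes = [50, 51]

    z_stat, p_value = proportions_ztest(count=preferred_comfort, nobs=group_sizes)
    print(f"z = {z_stat:.2f}, two-sided P = {p_value:.3f}")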

Cautions

The study was conducted at 2 nursing homes located in the Boston, Massachusetts area, which may limit generalizability. Assessors were not blinded to whether the patient saw the video or received the narrative, which may have introduced bias. The authors note that the video aimed to present the different care options without valuing one over the other, though it may have inadvertently presented one option in a more favorable light.

Implications

Videos may be powerful tools for helping nursing home patients to clarify goals of care, and might be applied in the hospital setting prior to transferring patients to nursing homes. There is a significant opportunity to improve concordance of care with preferences through better documentation and implementation of code status orders when transferring patients to SNFs.

Acknowledgments

Disclosures: Drs. Anderson and Johnson and Mr. Horton received an honorarium and support for travel to present findings resulting from the literature review at the Annual Assembly of the American Academy of Hospice and Palliative Medicine and Hospice and Palliative Nurses Association on March 16, 2013 in New Orleans, Louisiana. Dr. Anderson was funded by the National Center for Advancing Translational Sciences, National Institutes of Health, through UCSF‐CTSI grant number KL2TR000143. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. The authors report no conflicts of interest.

APPENDIX

Journals That Were Hand Searched to Identify Articles, By Topic Area

General:

  • British Medical Journal
  • Journal of the American Medical Association
  • Lancet
  • New England Journal of Medicine

Internal medicine:

  • Annals of Internal Medicine
  • Archives of Internal Medicine
  • Journal of General Internal Medicine
  • Journal of Hospital Medicine

Palliative care and symptom management:

  • Journal of Pain and Symptom Management
  • Journal of Palliative Care
  • Journal of Palliative Medicine
  • Palliative Medicine
  • Pain

Oncology:

  • Journal of Clinical Oncology
  • Supportive Care in Cancer

Critical care:

  • American Journal of Respiratory and Critical Care Medicine
  • Critical Care Medicine

Pediatrics:

  • Pediatrics

Geriatrics:

  • Journal of the American Geriatrics Society

Education:

  • Academic Medicine

Nursing:

  • Journal of Hospice and Palliative Nursing
  • Oncology Nursing Forum
References
  1. The Dartmouth Atlas of Health Care. Percent of Medicare decedents hospitalized at least once during the last six months of life, 2007. Available at: http://www.dartmouthatlas.org/data/table.aspx?ind=133. Accessed October 30, 2013.
  2. Teno JM, Gozalo PL, Bynum JP, et al. Change in end‐of‐life care for Medicare beneficiaries: site of death, place of care, and health care transitions in 2000, 2005, and 2009. JAMA. 2013;309(5):470-477.
  3. Warren JL, Barbera L, Bremner KE, et al. End‐of‐life care for lung cancer patients in the United States and Ontario. J Natl Cancer Inst. 2011;103(11):853-862.
  4. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1(suppl 1):48-56.
  5. Society of Hospital Medicine. The core competencies in hospital medicine. 2008. Available at: http://www.hospitalmedicine.org/Content/NavigationMenu/Education/CoreCurriculum/Core_Competencies.htm. Accessed October 30, 2013.
  6. Roberts M, Neville E, Berrisford R, Antunes G, Ali N. Management of a malignant pleural effusion: British Thoracic Society Pleural Disease Guideline. Thorax. 2010;65:ii32-ii40.
  7. Hawley PH, Byeon JJ. A comparison of sennosides‐based bowel protocols with and without docusate in hospitalized patients with cancer. J Palliat Med. 2008;11(4):575-581.
  8. Girotra S, Nallamothu B, Spertus J, Li Y, Krumholz M, Chan P. Trends in survival after in‐hospital cardiac arrest. N Engl J Med. 2012;367:1912-1920.
  9. El‐Jawahri A, Podgurski LM, Eichler AF, et al. Use of video to facilitate end‐of‐life discussions with patients with cancer: a randomized controlled trial. J Clin Oncol. 2010;28(2):305-310.
  10. Volandes AE, Levin TT, Slovin S, et al. Augmenting advance care planning in poor prognosis cancer with a video decision aid: a preintervention‐postintervention study. Cancer. 2012;118(17):4331-4338.
Issue
Journal of Hospital Medicine - 8(12)
Publications
Page Number
715-720
Sections
Seriously ill patients frequently receive care in hospitals,[1, 2, 3] and palliative care is a core competency for hospitalists.[4, 5] The goal of this update was to summarize and critique recently published research that has the highest potential to impact the clinical practice of palliative care in the hospital. We reviewed articles published between January 2012 and May 2013. To identify articles, we hand‐searched 22 leading journals (see Appendix) and the Cochrane Database of Systematic Reviews, and performed a PubMed keyword search using the terms hospice and palliative care. We evaluated identified articles based on scientific rigor and relevance to hospital practice. In this review, we summarize 9 articles that were collectively selected as having the highest impact on the clinical practice of hospital palliative care. We summarize each article and its findings and note cautions and implications for practice.

SYMPTOM MANAGEMENT

Indwelling Pleural Catheters and Talc Pleurodesis Provide Similar Dyspnea Relief in Patients With Malignant Pleural Effusions

Davies HE, Mishra EK, Kahan BC, et al. Effect of an indwelling pleural catheter vs chest tube and talc pleurodesis for relieving dyspnea in patients with malignant pleural effusion. JAMA. 2012;307:23832389.

Background

Expert guidelines recommend chest‐tube insertion and talc pleurodesis as a first‐line therapy for symptomatic malignant pleural effusions, but indwelling pleural catheters are gaining in popularity.[6] The optimal management is unknown.

Findings

A total of 106 patients with newly diagnosed symptomatic malignant pleural effusion were randomized to undergo talc pleurodesis or placement of an indwelling pleural catheter. Most patients had metastatic breast or lung cancer. Overall, there were no differences in relief of dyspnea at 42 days between patients who received indwelling catheters and pleurodesis; importantly, more than 75% of patients in both groups reported improved shortness of breath. The initial hospitalization was much shorter in the indwelling catheter group (0 days vs 4 days). There was no difference in quality of life, but in surviving patients, dyspnea at 6 months was better with the indwelling catheter. In the talc group, 22% of patients required further pleural procedures compared with 6% in the indwelling catheter group. Patients in the talc group had a higher frequency of adverse events than in the catheter group (40% vs 13%). In the catheter group, the most common adverse events were pleural infection, cellulitis, and catheter obstruction.

Cautions

The study was small and unblinded, and the primary outcome was subjective dyspnea. The study occurred at 7 hospitals, and the impact of institutional or provider experience was not taken into account. Last, overall costs of care, which could impact the choice of intervention, were not calculated.

Implications

This was a small but well‐done study showing that indwelling catheters and talc pleurodesis provide similar relief of dyspnea 42 days postintervention. Given these results, both interventions seem to be acceptable options. Clinicians and patients could select the best option based on local procedural expertise and patient factors such as preference, ability to manage a catheter, and life expectancy.

Most Dying Patients Do Not Experience Increased Respiratory Distress When Oxygen is Withdrawn

Campbell ML, Yarandi H, Dove‐Medows E. Oxygen is nonbeneficial for most patients who are near death. J Pain Symptom Manage. 2013;45(3):517-523.

Background

Oxygen is frequently administered to patients at the end of life, yet there is limited evidence evaluating whether oxygen reduces respiratory distress in dying patients.

Findings

In this double‐blind, repeated‐measure study, patients served as their own controls as the investigators evaluated respiratory distress with and without oxygen therapy. The study included 32 patients who were enrolled in hospice or seen in palliative care consultation and had a diagnosis such as lung cancer or heart failure that might cause dyspnea. Medical air (nasal cannula with air flow), supplemental oxygen, and no flow were randomly alternated every 10 minutes for 1 hour. Blinded research assistants used a validated observation scale to compare respiratory distress under each condition. At baseline, 27 of 32 (84%) patients were on oxygen. Three patients, all of whom were conscious and on oxygen at baseline, experienced increased respiratory distress without oxygen; reapplication of supplemental oxygen relieved their distress. The other 29 patients had no change in respiratory distress under the oxygen, medical air, and no flow conditions.

Cautions

All patients in this study were near death as measured by the Palliative Performance Scale, which assesses prognosis based on functional status and level of consciousness. Patients were excluded if they were receiving high‐flow oxygen by face mask or were experiencing respiratory distress at the time of initial evaluation. Some patients experienced increased discomfort after withdrawal of oxygen. Close observation is needed to determine which patients will experience distress.

Implications

The majority of patients who were receiving oxygen at baseline experienced no change in respiratory comfort when oxygen was withdrawn, supporting previous evidence that oxygen provides little benefit in nonhypoxemic patients. Oxygen may be an unnecessary intervention near death and has the potential to add to discomfort through nasal dryness and decreased mobility.

Sennosides Performed Similarly to Docusate Plus Sennosides in Managing Opioid‐Induced Constipation in Seriously Ill Patients

Tarumi Y, Wilson MP, Szafran O, Spooner GR. Randomized, double‐blind, placebo‐controlled trial of oral docusate in the management of constipation in hospice patients. J Pain Symptom Manage. 2013;45:2-13.

Background

Seriously ill patients frequently suffer from constipation, often as a result of opioid analgesics, and hospital clinicians should optimize bowel regimens to prevent it. A combination of the stimulant laxative sennoside and the stool softener docusate is often recommended to treat and prevent constipation. Docusate, however, may add no benefit beyond sennosides and carries burdens of its own, including interfering with the absorption of other medications, adding to patients' pill burden, and increasing nursing workload.[7]

Findings

In this double‐blinded trial, 74 patients in 3 inpatient hospices in Canada were randomized to receive sennoside plus either docusate 100 mg or placebo tablets twice daily for 10 days. Most patients had cancer as a life‐limiting diagnosis and received opioids during the study period. All were able to tolerate pills and food or sips of fluid. There was no significant difference between the 2 groups in stool frequency, volume, consistency, or patients' perceptions of difficulty with defecation. The percentage of patients who had a bowel movement at least every 3 days was 71% in the docusate plus sennoside group and 81% in the sennoside‐only group (P=0.45). There was also no significant difference between the groups in sennoside dose (which ranged between 1 and 3 of the 8.6 mg tablets daily), mean morphine‐equivalent daily dose, or use of other bowel interventions.

Cautions

The trial was small, though it was adequately powered to detect a clinically meaningful difference of 0.5 bowel movements per day between the 2 groups. The consent rate was low (26%), and the authors do not detail the reasons patients declined randomization; patients who did not participate might have responded differently.
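
To make the power claim concrete, the sketch below back-calculates an approximate per-group sample size for detecting a 0.5 bowel-movement-per-day difference; the significance level, power, and standard deviation are illustrative assumptions, not values reported by the trial.

    # Approximate per-group sample size for a two-sample comparison of means
    # (normal approximation). Alpha, power, and the standard deviation are
    # assumptions for illustration only; the trial summary above reports none of them.
    from scipy.stats import norm

    alpha, power = 0.05, 0.80
    delta = 0.5     # clinically meaningful difference (bowel movements/day)
    sigma = 0.8     # assumed standard deviation of daily bowel movement counts

    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    n_per_group = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    print(f"approximately {n_per_group:.0f} patients per group")   # ~40 under these assumptions

Under these assumptions, roughly 40 patients per group would be needed, which is broadly in line with the 74 patients enrolled.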

Implications

Consistent with previous work,[7] these results indicate that docusate is probably not needed for routine management of opioid‐induced constipation in seriously ill patients.

Sublingual Atropine Performed Similarly to Placebo in Reducing Noise Associated With Respiratory Rattle Near Death

Heisler M, Hamilton G, Abbott A, et al. Randomized double‐blind trial of sublingual atropine vs. placebo for the management of death rattle. J Pain Symptom Manage. 2012;45(1):14-22.

Background

Increased respiratory tract secretions in patients near death can cause noisy breathing, often referred to as a death rattle. Antimuscarinic medications, such as atropine, are frequently used to decrease audible respirations and family distress, though little evidence exists to support this practice.

Findings

In this double‐blind, placebo‐controlled, parallel‐group trial at 3 inpatient hospices, 177 terminally ill patients with audible respiratory secretions were randomized to 2 drops of sublingual atropine 1% solution or placebo drops. Bedside nurses rated patients' respiratory secretions at enrollment and 2 and 4 hours after receiving atropine or placebo. The proportion of patients with improved noise scores did not differ between the atropine and placebo groups at 2 hours (37.8% vs 41.3%, P=0.24) or at 4 hours (39.7% vs 51.7%, P=0.21). There was also no difference in the safety end point of change in heart rate (P=0.47).

Cautions

Previous studies comparing different anticholinergic medications and routes of administration to manage audible respiratory secretions had variable response rates but suggested a benefit to antimuscarinic medications. However, these trials had significant methodological limitations including lack of randomization and blinding. The improvement in death rattle over time in other studies may suggest a favorable natural course for respiratory secretions rather than a treatment effect.

Implications

Although generalizability to other antimuscarinic medications and routes of administration is limited, in a randomized, double‐blind, placebo‐controlled trial, sublingual atropine did not reduce the noise from respiratory secretions when compared to placebo.

PATIENT AND FAMILY OUTCOMES AFTER CARDIOPULMONARY RESUSCITATION

Over Half of Older Adult Survivors of In‐Hospital Cardiopulmonary Resuscitation Were Alive At 1 Year

Chan PS, Krumholz HM, Spertus JA, et al. Long‐term outcomes in elderly survivors of in‐hospital cardiac arrest. N Engl J Med. 2013;368:1019-1026.

Background

Studies of cardiopulmonary resuscitation (CPR) outcomes have focused on survival to hospital discharge. Little is known about long‐term outcomes following in‐hospital cardiac arrest in older adults.

Findings

The authors analyzed data from the Get With The Guidelines-Resuscitation registry from 2000 to 2008 and Medicare inpatient files from 2000 to 2010. The cohort included 6972 patients at 401 hospitals who were discharged after surviving in‐hospital arrest. Outcomes were survival and freedom from hospital readmission at 1 year after discharge. At discharge, 48% of patients had no or mild neurologic disability; the remainder had moderate to severe neurologic disability. Overall, 58% of patients who were discharged were still alive at 1 year. Survival rates were lowest for patients discharged in a coma or vegetative state (8% at 1 year) and highest for those discharged with mild or no disability (73% at 1 year). Older patients had lower survival rates than younger patients, as did men compared with women and blacks compared with whites. At 1 year, 34.4% of patients had not been readmitted. Predictors of readmission were similar to the predictors of lower survival.

Cautions

This study only analyzed survival data from patients who survived to hospital discharge after receiving in‐hospital CPR, not all patients who had a cardiac arrest. Thus, the survival rates reported here do not include patients who died during the original arrest, or who survived the arrest but died during their hospitalization. The 1‐year survival rate for people aged 65 years and above following a cardiac arrest is not reported but is likely to be about 10%, based on data from this registry.[8] Data were not available for health status, neurologic status, or quality of life of the survivors at 1 year.
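
The roughly 10% figure follows from simple multiplication; the survival-to-discharge rate used below is an assumed value in the range reported by this registry, not a number taken from this study.

    # Back-of-the-envelope estimate of overall 1-year survival after in-hospital
    # arrest in older adults. Survival to discharge is an assumed value; only the
    # 58% conditional 1-year survival comes from the study summarized above.
    survival_to_discharge = 0.17           # assumed, for illustration
    one_year_survival_if_discharged = 0.58
    print(f"{survival_to_discharge * one_year_survival_if_discharged:.0%}")  # ~10%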

Implications

Older patients who receive in‐hospital CPR and have a good neurologic status at hospital discharge have good long‐term outcomes. In counseling patients about CPR, it is important to note that most patients who receive CPR do not survive to hospital discharge.

Families Who Were Present During CPR Had Decreased Post‐traumatic Stress Symptoms

Jabre P, Belpomme V, Azoulay E, et al. Family presence during cardiopulmonary resuscitation. N Engl J Med. 2013;368:1008-1018.

Background

Family members who watch a loved one undergo CPR might experience increased emotional distress. Alternatively, observing CPR may allow them to appreciate the efforts made on their loved one's behalf and provide comfort at a difficult time. The balance of benefit and harm is unclear.

Findings

Between 2009 and 2011, 15 prehospital emergency medical service units in France were randomized to offer adult family members the opportunity to observe CPR or follow their usual practice. A total of 570 relatives were enrolled. In the intervention group, 79% of relatives observed CPR, compared to 43% in the control group. There was no difference in the effectiveness of CPR between the 2 groups. At 90 days, post‐traumatic stress symptoms were more common in the control group (adjusted odds ratio [OR]: 1.7; 95% confidence interval [CI]: 1.2‐2.5). At 90 days, those who were present for the resuscitation also had fewer symptoms of anxiety and fewer symptoms of depression (P<0.009 for both). Stress of the medical teams involved in the CPR was not different between the 2 groups. No malpractice claims were filed in either group.

Cautions

The study was conducted entirely in France, so the results may not generalize to other countries or health systems. In addition, the observed resuscitations were for cardiac arrests that occurred in the home; it is unclear whether the same results would be found in the emergency department or hospital.

Implications

This is the highest quality study to date in this area that argues for actively inviting family members to be present for resuscitation efforts in the home. Further studies are needed to determine if hospitals should implement standard protocols. In the meantime, providers who perform CPR should consider inviting families to observe, as it may result in less emotional distress for family members.

COMMUNICATION AND DECISION MAKING

Surrogate Decision Makers Interpreted Prognostic Information Optimistically

Zier LS, Sottile PD, Hong SY, et al. Surrogate decision makers' interpretation of prognostic information: a mixed‐methods study. Ann Intern Med. 2012;156:360-366.

Background

Surrogates of critically ill patients often have beliefs about prognosis that are discordant from what is told to them by providers. Little is known about why this is the case.

Findings

Eighty surrogates of patients in intensive care units (ICUs) were given questionnaires with hypothetical prognostic statements and asked to identify a survival probability associated with each statement on a 0% to 100% scale. Interviewers examined the questionnaires to identify responses that were not concordant with the given prognostic statements. They then interviewed participants to determine why the answers were discordant. The researchers found that surrogates were more likely to offer an overoptimistic interpretation of statements communicating a high risk of death, compared to statements communicating a low risk of death. The qualitative interviews revealed that surrogates felt they needed to express ongoing optimism and that patient factors not known to the medical team would lead to better outcomes.

Cautions

The participants were surrogates who were present in the ICU at the time when study investigators were there, and thus the results may not be generalizable to all surrogates. Only a subset of participants completed qualitative interviews. Prognostic statements were hypothetical. Written prognostic statements may be interpreted differently than spoken statements.

Implications

Surrogate decision makers may interpret prognostic statements optimistically, especially when a high risk of death is estimated. Inaccurate interpretation may be related to personal beliefs about the patients' strengths and a need to hold onto hope for a positive outcome. When communicating with surrogates of critically ill patients, providers should be aware that, beyond the actual information shared, many other factors influence surrogates' beliefs about prognosis.

A Majority of Patients With Metastatic Cancer Felt That Chemotherapy Might Cure Their Disease

Weeks JC, Catalano PJ, Cronin A, et al. Patients' expectations about effects of chemotherapy for advanced cancer. N Engl J Med. 2012;367:1616-1625.

Background

Chemotherapy for advanced cancer is not curative, and many cancer patients overestimate their prognosis. Little is known about patients' understanding of the goals of chemotherapy when cancer is advanced.

Findings

Participants were part of the Cancer Care Outcomes Research and Surveillance study. Patients with stage IV lung or colon cancer who opted to receive chemotherapy (n=1193) were asked how likely they thought it was that the chemotherapy would cure their cancer. A majority (69% of lung cancer patients and 81% of colon cancer patients) felt that chemotherapy might cure their disease. Those who rated their physicians very favorably in satisfaction surveys were more likely to feel that chemotherapy might be curative, compared to those who rated their physician less favorably (OR: 1.90; 95% CI: 1.33‐2.72).

Cautions

The study did not include patients who died soon after diagnosis and thus does not provide information about those who opted for chemotherapy but did not survive to the interview. It is possible that responses were influenced by participants' need to express optimism (social desirability bias). It is not clear how or whether prognostic disclosure by physicians caused the lower satisfaction ratings.

Implications

Despite the fact that stage IV lung and colon cancer are not curable with chemotherapy, a majority of patients reported believing that chemotherapy might cure their disease. Hospital clinicians should be aware that many patients who they view as terminally ill believe their illness may be cured.

Older Patients Who Viewed a Goals‐of‐Care Video at Admission to a Skilled Nursing Facility Were More Likely to Prefer Comfort Care

Volandes AE, Brandeis GH, Davis AD, et al. A randomized controlled trial of a goals‐of‐care video for elderly patients admitted to skilled nursing facilities. J Palliat Med. 2012;15:805-811.

Background

Seriously ill older patients are frequently discharged from hospitals to skilled nursing facilities (SNFs). It is important to clarify and document patients' goals for care at the time of admission to SNFs, to ensure that care provided there is consistent with patients' preferences. Previous work has shown promise using videos to assist patients in advance‐care planning, providing realistic and standardized portrayals of different treatment options.[9, 10]

Findings

English‐speaking patients at least 65 years of age who did not have altered mental status were randomized to hear a verbal description (n=51) or view a 6‐minute video (n=50) of 3 possible goals of medical care: life‐prolonging care, limited medical care, and comfort care. The video presented the same information as the verbal description, accompanied by images of patients receiving each type of care. After the video or narrative, patients were asked what their care preference would be if they became more ill while at the SNF. Patients who viewed the video were more likely to report a preference for comfort care than patients who received the narrative (80% vs 57%, P=0.02). In a review of medical records, only 31% of patients who reported a preference for comfort care had a do‐not‐resuscitate order at the SNF.
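
As a rough check on the reported comparison, the sketch below reconstructs approximate group counts from the stated percentages and group sizes and applies a standard chi-square test; the counts and the choice of test are assumptions, since the authors' exact analysis is not described here.

    # Approximate reconstruction of the comfort-care comparison (80% of 50 vs
    # 57% of 51). Counts and the choice of test are illustrative assumptions.
    from scipy.stats import chi2_contingency

    table = [[40, 10],   # video group: preferred comfort care vs other preference
             [29, 22]]   # verbal-description group
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square P = {p:.2f}")   # ~0.02, in line with the reported P=0.02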

Cautions

The study was conducted at 2 nursing homes located in the Boston, Massachusetts area, which may limit generalizability. Assessors were not blinded to whether the patient saw the video or received the narrative, which may have introduced bias. The authors note that the video aimed to present the different care options without valuing one over the other, though it may have inadvertently presented one option in a more favorable light.

Implications

Videos may be powerful tools for helping nursing home patients to clarify goals of care, and might be applied in the hospital setting prior to transferring patients to nursing homes. There is a significant opportunity to improve concordance of care with preferences through better documentation and implementation of code status orders when transferring patients to SNFs.

Acknowledgments

Disclosures: Drs. Anderson and Johnson and Mr. Horton received an honorarium and support for travel to present findings resulting from the literature review at the Annual Assembly of the American Academy of Hospice and Palliative Medicine and Hospice and Palliative Nurses Association on March 16, 2013 in New Orleans, Louisiana. Dr. Anderson was funded by the National Center for Advancing Translational Sciences, National Institutes of Health, through UCSF‐CTSI grant number KL2TR000143. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. The authors report no conflicts of interest.

APPENDIX

Journals That Were Hand Searched to Identify Articles, By Topic Area

General:

  • British Medical Journal
  • Journal of the American Medical Association
  • Lancet
  • New England Journal of Medicine

Internal medicine:

  • Annals of Internal Medicine
  • Archives of Internal Medicine
  • Journal of General Internal Medicine
  • Journal of Hospital Medicine

Palliative care and symptom management:

  • Journal of Pain and Symptom Management
  • Journal of Palliative Care
  • Journal of Palliative Medicine
  • Palliative Medicine
  • Pain

Oncology:

  • Journal of Clinical Oncology
  • Supportive Care in Cancer

Critical care:

  • American Journal of Respiratory and Critical Care Medicine
  • Critical Care Medicine

Pediatrics:

  • Pediatrics

Geriatrics:

  • Journal of the American Geriatrics Society

Education:

  • Academic Medicine

Nursing:

  • Journal of Hospice and Palliative Nursing
  • Oncology Nursing Forum
References
  1. The Dartmouth Atlas of Health Care. Percent of Medicare decedents hospitalized at least once during the last six months of life, 2007. Available at: http://www.dartmouthatlas.org/data/table.aspx?ind=133. Accessed October 30, 2013.
  2. Teno JM, Gozalo PL, Bynum JP, et al. Change in end‐of‐life care for Medicare beneficiaries: site of death, place of care, and health care transitions in 2000, 2005, and 2009. JAMA. 2013;309(5):470-477.
  3. Warren JL, Barbera L, Bremner KE, et al. End‐of‐life care for lung cancer patients in the United States and Ontario. J Natl Cancer Inst. 2011;103(11):853-862.
  4. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1(suppl 1):48-56.
  5. Society of Hospital Medicine. The core competencies in hospital medicine. 2008. Available at: http://www.hospitalmedicine.org/Content/NavigationMenu/Education/CoreCurriculum/Core_Competencies.htm. Accessed October 30, 2013.
  6. Roberts M, Neville E, Berrisford R, Antunes G, Ali N. Management of a malignant pleural effusion: British Thoracic Society Pleural Disease Guideline. Thorax. 2010;65:ii32-ii40.
  7. Hawley PH, Byeon JJ. A comparison of sennosides‐based bowel protocols with and without docusate in hospitalized patients with cancer. J Palliat Med. 2008;11(4):575-581.
  8. Girotra S, Nallamothu B, Spertus J, Li Y, Krumholz M, Chan P. Trends in survival after in‐hospital cardiac arrest. N Engl J Med. 2012;367:1912-1920.
  9. El‐Jawahri A, Podgurski LM, Eichler AF, et al. Use of video to facilitate end‐of‐life discussions with patients with cancer: a randomized controlled trial. J Clin Oncol. 2010;28(2):305-310.
  10. Volandes AE, Levin TT, Slovin S, et al. Augmenting advance care planning in poor prognosis cancer with a video decision aid: a preintervention‐postintervention study. Cancer. 2012;118(17):4331-4338.

Address for correspondence and reprint requests: Wendy G. Anderson, MD, University of California, San Francisco, 521 Parnassus Avenue, Box 0131, San Francisco, CA 94143‐0131; Telephone: 415‐502‐2399; Fax: 415‐476‐5020; E‐mail: wendy.anderson@ucsf.edu

Development and implementation of a balanced scorecard in an academic hospitalist group

Article Type
Changed
Mon, 01/02/2017 - 19:34

The field of hospital medicine, now the fastest growing specialty in medical history,[1] was born out of pressure to improve the efficiency and quality of clinical care in US hospitals.[2] Delivering safe and high‐value clinical care is a central goal of the field and has been an essential component of its growth and success.

The clinical demands on academic hospitalists have grown recently, fueled by the need to staff services previously covered by housestaff, whose hours are now restricted. Despite these new demands, expectations have grown in other arenas as well. Academic hospitalist groups (AHGs) are often expected to make significant contributions in quality improvement, patient safety, education, research, and administration. With broad expectations beyond clinical care, AHGs face unique challenges. Groups that focus mainly on providing coverage and improving clinical performance may find that they are unable to fully contribute in these other domains. To be successful, AHGs must develop strategies that balance their energies, resources, and performance.

The balanced scorecard (BSC) was introduced by Kaplan and Norton in 1992 to allow corporations to view their performance broadly, rather than narrowly focusing on financial measures. The BSC requires organizations to develop a balanced portfolio of performance metrics across 4 key perspectives: financial, customers, internal processes, and learning and growth. Metrics within these perspectives should help answer fundamental questions about the organization (Table 1).[3] Over time, the BSC evolved from a performance measurement tool to a strategic management system.[4] Successful organizations translate their mission and vision to specific strategic objectives in each of the 4 perspectives, delineate how these objectives will help the organization reach its vision with a strategy map,[5] and then utilize the BSC to track and monitor performance to ensure that the vision is achieved.[6]

BSC Perspectives and the Questions That They Answer About the Organization: Traditional and Revised for AHCs
  • NOTE: Adapted with permission from Zelman, et al. Academic Medicine. 1999; vol 74. Wolters Kluwer Health.[11] Abbreviations: AHCs, academic health centers; BSC, balanced scorecard.

  • Financial. Traditional: How do we look to our shareholders? Revised for AHCs: What financial condition must we be in to allow us to accomplish our mission?
  • Customers. Traditional: How do customers see us? Revised for AHCs: How do we ensure that our services and products add the level of value desired by our stakeholders?
  • Internal processes. Traditional: What must we excel at? Revised for AHCs: How do we produce our products and services to add maximum value for our customers and stakeholders?
  • Learning and growth. Traditional: How can we continue to improve and create value? Revised for AHCs: How do we ensure that we change and improve in order to achieve our vision?

Although originally conceived for businesses, the BSC has found its way into the healthcare industry, with reports of successful implementation in organizations ranging from individual departments to research collaboratives[7] to national healthcare systems.[8] However, there are few reports of BSC implementation in academic health centers.[9, 10] Because most academic centers are not‐for‐profit, Zelman suggests that the 4 BSC perspectives be modified to better fit their unique characteristics (Table 1).[11] To the best of our knowledge, there is no literature describing the development of a BSC in an academic hospitalist group. In this article, we describe the development of, and early experiences with, an academic hospital medicine BSC developed as part of a strategic planning initiative.

METHODS

The University of California, San Francisco (UCSF) Division of Hospital Medicine (DHM) was established in 2005. It now has more than 50 faculty members, a number that has doubled in the last 4 years. In addition to staffing several housestaff and nonhousestaff clinical services, faculty are involved in a wide variety of nonclinical endeavors at local and national levels. They participate in and lead initiatives in education, faculty development, patient safety, care efficiency, quality improvement, information technology, and global health. There is an active research enterprise that generates nearly $5 million in grant funding annually.

Needs Assessment

During a division retreat in 2009, faculty identified several areas in need of improvement, including clinical care processes, educational promotion, faculty development, and work‐life balance. Based on these needs, divisional mission and vision statements were created (Table 2).

UCSF DHM Mission and Vision Statements
  • NOTE: Abbreviations: DHM, Division of Hospital Medicine; UCSF, University of California, San Francisco.

Our mission: to provide the highest quality clinical care, education, research, and innovation in academic hospital medicine.
Our vision: to be the best division of hospital medicine by promoting excellence, integrity, innovation, and professional satisfaction among our faculty, trainees, and staff.

Division leadership made it a priority to create a strategic plan to address these wide‐ranging issues. To accomplish this, we recognized the need to develop a formal way of translating our vision into specific and measurable objectives, establish systems of performance measurement, improve accountability, and effectively communicate these strategic goals to the group. Based on these needs, we set out to develop a divisional BSC.

Development

At the time of BSC development, the DHM was organized into 4 functional areas: quality and safety, education, faculty development, and academics and research. A task force of 8 senior faculty representing these key areas was formed. The mission and vision statements were used as the foundation for the development of division goals and objectives. The group was careful to choose objectives within each of the 4 BSC perspectives for academic centers, as defined by Zelman (Table 1). The task force then brainstormed specific metrics that would track performance within the 4 functional areas. The only stipulation during this process was that the metrics had to meet the following criteria:

  1. Important to the division and to the individual faculty members
  2. Measurable through either current or developed processes
  3. Data are valid and their validity trusted by the faculty members
  4. Amenable to improvement by faculty (ie, through their individual action they could impact the metric)

From the resulting list of metrics, we used a modified Delphi method to rank‐order candidates by importance and arrive at our final set. Kaplan and Norton noted that focusing on a manageable number of metrics (ie, a handful in each BSC perspective) is important for an achievable strategic vision.[6] With the metrics chosen, we identified data sources or developed new systems to collect data for which there was no current source. We assigned individuals responsible for collecting and analyzing the data, identified local or national benchmarks where available, and established performance targets for the coming year when possible.
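
For readers unfamiliar with the mechanics, a minimal sketch of aggregating one round of rank-order votes is shown below; the metric names, votes, and use of the median as the summary statistic are hypothetical illustrations, not the task force's actual instrument.

    # Illustrative aggregation of one Delphi round of rank-order votes
    # (lower rank = more important). Metric names and votes are hypothetical.
    from statistics import median

    votes = {
        "Patient satisfaction":  [1, 2, 1, 3, 2, 1, 2, 1],
        "Direct cost per case":  [2, 3, 2, 1, 1, 2, 1, 3],
        "Abstracts accepted":    [3, 1, 4, 2, 3, 4, 3, 2],
        "Teaching evaluations":  [4, 4, 3, 4, 4, 3, 4, 4],
    }

    # Summarize each metric by its median rank; the top-ranked metrics would be
    # fed back to the group for discussion in the next round.
    for metric in sorted(votes, key=lambda m: median(votes[m])):
        print(f"{metric}: median rank {median(votes[metric])}")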

The BSC is updated quarterly, and results are presented to the division during a noon meeting and posted on the division website. Metrics are re‐evaluated on a yearly basis. They are continued, modified, or discarded depending on performance and/or changes in strategic priorities.
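
A minimal sketch of how a quarterly metric and its at- or below-target status (the green and pink shading in Figure 1) might be represented is shown below; the perspectives follow the framework described above, but the metric names, targets, and values are hypothetical.

    # Minimal sketch of a scorecard metric and its display status. Metric names,
    # targets, and actual values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Metric:
        name: str
        perspective: str      # financial, customers, internal processes, learning and growth
        target: float
        actual: float
        higher_is_better: bool = True

        def status(self) -> str:
            at_target = (
                self.actual >= self.target
                if self.higher_is_better
                else self.actual <= self.target
            )
            return "green" if at_target else "pink"

    scorecard = [
        Metric("Patient satisfaction (top-box %)", "customers", target=80, actual=76),
        Metric("Direct cost per case ($)", "financial", target=9000, actual=9400,
               higher_is_better=False),
        Metric("Abstracts accepted", "learning and growth", target=25, actual=32),
    ]

    for m in scorecard:
        print(f"{m.perspective}: {m.name} -> {m.status()}")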

The initial BSC focused on division‐wide metrics and performance. Early efforts to develop the scorecard were framed as experimental, with no firm decision about how the metrics might ultimately be used to improve performance (eg, how public to make individual and group results, or whether to tie bonus payments to performance).

RESULTS

There were 41 initial metrics considered by the division BSC task force (Table 3). Of these, 16 were chosen for the initial BSC through the modified Delphi method. Over the past 2 years, these initial metrics have been modified to reflect current strategic goals and objectives. Figure 1 illustrates the BSC for fiscal year (FY) 2012. An online version of this, complete with graphical representations of the data and metric definitions, can be found at http://hospitalmedicine.ucsf.edu/bsc/fy2012.html. Our strategy map (Figure 2) demonstrates how these metrics are interconnected across the 4 BSC perspectives and how they fit into our overall strategic plan.

Figure 1
Division of Hospital Medicine balanced scorecard, FY 2012. Green shading signifies at or above target; pink shading signifies below target. Abbreviations: CY, calendar year; FY, fiscal year; NA, not available; Q, quarter.
Figure 2
Division of Hospital Medicine strategy map. Arrows denote relationships between objectives spanning the 4 balanced scorecard perspectives. Abbreviations: HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; PCP, primary care physician.
Brainstormed Competencies Across the Four DHM Functional Areas
  • NOTE: Abbreviations: CME, continuing medical education; DHM, Division of Hospital Medicine; ICU, intensive care unit.

Quality, Safety, and Operations:
  • Appropriate level of care
  • Billing and documentation
  • Clinical efficiency
  • Clinical professionalism
  • Communication
  • Core measures performance
  • Practice evidence‐based medicine
  • Fund of knowledge
  • Guideline adherence
  • Unplanned transfers to ICU
  • Implementation and initiation of projects
  • Length of stay
  • Medical errors
  • Mortality
  • Multidisciplinary approach to patient care
  • Multisource feedback evaluations
  • Never events
  • Patient‐centered care
  • Patient satisfaction
  • Practice‐based learning
  • Procedures
  • Readmissions
  • Reputation and expertise
  • Seeing patient on the day of admission
  • Quality of transfers of care

Education:
  • CME courses taught
  • Curriculum development
  • Student/housestaff feedback
  • Mentoring
  • Quality of teaching rounds

Academics and Research:
  • Abstracts accepted
  • Academic reputation
  • Grant funding
  • Mentorship
  • Papers published
  • Participation in national organizations

Faculty Development:
  • Attendance and participation
  • Being an agent of change
  • Division citizenship
  • Job satisfaction
  • Mentorship
  • Committees and task forces

DISCUSSION

Like many hospitalist groups, our division has experienced tremendous growth, both in our numbers and the breadth of roles that we fill. With this growth has come increasing expectations in multiple domains, competing priorities, and limited resources. We successfully developed a BSC as a tool to help our division reach its vision: balancing high quality clinical care, education, academics, and faculty development while maintaining a strong sense of community. We have found that the BSC has helped us meet several key goals.

The first goal was to allow for a broad view of our performance. This is the BSC's most basic function, and we saw immediate and tangible benefits. The scorecard provided a broad snapshot of our performance in a single place. For example, in the clinical domain, we saw that our direct cost per case was increasing despite our adjusted average length of stay remaining stable from FY2010‐FY2011. In academics and research, we saw that the number of abstracts accepted at national meetings increased by almost 30% in FY2011 (Figure 1).

The second goal was to create transparency and accountability. By measuring performance and displaying it on the division Web site, the BSC has promoted transparency. If performance does not meet our targets, the division as a whole becomes accountable. Leadership must understand why performance fell short and initiate changes to improve it. For instance, the rising direct cost per case has spurred the development of a high‐value care committee tasked with finding ways of reducing cost while providing high‐quality care.[12]

The third goal was to communicate goals and engage our faculty. As our division has grown, ensuring a shared vision among our entire faculty has become an increasing challenge. The BSC functions as a communication platform between leadership and faculty and has yielded multiple benefits. Because the metrics were born out of our mission and vision, the BSC has become a tangible representation of our core values. Moreover, individual faculty can see that they are part of a larger, high‐performing organization and realize that they can affect the group's performance through their individual effort. For example, this has helped promote receptivity to carefully disseminated individual performance measures for billing and documentation and for patient satisfaction, in conjunction with faculty development in these areas.

The fourth goal was to ensure that we use data to guide strategic decisions. We felt that strategic decisions needed to be based on objective, rather than perceived or anecdotal, information. This meant translating our vision into measurable objectives that would drive performance improvement. For example, before the BSC, we were committed to the dissemination of our research and innovations, yet we quickly realized that we did not have a system to collect even basic data on academic performance, a deficit we filled by leveraging information gathered from online databases and faculty curricula vitae. These data allowed us, for the first time, to reflect objectively on this strategic goal and gave us an ongoing mechanism to monitor academic productivity.

Lessons Learned/Keys to Success

With our initial experience, we have gained insight that may be helpful to other AHGs considering implementing a BSC. First, and most importantly, AHGs should take the necessary time to build consensus and buy‐in. Particularly in areas where data are analyzed for the first time, faculty are often wary about the validity of the data or the purpose and utility of performance measurement. Faculty may be concerned about how collection of performance data could affect promotion or create a hostile and competitive work environment.

This concern grows when one moves from division‐wide to individual data. It is inevitable that the collection and dissemination of performance data will create some level of discomfort among faculty members, which can be a force for improvement or for angst. These issues should be anticipated, discussed, and actively managed. It is critical to be transparent with how data will be used. We have made clear that the transition from group to individual performance data, and from simple transparency to incentives, will be done thoughtfully and with tremendous input from our faculty. This tension can also be mitigated by choosing metrics that are internally driven, rather than determined by external groups (ie, following the principle that the measures should be important to the division and individual faculty members).

Next, developing a mature BSC takes time. Much of our first year was spent developing systems for measurement, collecting data, and determining appropriate comparators and targets. The data in the first BSC functioned mainly as a baseline marker of performance. Some metrics, particularly in education and academics, had no national or local benchmarks; in these cases we identified comparable groups (such as other medical teaching services or other well‐established AHGs) or simply used our prior year's performance as a benchmark. Also, some of our metrics did not initially have performance targets, most often because we were examining the data for the first time and an appropriate target would not be clear until more data became available.

Moving into our third year, we are seeing a natural evolution in the BSC's use. Some metrics that were initially chosen have been replaced or modified to reflect changing goals and priorities. Functional directors participate in choosing and developing performance metrics in their area. Previously, there was no formal structure for these groups to develop and measure strategic objectives and be accountable for performance improvement. They are now expected to define goals with measurable outcomes, to report progress to division leadership, and to develop their own scorecard to track performance. Each group chooses 2 to 4 metrics within their domain that are the most important for the division to improve on, which are then included in the division BSC.

We have also made efforts to build synergy between our BSC and performance goals set by external groups. Although continuing to favor metrics that are internally driven and meaningful to our faculty, we recognize that our goals must also reflect the needs and interests of broader stakeholders. For example, hand hygiene rates and patient satisfaction scores are UCSF medical center and divisional priorities (the former includes them in a financial incentive system for managers, staff, and many physicians) and are incorporated into the BSC as division‐wide incentive metrics.

Limitations

Our project has several limitations. It was conducted at a single institution, and our metrics may not be generalizable to other groups. However, the main goal of this article was not to focus on specific metrics but on the process that we undertook to choose and develop them; other institutions will likely identify different metrics based on their own strategic objectives. We are also early in our experience with the BSC, and it is not yet clear what effect it will have on the outcomes we hope to improve. However, Meliones and colleagues reported that implementing a BSC at a large academic health center, in parallel with other performance improvement initiatives, resulted in substantial improvement in their chosen performance metrics.[13]

Despite the several years of development, we still view this as an early version of a BSC. To fully realize its benefits, an organization must choose metrics that will not simply measure performance but drive it. Our current BSC relies primarily on lagging measures, which show what our performance has been, and includes few leading metrics, which can predict trends in performance. As explained by Kaplan and Norton, this type of BSC risks skewing toward controlling rather than driving performance.[14] A mature BSC will include a mix of leading and lagging indicators, the combination illustrating a logical progression from measurement to performance. For instance, we measure total grant funding per year, which is a lagging indicator. However, to be most effective we could measure the percent of faculty who have attended grant‐writing workshops, the number of new grant sources identified, or the number of grant proposals submitted each quarter. These leading indicators would allow us to see performance trends that could be improved before the final outcome, total grant funding, is realized.
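
To make the distinction concrete, the sketch below pairs quarterly leading indicators with the lagging outcome they are meant to anticipate, using the grant-funding example from this paragraph; all names and values are hypothetical.

    # Illustrative pairing of leading indicators, reviewable each quarter, with
    # the lagging outcome (total grant funding) known only at year end.
    # All names and values are hypothetical.
    quarterly_leading = {
        "Q1": {"grant_workshops_attended": 4, "proposals_submitted": 2},
        "Q2": {"grant_workshops_attended": 6, "proposals_submitted": 5},
        "Q3": {"grant_workshops_attended": 5, "proposals_submitted": 7},
    }
    year_end_grant_funding = 4_800_000   # lagging indicator

    for quarter, indicators in quarterly_leading.items():
        print(quarter, indicators)       # trends here can prompt action before year end
    print(f"Year-end grant funding (lagging): ${year_end_grant_funding:,}")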

Finally, the issues surrounding the acceptability of this overall strategy will likely hinge on how we implement the more complex steps that relate to transparency, individual attribution, and perhaps ultimately incentives. Success in this area depends as much on culture as on strategy.

Next Steps

The next major step in the evolution of the BSC, and part of a broader faculty development program, will be the development of individual BSCs. They will be created using a similar methodology and allow faculty to reflect on their performance compared to peers and recognized benchmarks. Ideally, this will allow hospitalists in our group to establish personal strategic plans and monitor their performance over time. Individualizing these BSCs will be critical; although a research‐oriented faculty member might be striving for more than 5 publications and a large grant in a year, a clinician‐educator may seek outstanding teaching reviews and completion of a key quality improvement project. Both efforts need to be highly valued, and the divisional BSC should roll up these varied individual goals into a balanced whole.

In conclusion, we successfully developed and implemented a BSC to aid in strategic planning. The BSC ensures that we make strategic decisions using data, identify internally driven objectives, develop systems of performance measurement, and increase transparency and accountability. Our hope is that this description of the development of our BSC will be useful to other groups considering a similar endeavor.

Acknowledgments

The authors thank Noori Dhillon, Sadaf Akbaryar, Katie Quinn, Gerri Berg, and Maria Novelero for data collection and analysis. The authors also thank the faculty and staff who participated in the development process of the BSC.

Disclosure

Nothing to report.

References
  1. Wachter RM. The hospitalist field turns 15: new opportunities and challenges. J Hosp Med. 2011;6(4):E1-E4.
  2. Wachter RM, Goldman L. The emerging role of “hospitalists” in the American health care system. N Engl J Med. 1996;335(7):514-517.
  3. Kaplan RS, Norton DP. The balanced scorecard—measures that drive performance. Harv Bus Rev. 1992;70(1):71-79.
  4. Kaplan RS, Norton DP. Using the balanced scorecard as a strategic management system. Harv Bus Rev. 1996;74(1):75-85.
  5. Kaplan RS, Norton DP. Having trouble with your strategy? Then map it. Harv Bus Rev. 2000;78:167-176, 202.
  6. Kaplan RS, Norton DP. Putting the balanced scorecard to work. Harv Bus Rev. 1993;71:134-147.
  7. Stanley R, Lillis KA, Zuspan SJ, et al. Development and implementation of a performance measure tool in an academic pediatric research network. Contemp Clin Trials. 2010;31(5):429-437.
  8. Gurd B, Gao T. Lives in the balance: an analysis of the balanced scorecard (BSC) in healthcare organizations. Int J Prod Perform Manag. 2007;57(1):6-21.
  9. Rimar S, Garstka SJ. The “Balanced Scorecard”: development and implementation in an academic clinical department. Acad Med. 1999;74(2):114-122.
  10. Zbinden AM. Introducing a balanced scorecard management system in a university anesthesiology department. Anesth Analg. 2002;95(6):1731-1738.
  11. Zelman WN, Blazer D, Gower JM, Bumgarner PO, Cancilla LM. Issues for academic health centers to consider before implementing a balanced‐scorecard effort. Acad Med. 1999;74(12):1269-1277.
  12. Rosenbaum L, Lamas D. Cents and sensitivity—teaching physicians to think about costs. N Engl J Med. 2012;367(2):99-101.
  13. Meliones JN, Alton M, Mericle J, et al. 10‐year experience integrating strategic performance improvement initiatives: can the balanced scorecard, Six Sigma, and team training all thrive in a single hospital? In: Henriksen K, Battles JB, Keyes MA, Grady ML, eds. Advances in Patient Safety: New Directions and Alternative Approaches. Vol 3. Performance and Tools. Rockville, MD: Agency for Healthcare Research and Quality; 2008. Available at: http://www.ncbi.nlm.nih.gov/books/NBK43660. Accessed June 15, 2011.
  14. Kaplan RS, Norton DP. Linking the balanced scorecard to strategy. Calif Manage Rev. 1996;39(1):53-79.

Lessons Learned/Keys to Success

With our initial experience, we have gained insight that may be helpful to other AHGs considering implementing a BSC. First, and most importantly, AHGs should take the necessary time to build consensus and buy‐in. Particularly in areas where data are analyzed for the first time, faculty are often wary about the validity of the data or the purpose and utility of performance measurement. Faculty may be concerned about how collection of performance data could affect promotion or create a hostile and competitive work environment.

This concern grows when one moves from division‐wide to individual data. It is inevitable that the collection and dissemination of performance data will create some level of discomfort among faculty members, which can be a force for improvement or for angst. These issues should be anticipated, discussed, and actively managed. It is critical to be transparent with how data will be used. We have made clear that the transition from group to individual performance data, and from simple transparency to incentives, will be done thoughtfully and with tremendous input from our faculty. This tension can also be mitigated by choosing metrics that are internally driven, rather than determined by external groups (ie, following the principle that the measures should be important to the division and individual faculty members).

Next, the process of developing a mature BSC takes time. Much of our first year was spent developing systems for measurement, collecting data, and determining appropriate comparators and targets. The data in the first BSC functioned mainly as a baseline marker of performance. Some metrics, particularly in education and academics, had no national or local benchmarks. In these cases we identified comparable groups (such as other medical teaching services or other well‐established AHGs) or merely used our prior year's performance as a benchmark. Also, some of our metrics did not initially have performance targets. In most instances, this was because this was the first time that we looked at these data, and it was unclear what an appropriate target would be until more data became available.

Moving into our third year, we are seeing a natural evolution in the BSC's use. Some metrics that were initially chosen have been replaced or modified to reflect changing goals and priorities. Functional directors participate in choosing and developing performance metrics in their area. Previously, there was no formal structure for these groups to develop and measure strategic objectives and be accountable for performance improvement. They are now expected to define goals with measurable outcomes, to report progress to division leadership, and to develop their own scorecard to track performance. Each group chooses 2 to 4 metrics within their domain that are the most important for the division to improve on, which are then included in the division BSC.

We have also made efforts to build synergy between our BSC and performance goals set by external groups. Although continuing to favor metrics that are internally driven and meaningful to our faculty, we recognize that our goals must also reflect the needs and interests of broader stakeholders. For example, hand hygiene rates and patient satisfaction scores are UCSF medical center and divisional priorities (the former includes them in a financial incentive system for managers, staff, and many physicians) and are incorporated into the BSC as division‐wide incentive metrics.

Limitations

Our project has several limitations. It was conducted at a single institution, and our metrics may not be generalizable to other groups. However, the main goal of this article was not to focus on specific metrics but the process that we undertook to choose and develop them. Other institutions will likely identify different metrics based on their specific strategic objectives. We are also early in our experience with the BSC, and it is still not clear what effect it will have on the desired outcomes for our objectives. However, Henriksen recently reported that implementing a BSC at a large academic health center, in parallel with other performance improvement initiatives, resulted in substantial improvement in their chosen performance metrics.[13]

Despite the several years of development, we still view this as an early version of a BSC. To fully realize its benefits, an organization must choose metrics that will not simply measure performance but drive it. Our current BSC relies primarily on lagging measures, which show what our performance has been, and includes few leading metrics, which can predict trends in performance. As explained by Kaplan and Norton, this type of BSC risks skewing toward controlling rather than driving performance.[14] A mature BSC will include a mix of leading and lagging indicators, the combination illustrating a logical progression from measurement to performance. For instance, we measure total grant funding per year, which is a lagging indicator. However, to be most effective we could measure the percent of faculty who have attended grant‐writing workshops, the number of new grant sources identified, or the number of grant proposals submitted each quarter. These leading indicators would allow us to see performance trends that could be improved before the final outcome, total grant funding, is realized.

Finally, the issues surrounding the acceptability of this overall strategy will likely hinge on how we implement the more complex steps that relate to transparency, individual attribution, and perhaps ultimately incentives. Success in this area depends as much on culture as on strategy.

Next Steps

The next major step in the evolution of the BSC, and part of a broader faculty development program, will be the development of individual BSCs. They will be created using a similar methodology and allow faculty to reflect on their performance compared to peers and recognized benchmarks. Ideally, this will allow hospitalists in our group to establish personal strategic plans and monitor their performance over time. Individualizing these BSCs will be critical; although a research‐oriented faculty member might be striving for more than 5 publications and a large grant in a year, a clinician‐educator may seek outstanding teaching reviews and completion of a key quality improvement project. Both efforts need to be highly valued, and the divisional BSC should roll up these varied individual goals into a balanced whole.

In conclusion, we successfully developed and implemented a BSC to aid in strategic planning. The BSC ensures that we make strategic decisions using data, identify internally driven objectives, develop systems of performance measurement, and increase transparency and accountability. Our hope is that this description of the development of our BSC will be useful to other groups considering a similar endeavor.

Acknowledgments

The authors thank Noori Dhillon, Sadaf Akbaryar, Katie Quinn, Gerri Berg, and Maria Novelero for data collection and analysis. The authors also thank the faculty and staff who participated in the development process of the BSC.

Disclosure

Nothing to report.

The field of hospital medicine, now the fastest growing specialty in medical history,[1] was born out of pressure to improve the efficiency and quality of clinical care in US hospitals.[2] Delivering safe and high‐value clinical care is a central goal of the field and has been an essential component of its growth and success.

The clinical demands on academic hospitalists have grown recently, fueled by the need to staff services previously covered by housestaff, whose hours are now restricted. Despite these new demands, expectations have grown in other arenas as well. Academic hospitalist groups (AHGs) are often expected to make significant contributions in quality improvement, patient safety, education, research, and administration. With broad expectations beyond clinical care, AHGs face unique challenges. Groups that focus mainly on providing coverage and improving clinical performance may find that they are unable to fully contribute in these other domains. To be successful, AHGs must develop strategies that balance their energies, resources, and performance.

The balanced scorecard (BSC) was introduced by Kaplan and Norton in 1992 to allow corporations to view their performance broadly, rather than narrowly focusing on financial measures. The BSC requires organizations to develop a balanced portfolio of performance metrics across 4 key perspectives: financial, customers, internal processes, and learning and growth. Metrics within these perspectives should help answer fundamental questions about the organization (Table 1).[3] Over time, the BSC evolved from a performance measurement tool to a strategic management system.[4] Successful organizations translate their mission and vision to specific strategic objectives in each of the 4 perspectives, delineate how these objectives will help the organization reach its vision with a strategy map,[5] and then utilize the BSC to track and monitor performance to ensure that the vision is achieved.[6]

BSC Perspectives and the Questions That They Answer About the Organization: Traditional and Revised for AHCs
  • NOTE: Adapted with permission from Zelman et al. Academic Medicine. 1999;74. Wolters Kluwer Health.[11] Abbreviations: AHCs, academic health centers; BSC, balanced scorecard.

BSC Perspective | Traditional Questions[3] | Questions Revised for AHCs
Financial | How do we look to our shareholders? | What financial condition must we be in to allow us to accomplish our mission?
Customers | How do customers see us? | How do we ensure that our services and products add the level of value desired by our stakeholders?
Internal processes | What must we excel at? | How do we produce our products and services to add maximum value for our customers and stakeholders?
Learning and growth | How can we continue to improve and create value? | How do we ensure that we change and improve in order to achieve our vision?

Although originally conceived for businesses, the BSC has found its way into the healthcare industry, with reports of successful implementation in organizations ranging from individual departments to research collaboratives[7] to national healthcare systems.[8] However, there are few reports of BSC implementation in academic health centers.[9, 10] Because most academic centers are not‐for‐profit, Zelman suggests that the 4 BSC perspectives be modified to better fit their unique characteristics (Table 1).[11] To the best of our knowledge, there is no literature describing the development of a BSC in an academic hospitalist group. In this article, we describe the development of, and early experiences with, an academic hospital medicine BSC developed as part of a strategic planning initiative.

METHODS

The University of California, San Francisco (UCSF) Division of Hospital Medicine (DHM) was established in 2005. Currently, there are more than 50 faculty members, a number that has doubled in the last 4 years. In addition to staffing several housestaff and nonhousestaff clinical services, faculty are involved in a wide variety of nonclinical endeavors at local and national levels. They participate in and lead initiatives in education, faculty development, patient safety, care efficiency, quality improvement, information technology, and global health. There is an active research enterprise that generates nearly $5 million in grant funding annually.

Needs Assessment

During a division retreat in 2009, faculty identified several areas in need of improvement, including: clinical care processes, educational promotion, faculty development, and work‐life balance. Based on these needs, divisional mission and vision statements were created (Table 2).

UCSF DHM Mission and Vision Statements
  • NOTE: Abbreviations: DHM, Division of Hospital Medicine; UCSF, University of California, San Francisco.

Our mission: to provide the highest quality clinical care, education, research, and innovation in academic hospital medicine.
Our vision: to be the best division of hospital medicine by promoting excellence, integrity, innovation, and professional satisfaction among our faculty, trainees, and staff.

Division leadership made it a priority to create a strategic plan to address these wide‐ranging issues. To accomplish this, we recognized the need to develop a formal way of translating our vision into specific and measurable objectives, establish systems of performance measurement, improve accountability, and effectively communicate these strategic goals to the group. Based on these needs, we set out to develop a divisional BSC.

Development

At the time of BSC development, the DHM was organized into 4 functional areas: quality and safety, education, faculty development, and academics and research. A task force was formed, composed of 8 senior faculty members representing these key areas. The mission and vision statements were used as the foundation for the development of division goals and objectives. The group was careful to choose objectives within each of the 4 BSC perspectives for academic centers, as defined by Zelman (Table 1). The task force then brainstormed specific metrics that would track performance within the 4 functional areas. The only stipulation during this process was that the metrics had to meet the following criteria:

  1. Important to the division and to the individual faculty members
  2. Measurable through either current or developed processes
  3. Supported by data that are valid and whose validity is trusted by the faculty members
  4. Amenable to improvement by faculty (ie, through their individual action they could impact the metric)

From the subsequent list of metrics, we used a modified Delphi method to rank‐order them by importance to arrive at our final set of metrics. Kaplan and Norton noted that focusing on a manageable number of metrics (ie, a handful in each BSC perspective) is important for an achievable strategic vision.[6] With the metrics chosen, we identified data sources or developed new systems to collect data for which there was no current source. We assigned individuals responsible for collecting and analyzing the data, identified local or national benchmarks, if available, and established performance targets for the coming year, when possible.
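
To make the selection step concrete, the sketch below shows one way a modified Delphi round can be tallied: each task force member rank-orders the candidate metrics, mean ranks are computed, and a manageable number of the best-ranked metrics is retained. It is a hypothetical Python illustration only; the metric names, member votes, and cutoff are invented and do not reflect the division's actual process or tooling.

    # Hypothetical illustration of rank-ordering candidate metrics after a
    # modified Delphi round; metric names and rankings are invented examples.
    from collections import defaultdict
    from statistics import mean

    # Each task force member submits a rank (1 = most important) per metric.
    rankings = {
        "member_1": {"Patient satisfaction": 1, "Readmissions": 2, "Length of stay": 3},
        "member_2": {"Readmissions": 1, "Patient satisfaction": 2, "Length of stay": 3},
        "member_3": {"Patient satisfaction": 1, "Length of stay": 2, "Readmissions": 3},
    }

    scores = defaultdict(list)
    for member_ranks in rankings.values():
        for metric, rank in member_ranks.items():
            scores[metric].append(rank)

    # Lower mean rank = higher priority; keep a manageable number of metrics
    # (in practice this could be done within each BSC perspective).
    TOP_N = 2
    ordered = sorted(scores, key=lambda m: mean(scores[m]))
    selected = ordered[:TOP_N]
    print(selected)  # e.g. ['Patient satisfaction', 'Readmissions']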

The BSC is updated quarterly, and results are presented to the division during a noon meeting and posted on the division website. Metrics are re‐evaluated on a yearly basis. They are continued, modified, or discarded depending on performance and/or changes in strategic priorities.

The initial BSC focused on division‐wide metrics and performance. Early efforts to develop the scorecard were framed as experimental, with no clear decision taken regarding how metrics might ultimately be used to improve performance (ie, how public to make both individual and group results, whether to tie bonus payments to performance).

RESULTS

There were 41 initial metrics considered by the division BSC task force (Table 3). Of these, 16 were chosen for the initial BSC through the modified Delphi method. Over the past 2 years, these initial metrics have been modified to reflect current strategic goals and objectives. Figure 1 illustrates the BSC for fiscal year (FY) 2012. An online version of this, complete with graphical representations of the data and metric definitions, can be found at http://hospitalmedicine.ucsf.edu/bsc/fy2012.html. Our strategy map (Figure 2) demonstrates how these metrics are interconnected across the 4 BSC perspectives and how they fit into our overall strategic plan.
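
As a rough illustration of how entries on a scorecard like the one in Figure 1 might be represented and flagged against their targets (analogous to the green/pink shading), the hypothetical Python sketch below defines a minimal metric record and a status check. The perspectives, metric names, values, and targets are invented examples, not the division's actual data.

    # Hypothetical sketch of a scorecard entry flagged against its target;
    # metric names, values, and targets are invented examples.
    from dataclasses import dataclass

    @dataclass
    class Metric:
        perspective: str      # one of the 4 BSC perspectives
        name: str
        value: float
        target: float
        higher_is_better: bool = True

        def status(self) -> str:
            on_track = (self.value >= self.target) if self.higher_is_better \
                       else (self.value <= self.target)
            return "at/above target" if on_track else "below target"

    scorecard = [
        Metric("Customers", "Patient satisfaction (top box %)", 78.0, 80.0),
        Metric("Financial", "Direct cost per case ($)", 9100.0, 9500.0, higher_is_better=False),
    ]
    for m in scorecard:
        print(f"{m.perspective}: {m.name} -> {m.status()}")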

Figure 1
Division of Hospital Medicine balanced scorecard FY 2012. Green shading signifies at or above target; pink shading signifies below target. Abbreviations: CY, calendar year; FY, fiscal year; NA, not available; Q, quarter.
Figure 2
Division of Hospital Medicine strategy map. Arrows denote relationships between objectives spanning the 4 balanced scorecard perspectives. Abbreviations: HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; PCP, primary care physician.
Brainstormed Competencies Across the Four DHM Functional Areas
  • NOTE: Abbreviations: CME, continuing medical education; DHM, Division of Hospital Medicine; ICU, intensive care unit.

Quality, Safety, and Operations: Appropriate level of care; Billing and documentation; Clinical efficiency; Clinical professionalism; Communication; Core measures performance; Practice evidence‐based medicine; Fund of knowledge; Guideline adherence; Unplanned transfers to ICU; Implementation and initiation of projects; Length of stay; Medical errors; Mortality; Multidisciplinary approach to patient care; Multisource feedback evaluations; Never events; Patient‐centered care; Patient satisfaction; Practice‐based learning; Procedures; Readmissions; Reputation and expertise; Seeing patient on the day of admission; Quality of transfers of care

Education: CME courses taught; Curriculum development; Student/housestaff feedback; Mentoring; Quality of teaching rounds

Academics and Research: Abstracts accepted; Academic reputation; Grant funding; Mentorship; Papers published; Participation in national organizations

Faculty Development: Attendance and participation; Being an agent of change; Division citizenship; Job satisfaction; Mentorship; Committees and task forces

DISCUSSION

Like many hospitalist groups, our division has experienced tremendous growth, both in our numbers and the breadth of roles that we fill. With this growth have come increasing expectations in multiple domains, competing priorities, and limited resources. We successfully developed a BSC as a tool to help our division reach its vision: balancing high‐quality clinical care, education, academics, and faculty development while maintaining a strong sense of community. We have found that the BSC has helped us meet several key goals.

The first goal was to allow for a broad view of our performance. This is the BSC's most basic function, and we saw immediate and tangible benefits. The scorecard provided a broad snapshot of our performance in a single place. For example, in the clinical domain, we saw that our direct cost per case was increasing despite our adjusted average length of stay remaining stable from FY2010‐FY2011. In academics and research, we saw that the number of abstracts accepted at national meetings increased by almost 30% in FY2011 (Figure 1).

The second goal was to create transparency and accountability. By measuring performance and displaying it on the division Web site, the BSC has promoted transparency. If performance does not meet our targets, the division as a whole becomes accountable. Leadership must understand why performance fell short and initiate changes to improve it. For instance, the rising direct cost per case has spurred the development of a high‐value care committee tasked with finding ways of reducing cost while providing high‐quality care.[12]

The third goal was to communicate goals and engage our faculty. As our division has grown, ensuring a shared vision among our entire faculty has become an increasing challenge. The BSC functions as a communication platform between leadership and faculty and has yielded multiple benefits. As the metrics were born out of our mission and vision, the BSC has become a tangible representation of our core values. Moreover, individual faculty can see that they are part of a greater, high‐performing organization and realize they can impact the group's performance through their individual effort. For example, this has helped promote receptivity to carefully disseminated individual performance measures for billing and documentation, and patient satisfaction, in conjunction with faculty development in these areas.

The fourth goal was to ensure that we use data to guide strategic decisions. We felt that strategic decisions needed to be based on objective, rather than perceived or anecdotal, information. This meant translating our vision into measurable objectives that would drive performance improvement. For example, before the BSC, we were committed to the dissemination of our research and innovations. Yet, we quickly realized that we did not have a system to collect even basic data on academic performance, a deficit we filled by leveraging information gathered from online databases and faculty curricula vitae. These data allowed us, for the first time, to objectively reflect on this as a strategic goal and to have an ongoing mechanism to monitor academic productivity.
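
The sketch below is a hypothetical illustration of that kind of tracking system, assuming publication records pulled from online databases or faculty CVs have been exported to a CSV file with 'faculty', 'type', and 'fiscal_year' columns; the file layout and column names are assumptions for the sketch, not a description of the division's actual system.

    # Hypothetical illustration: tally publications and abstracts per fiscal year
    # from a CSV export (file layout and column names are assumptions).
    import csv
    from collections import Counter

    def tally_output(csv_path: str) -> Counter:
        counts = Counter()
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                # e.g. type is "paper" or "abstract"; fiscal_year like "FY2011"
                counts[(row["fiscal_year"], row["type"])] += 1
        return counts

    # Example usage (the file name is hypothetical):
    # for (fy, kind), n in sorted(tally_output("academic_output.csv").items()):
    #     print(fy, kind, n)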

Lessons Learned/Keys to Success

With our initial experience, we have gained insight that may be helpful to other AHGs considering implementing a BSC. First, and most importantly, AHGs should take the necessary time to build consensus and buy‐in. Particularly in areas where data are analyzed for the first time, faculty are often wary about the validity of the data or the purpose and utility of performance measurement. Faculty may be concerned about how collection of performance data could affect promotion or create a hostile and competitive work environment.

This concern grows when one moves from division‐wide to individual data. It is inevitable that the collection and dissemination of performance data will create some level of discomfort among faculty members, which can be a force for improvement or for angst. These issues should be anticipated, discussed, and actively managed. It is critical to be transparent with how data will be used. We have made clear that the transition from group to individual performance data, and from simple transparency to incentives, will be done thoughtfully and with tremendous input from our faculty. This tension can also be mitigated by choosing metrics that are internally driven, rather than determined by external groups (ie, following the principle that the measures should be important to the division and individual faculty members).

Next, the process of developing a mature BSC takes time. Much of our first year was spent developing systems for measurement, collecting data, and determining appropriate comparators and targets. The data in the first BSC functioned mainly as a baseline marker of performance. Some metrics, particularly in education and academics, had no national or local benchmarks. In these cases we identified comparable groups (such as other medical teaching services or other well‐established AHGs) or merely used our prior year's performance as a benchmark. Also, some of our metrics did not initially have performance targets. In most instances, this was because this was the first time that we looked at these data, and it was unclear what an appropriate target would be until more data became available.

Moving into our third year, we are seeing a natural evolution in the BSC's use. Some metrics that were initially chosen have been replaced or modified to reflect changing goals and priorities. Functional directors participate in choosing and developing performance metrics in their area. Previously, there was no formal structure for these groups to develop and measure strategic objectives and be accountable for performance improvement. They are now expected to define goals with measurable outcomes, to report progress to division leadership, and to develop their own scorecard to track performance. Each group chooses 2 to 4 metrics within their domain that are the most important for the division to improve on, which are then included in the division BSC.

We have also made efforts to build synergy between our BSC and performance goals set by external groups. Although continuing to favor metrics that are internally driven and meaningful to our faculty, we recognize that our goals must also reflect the needs and interests of broader stakeholders. For example, hand hygiene rates and patient satisfaction scores are UCSF medical center and divisional priorities (the former includes them in a financial incentive system for managers, staff, and many physicians) and are incorporated into the BSC as division‐wide incentive metrics.

Limitations

Our project has several limitations. It was conducted at a single institution, and our metrics may not be generalizable to other groups. However, the main goal of this article was not to focus on specific metrics but on the process that we undertook to choose and develop them. Other institutions will likely identify different metrics based on their specific strategic objectives. We are also early in our experience with the BSC, and it is still not clear what effect it will have on the desired outcomes for our objectives. However, Meliones and colleagues recently reported that implementing a BSC at a large academic health center, in parallel with other performance improvement initiatives, resulted in substantial improvement in their chosen performance metrics.[13]

Despite the several years of development, we still view this as an early version of a BSC. To fully realize its benefits, an organization must choose metrics that will not simply measure performance but drive it. Our current BSC relies primarily on lagging measures, which show what our performance has been, and includes few leading metrics, which can predict trends in performance. As explained by Kaplan and Norton, this type of BSC risks skewing toward controlling rather than driving performance.[14] A mature BSC will include a mix of leading and lagging indicators, the combination illustrating a logical progression from measurement to performance. For instance, we measure total grant funding per year, which is a lagging indicator. However, to be most effective we could measure the percent of faculty who have attended grant‐writing workshops, the number of new grant sources identified, or the number of grant proposals submitted each quarter. These leading indicators would allow us to see performance trends that could be improved before the final outcome, total grant funding, is realized.
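
A minimal sketch of the leading-versus-lagging idea, with invented numbers: quarterly leading indicators (grant proposals submitted, grant-writing workshop attendance) are tracked alongside the lagging annual outcome (total grant funding), so a declining trend in the leading measures can prompt action before the year-end figure is realized. This is an illustrative Python example, not part of the division's BSC.

    # Hypothetical illustration of pairing leading indicators with a lagging one;
    # all numbers are invented for the sketch.
    quarterly_leading = {
        "proposals_submitted": [6, 5, 3, 2],    # Q1-Q4
        "workshop_attendance": [12, 10, 7, 5],  # faculty attending grant workshops
    }
    lagging_total_grant_funding = 4_800_000     # realized only at fiscal year end

    def trending_down(series):
        # Flag a simple quarter-over-quarter decline in a leading indicator.
        return all(later <= earlier for earlier, later in zip(series, series[1:]))

    for name, series in quarterly_leading.items():
        if trending_down(series):
            print(f"Leading indicator '{name}' is declining; intervene before "
                  f"it shows up in total grant funding.")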

Finally, the issues surrounding the acceptability of this overall strategy will likely hinge on how we implement the more complex steps that relate to transparency, individual attribution, and perhaps ultimately incentives. Success in this area depends as much on culture as on strategy.

Next Steps

The next major step in the evolution of the BSC, and part of a broader faculty development program, will be the development of individual BSCs. They will be created using a similar methodology and allow faculty to reflect on their performance compared to peers and recognized benchmarks. Ideally, this will allow hospitalists in our group to establish personal strategic plans and monitor their performance over time. Individualizing these BSCs will be critical; although a research‐oriented faculty member might be striving for more than 5 publications and a large grant in a year, a clinician‐educator may seek outstanding teaching reviews and completion of a key quality improvement project. Both efforts need to be highly valued, and the divisional BSC should roll up these varied individual goals into a balanced whole.

In conclusion, we successfully developed and implemented a BSC to aid in strategic planning. The BSC ensures that we make strategic decisions using data, identify internally driven objectives, develop systems of performance measurement, and increase transparency and accountability. Our hope is that this description of the development of our BSC will be useful to other groups considering a similar endeavor.

Acknowledgments

The authors thank Noori Dhillon, Sadaf Akbaryar, Katie Quinn, Gerri Berg, and Maria Novelero for data collection and analysis. The authors also thank the faculty and staff who participated in the development process of the BSC.

Disclosure

Nothing to report.

References
  1. Wachter RM. The hospitalist field turns 15: new opportunities and challenges. J Hosp Med. 2011;6(4):E1-E4.
  2. Wachter RM, Goldman L. The emerging role of “hospitalists” in the American health care system. N Engl J Med. 1996;335(7):514-517.
  3. Kaplan RS, Norton DP. The balanced scorecard—measures that drive performance. Harv Bus Rev. 1992;70(1):71-79.
  4. Kaplan RS, Norton DP. Using the balanced scorecard as a strategic management system. Harv Bus Rev. 1996;74(1):75-85.
  5. Kaplan RS, Norton DP. Having trouble with your strategy? Then map it. Harv Bus Rev. 2000;78:167-176, 202.
  6. Kaplan RS, Norton DP. Putting the balanced scorecard to work. Harv Bus Rev. 1993;71:134-147.
  7. Stanley R, Lillis KA, Zuspan SJ, et al. Development and implementation of a performance measure tool in an academic pediatric research network. Contemp Clin Trials. 2010;31(5):429-437.
  8. Gurd B, Gao T. Lives in the balance: an analysis of the balanced scorecard (BSC) in healthcare organizations. Int J Prod Perform Manag. 2007;57(1):6-21.
  9. Rimar S, Garstka SJ. The “Balanced Scorecard”: development and implementation in an academic clinical department. Acad Med. 1999;74(2):114-122.
  10. Zbinden AM. Introducing a balanced scorecard management system in a university anesthesiology department. Anesth Analg. 2002;95(6):1731-1738.
  11. Zelman WN, Blazer D, Gower JM, Bumgarner PO, Cancilla LM. Issues for academic health centers to consider before implementing a balanced‐scorecard effort. Acad Med. 1999;74(12):1269-1277.
  12. Rosenbaum L, Lamas D. Cents and sensitivity—teaching physicians to think about costs. N Engl J Med. 2012;367(2):99-101.
  13. Meliones JN, Alton M, Mericle J, et al. 10‐year experience integrating strategic performance improvement initiatives: can the balanced scorecard, Six Sigma, and team training all thrive in a single hospital? In: Henriksen K, Battles JB, Keyes MA, Grady ML, eds. Advances in Patient Safety: New Directions and Alternative Approaches. Vol 3. Performance and Tools. Rockville, MD: Agency for Healthcare Research and Quality; 2008. Available at: http://www.ncbi.nlm.nih.gov/books/NBK43660. Accessed June 15, 2011.
  14. Kaplan RS, Norton DP. Linking the balanced scorecard to strategy. Calif Manage Rev. 1996;39(1):53-79.
Issue
Journal of Hospital Medicine - 8(3)
Page Number
148-153
Display Headline
Development and implementation of a balanced scorecard in an academic hospitalist group
Article Source
Copyright © 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Michael Hwa, MD, University of California, San Francisco, 533 Parnassus Avenue, Box 0131, San Francisco, CA 94143; Telephone: 415‐502‐1413; Fax: 415‐514‐2094; E‐mail: mhwa@medicine.ucsf.edu

Survey of Hospitalist Supervision

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Survey of overnight academic hospitalist supervision of trainees

In 2003, the Accreditation Council for Graduate Medical Education (ACGME) announced the first in a series of guidelines related to the regulation and oversight of residency training.1 The initial iteration specifically focused on the total and consecutive numbers of duty hours worked by trainees. These limitations began a new era of shift work in internal medicine residency training. With decreases in housestaff admitting capacity, clinical work has frequently been offloaded to non‐teaching or attending‐only services, increasing the demand for hospitalists to fill the void in physician‐staffed care in the hospital.2, 3 Since the implementation of the 2003 ACGME guidelines and a growing focus on patient safety, there has been increased study of, and call for, oversight of trainees in medicine; among these was the 2008 Institute of Medicine report,4 calling for 24/7 attending‐level supervision. The updated ACGME requirements,5 effective July 1, 2011, mandate enhanced on‐site supervision of trainee physicians. These new regulations not only define varying levels of supervision for trainees, including direct supervision with the physical presence of a supervisor and the degree of availability of said supervisor, they also describe ensuring the quality of supervision provided.5 While continuous attending‐level supervision is not yet mandated, many residency programs look to their academic hospitalists to fill the supervisory void, particularly at night. However, it remains unclear what specific roles hospitalists play in the nighttime supervision of trainees and what impact this supervision has. To date, no study has examined a broad sample of hospitalist programs in teaching hospitals and the types of resident oversight they provide. We aimed to describe the current role of academic hospitalists in the clinical supervision of housestaff, specifically during the overnight period, and hospitalist perceptions of how the new ACGME requirements would impact trainee-hospitalist interactions.

METHODS

The Housestaff Oversight Subcommittee, a working group of the Society of General Internal Medicine (SGIM) Academic Hospitalist Task Force, surveyed a sample of academic hospitalist program leaders to assess the current status of trainee supervision performed by hospitalists. Programs were considered academic if they were located in the primary hospital of a residency that participates in the National Resident Matching Program for Internal Medicine. To obtain a broad geographic spectrum of academic hospitalist programs, all programs, both university and community‐based, in 4 states and 2 metropolitan regions were sampled: Washington, Oregon, Texas, Maryland, and the Philadelphia and Chicago metropolitan areas. Hospitalist program leaders were identified by members of the Taskforce using individual program websites and by querying departmental leadership at eligible teaching hospitals. Respondents were contacted by e‐mail for participation. None of the authors of the manuscript were participants in the survey.

The survey was developed by consensus of the working group after reviewing the salient literature and incorporated additional questions previously posed to internal medicine program directors.6 The 19‐item SurveyMonkey instrument included questions about hospitalists' role in trainees' education and evaluation. A Likert‐type scale was used to assess perceptions regarding the impact of on‐site hospitalist supervision on trainee autonomy and hospitalist workload (1 = strongly disagree to 5 = strongly agree). Descriptive statistics were calculated and, where appropriate, t tests and Fisher's exact tests were used to identify associations between program characteristics and perceptions. Stata SE (StataCorp, College Station, TX) was used for all statistical analyses.
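
The analysis can be sketched as follows, assuming SciPy is available; the Likert scores and the 2x2 table below are invented stand-ins for the survey data, intended only to show the shape of the comparisons described above.

    # Hypothetical sketch of the survey analysis: a t test comparing Likert
    # agreement scores between program types and a Fisher's exact test on a
    # 2x2 table. All data below are invented examples.
    from scipy import stats

    # 1-5 Likert agreement scores, grouped by presence of a formal nighttime role
    formal_role = [4, 3, 4, 3, 4, 4, 3, 4]
    informal_role = [5, 4, 5, 4, 5, 4, 5, 4]
    t_stat, p_val = stats.ttest_ind(formal_role, informal_role)
    print(f"t = {t_stat:.2f}, P = {p_val:.3f}")

    # 2x2 table: rows = formal vs informal role, columns = agree vs disagree
    table = [[6, 2],
             [3, 5]]
    odds_ratio, p_fisher = stats.fisher_exact(table)
    print(f"Fisher's exact P = {p_fisher:.3f}")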

RESULTS

The survey was sent to 47 individuals identified as likely hospitalist program leaders and completed by 41 individuals (87%). However, 7 respondents turned out not to be program leaders and were therefore excluded, resulting in a 72% (34/47) survey response rate.
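
A quick arithmetic check of the reported figures (a trivial Python sketch):

    # Quick arithmetic check of the reported survey figures.
    sent, completed, excluded = 47, 41, 7
    eligible_responses = completed - excluded      # 34
    print(round(100 * completed / sent))           # 87 (% of surveys completed)
    print(round(100 * eligible_responses / sent))  # 72 (% survey response rate)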

The programs for which we did not obtain responses were similar to respondent programs, and did not include a larger proportion of community‐based programs or overrepresent a specific geographic region. Twenty‐five (73%) of the 34 hospitalist program leaders were male, with an average age of 44.3 years, and an average of 12 years post‐residency training (range, 5-30 years). They reported leading groups with an average of 18 full‐time equivalent (FTE) faculty (range, 3-50 persons).

Relationship of Hospitalist Programs With the Residency Program

The majority (32/34, 94%) of respondents describe their program as having traditional housestaff-hospitalist interactions on an attending‐covered housestaff teaching service. Other hospitalists' clinical roles included: attending on uncovered (non‐housestaff) services (29/34, 85%); nighttime coverage (24/34, 70%); attending on consult services with housestaff (24/34, 70%). All respondents reported that hospitalist faculty are expected to participate in housestaff teaching or to fulfill other educational roles within the residency training program. These educational roles include participating in didactics or educational conferences, and serving as advisors. Additionally, the faculty of 30 (88%) programs have a formal evaluative role over the housestaff they supervise on teaching services (eg, members of formal housestaff evaluation committee). Finally, 28 (82%) programs have faculty who play administrative roles in the residency programs, such as involvement in program leadership or recruitment. Although 63% of the corresponding internal medicine residency programs have a formal housestaff supervision policy, only 43% of program leaders stated that their hospitalists receive formal faculty development on how to provide this supervision to resident trainees. Instead, the majority of hospitalist programs were described as having teaching expectations in the absence of a formal policy.

Twenty‐one programs (21/34, 61%) described having an attending hospitalist physician on‐site overnight to provide ongoing patient care or admit new patients. Of those with on‐site attending coverage, a minority of programs (8/21, 38%) reported having a formally defined supervisory role for hospitalists over housestaff trainees during the overnight period. In these 8 programs, this defined role included a requirement for housestaff to present newly admitted patients or contact hospitalists with questions regarding patient management. Twenty‐four percent (5/21) of the programs with nighttime coverage stated that the role of the nocturnal attending was only to cover the non‐teaching services, without housestaff interaction or supervision. The remaining programs (8/21, 38%) described only informal interactions between housestaff and hospitalist faculty, without clearly defined expectations for supervision.

Perceptions of New Regulations and Night Work

Hospitalist leaders viewed increased supervision of housestaff both positively and negatively. Leaders were asked their level of agreement with the potential impact of increased hospitalist nighttime supervision. Of respondents, 85% (27/32) agreed that formal overnight supervision by an attending hospitalist would improve patient safety, and 60% (20/33) agreed that formal overnight supervision would improve trainee-hospitalist relationships. In addition, 60% (20/33) of respondents felt that nighttime supervision of housestaff by faculty hospitalists would improve resident education. However, approximately 40% (13/33) expressed concern that increased on‐site hospitalist supervision would hamper resident decision‐making autonomy, and 75% (25/33) agreed that a formal housestaff supervisory role would increase hospitalist workload. The perception of increased workload was influenced by a hospitalist program's current supervisory role. Hospitalist programs providing formal nighttime supervision for housestaff, compared to those with informal or poorly defined faculty roles, were less likely to perceive these new regulations as resulting in an increase in hospitalist workload (3.72 vs 4.42; P = 0.02). In addition, hospitalist programs with a formal nighttime role were more likely to identify lack of specific parameters for attending‐level contact as a barrier to residents contacting their supervisors during the overnight period (2.54 vs 3.54; P = 0.03). No differences in perception of the regulations were noted for those hospitalist programs that had existing faculty development on clinical supervision.

DISCUSSION

This study provides important information about how academic hospitalists currently contribute to the supervision of internal medicine residents. While academic hospitalist groups frequently have faculty providing clinical care on‐site at night, and often hospitalists provide overnight supervision of internal medicine trainees, formal supervision of trainees is not uniform, and few hospitalist groups have a mechanism to provide training or faculty development on how to effectively supervise resident trainees. Hospitalist leaders expressed concerns that creating additional formal overnight supervisory responsibilities may add to an already burdened overnight hospitalist. Formalizing this supervisory role, including explicit role definitions and faculty training for trainee supervision, is necessary.

Though our sample size is small, we captured a diverse geographic range of both university and community‐based academic hospitalist programs by surveying group leaders in several distinct regions. We are unable to comment on differences between responding and non‐responding hospitalist programs, but there does not appear to be a systematic difference between these groups.

Our findings are consistent with work describing a lack of structured conceptual frameworks for effectively supervising trainees,7, 8 and also, at times, nebulous expectations for hospitalist faculty. We found that the existence of a formal supervisory policy within the associated residency program, as well as defined roles for hospitalists, increases the likelihood of positive perceptions of the new ACGME supervisory recommendations. However, the existence of these requirements does not mean that all programs are capable of following them. While additional discussion is required to best delineate a formal overnight hospitalist role in trainee supervision, clearly defining expectations for both faculty and trainees, and their interactions, may alleviate the struggles that exist in programs with ill‐defined roles for hospitalist faculty supervision. While faculty duty hour standards do not exist, the additional duties of nighttime coverage for hospitalists suggest that close attention should be paid to burn‐out.9 Faculty development on nighttime supervision and teaching may help maximize both learning and patient care efficiency, and provide a framework for this often unstructured educational time.

Acknowledgements

The research reported here was supported by the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service (REA 05‐129, CDA 07‐022). The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs.

References
  1. Philibert I, Friedman P, Williams WT. New requirements for resident duty hours. JAMA. 2002;288:1112-1114.
  2. Nuckol T, Bhattacharya J, Wolman DM, Ulmer C, Escarce J. Cost implications of reduced work hours and workloads for resident physicians. N Engl J Med. 2009;360:2202-2215.
  3. Horwitz L. Why have working hour restrictions apparently not improved patient safety? BMJ. 2011;342:d1200.
  4. Ulmer C, Wolman DM, Johns MME, eds. Resident Duty Hours: Enhancing Sleep, Supervision, and Safety. Washington, DC: National Academies Press; 2008.
  5. Nasca TJ, Day SH, Amis ES; for the ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363.
  6. Association of Program Directors in Internal Medicine (APDIM) Survey 2009. Available at: http://www.im.org/toolbox/surveys/SurveyDataandReports/APDIMSurveyData/Documents/2009_APDIM_summary_web.pdf. Accessed July 30, 2012.
  7. Kennedy TJ, Lingard L, Baker GR, Kitchen L, Regehr G. Clinical oversight: conceptualizing the relationship between supervision and safety. J Gen Intern Med. 2007;22(8):1080-1085.
  8. Farnan JM, Johnson JK, Meltzer DO, et al. Strategies for effective on‐call supervision for internal medicine residents: the SUPERB/SAFETY model. J Grad Med Educ. 2010;2(1):46-52.
  9. Glasheen J, Misky G, Reid M, Harrison R, Sharpe B, Auerbach A. Career satisfaction and burn‐out in academic hospital medicine. Arch Intern Med. 2011;171(8):782-785.
Issue
Journal of Hospital Medicine - 7(7)
Page Number
521-523

In 2003, the Accreditation Council for Graduate Medical Education (ACGME) announced the first in a series of guidelines related to the regulation and oversight of residency training.1 The initial iteration specifically focused on the total and consecutive numbers of duty hours worked by trainees. These limitations began a new era of shift work in internal medicine residency training. With decreases in housestaff admitting capacity, clinical work has frequently been offloaded to non‐teaching or attending‐only services, increasing the demand for hospitalists to fill the void in physician‐staffed care in the hospital.2, 3 Since the implementation of the 2003 ACGME guidelines and a growing focus on patient safety, there has been increased study of, and call for, oversight of trainees in medicine; among these was the 2008 Institute of Medicine report,4 calling for 24/7 attending‐level supervision. The updated ACGME requirements,5 effective July 1, 2011, mandate enhanced on‐site supervision of trainee physicians. These new regulations not only define varying levels of supervision for trainees, including direct supervision with the physical presence of a supervisor and the degree of availability of said supervisor, they also describe ensuring the quality of supervision provided.5 While continuous attending‐level supervision is not yet mandated, many residency programs look to their academic hospitalists to fill the supervisory void, particularly at night. However, what specific roles hospitalists play in the nighttime supervision of trainees or the impact of this supervision remains unclear. To date, no study has examined a broad sample of hospitalist programs in teaching hospitals and the types of resident oversight they provide. We aimed to describe the current state of academic hospitalists in the clinical supervision of housestaff, specifically during the overnight period, and hospitalist perceptions of how the new ACGME requirements would impact traineehospitalist interactions.

METHODS

The Housestaff Oversight Subcommittee, a working group of the Society of General Internal Medicine (SGIM) Academic Hospitalist Task Force, surveyed a sample of academic hospitalist program leaders to assess the current status of trainee supervision performed by hospitalists. Programs were considered academic if they were located in the primary hospital of a residency that participates in the National Resident Matching Program for Internal Medicine. To obtain a broad geographic spectrum of academic hospitalist programs, all programs, both university and community‐based, in 4 states and 2 metropolitan regions were sampled: Washington, Oregon, Texas, Maryland, and the Philadelphia and Chicago metropolitan areas. Hospitalist program leaders were identified by members of the Taskforce using individual program websites and by querying departmental leadership at eligible teaching hospitals. Respondents were contacted by e‐mail for participation. None of the authors of the manuscript were participants in the survey.

The survey was developed by consensus of the working group after reviewing the salient literature and included additional questions queried to internal medicine program directors.6 The 19‐item SurveyMonkey instrument included questions about hospitalists' role in trainees' education and evaluation. A Likert‐type scale was used to assess perceptions regarding the impact of on‐site hospitalist supervision on trainee autonomy and hospitalist workload (1 = strongly disagree to 5 = strongly agree). Descriptive statistics were performed and, where appropriate, t test and Fisher's exact test were performed to identify associations between program characteristics and perceptions. Stata SE was used (STATA Corp, College Station, TX) for all statistical analysis.

RESULTS

The survey was sent to 47 individuals identified as likely hospitalist program leaders and completed by 41 individuals (87%). However, 7 respondents turned out not to be program leaders and were therefore excluded, resulting in a 72% (34/47) survey response rate.

The programs for which we did not obtain responses were similar to respondent programs, and did not include a larger proportion of community‐based programs or overrepresent a specific geographic region. Twenty‐five (73%) of the 34 hospitalist program leaders were male, with an average age of 44.3 years, and an average of 12 years post‐residency training (range, 530 years). They reported leading groups with an average of 18 full‐time equivalent (FTE) faculty (range, 350 persons).

Relationship of Hospitalist Programs With the Residency Program

The majority (32/34, 94%) of respondents describe their program as having traditional housestaffhospitalist interactions on an attending‐covered housestaff teaching service. Other hospitalists' clinical roles included: attending on uncovered (non‐housestaff services; 29/34, 85%); nighttime coverage (24/34, 70%); attending on consult services with housestaff (24/34, 70%). All respondents reported that hospitalist faculty are expected to participate in housestaff teaching or to fulfill other educational roles within the residency training program. These educational roles include participating in didactics or educational conferences, and serving as advisors. Additionally, the faculty of 30 (88%) programs have a formal evaluative role over the housestaff they supervise on teaching services (eg, members of formal housestaff evaluation committee). Finally, 28 (82%) programs have faculty who play administrative roles in the residency programs, such as involvement in program leadership or recruitment. Although 63% of the corresponding internal medicine residency programs have a formal housestaff supervision policy, only 43% of program leaders stated that their hospitalists receive formal faculty development on how to provide this supervision to resident trainees. Instead, the majority of hospitalist programs were described as having teaching expectations in the absence of a formal policy.

Twenty‐one programs (21/34, 61%) described having an attending hospitalist physician on‐site overnight to provide ongoing patient care or admit new patients. Of those with on‐site attending coverage, a minority of programs (8/21, 38%) reported having a formal defined supervisory role of housestaff trainees for hospitalists during the overnight period. In these 8 programs, this defined role included a requirement for housestaff to present newly admitted patients or contact hospitalists with questions regarding patient management. Twenty‐four percent (5/21) of the programs with nighttime coverage stated that the role of the nocturnal attending was only to cover the non‐teaching services, without housestaff interaction or supervision. The remainder of programs (8/21, 38%) describe only informal interactions between housestaff and hospitalist faculty, without clearly defined expectations for supervision.

Perceptions of New Regulations and Night Work

Hospitalist leaders viewed increased supervision of housestaff both positively and negatively. Leaders were asked their level of agreement with statements about the potential impact of increased hospitalist nighttime supervision. Of respondents, 85% (27/32) agreed that formal overnight supervision by an attending hospitalist would improve patient safety, 60% (20/33) agreed that it would improve trainee–hospitalist relationships, and 60% (20/33) felt that nighttime supervision of housestaff by faculty hospitalists would improve resident education. However, approximately 40% (13/33) expressed concern that increased on-site hospitalist supervision would hamper resident decision-making autonomy, and 75% (25/33) agreed that a formal housestaff supervisory role would increase hospitalist workload. The perception of increased workload was influenced by a hospitalist program's current supervisory role: programs already providing formal nighttime supervision for housestaff, compared with those with informal or poorly defined faculty roles, were less likely to perceive the new regulations as increasing hospitalist workload (3.72 vs 4.42; P = 0.02). In addition, hospitalist programs with a formal nighttime role were more likely to identify the lack of specific parameters for attending-level contact as a barrier to residents contacting their supervisors during the overnight period (2.54 vs 3.54; P = 0.03). No differences in perception of the regulations were noted for hospitalist programs with existing faculty development on clinical supervision.

DISCUSSION

This study provides important information about how academic hospitalists currently contribute to the supervision of internal medicine residents. Although academic hospitalist groups frequently have faculty providing clinical care on-site at night, and hospitalists often provide overnight supervision of internal medicine trainees, formal supervision of trainees is not uniform, and few hospitalist groups have a mechanism for training or faculty development on how to effectively supervise resident trainees. Hospitalist leaders expressed concern that creating additional formal overnight supervisory responsibilities may add to an already burdened overnight hospitalist. Formalizing this supervisory role, including explicit role definitions and faculty training in trainee supervision, is necessary.

Though our sample size is small, we captured a diverse geographic range of both university and community‐based academic hospitalist programs by surveying group leaders in several distinct regions. We are unable to comment on differences between responding and non‐responding hospitalist programs, but there does not appear to be a systematic difference between these groups.

Our findings are consistent with prior work describing a lack of structured conceptual frameworks for effectively supervising trainees,7, 8 as well as, at times, nebulous expectations for hospitalist faculty. We found that the existence of a formal supervisory policy within the associated residency program, as well as defined roles for hospitalists, increased the likelihood of positive perceptions of the new ACGME supervisory recommendations. The existence of these requirements, however, does not mean that all programs are capable of following them. While additional discussion is needed to best delineate a formal overnight hospitalist role in trainee supervision, clearly defining expectations for faculty, for trainees, and for their interactions may alleviate the struggles seen in programs with ill-defined roles for hospitalist faculty supervision. Although faculty duty hour standards do not exist, the additional duties of nighttime coverage for hospitalists suggest that close attention should be paid to burnout.9 Faculty development on nighttime supervision and teaching may help maximize both learning and patient care efficiency, and provide a framework for this often unstructured educational time.

Acknowledgements

The research reported here was supported by the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service (REA 05‐129, CDA 07‐022). The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs.

References
  1. Philibert I, Friedman P, Williams WT. New requirements for resident duty hours. JAMA. 2002;288:1112–1114.
  2. Nuckols T, Bhattacharya J, Wolman DM, Ulmer C, Escarce J. Cost implications of reduced work hours and workloads for resident physicians. N Engl J Med. 2009;360:2202–2215.
  3. Horwitz L. Why have working hour restrictions apparently not improved patient safety? BMJ. 2011;342:d1200.
  4. Ulmer C, Wolman DM, Johns MME, eds. Resident Duty Hours: Enhancing Sleep, Supervision, and Safety. Washington, DC: National Academies Press; 2008.
  5. Nasca TJ, Day SH, Amis ES; for the ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363.
  6. Association of Program Directors in Internal Medicine (APDIM) Survey 2009. Available at: http://www.im.org/toolbox/surveys/SurveyDataandReports/APDIMSurveyData/Documents/2009_APDIM_summary_web.pdf. Accessed July 30, 2012.
  7. Kennedy TJ, Lingard L, Baker GR, Kitchen L, Regehr G. Clinical oversight: conceptualizing the relationship between supervision and safety. J Gen Intern Med. 2007;22(8):1080–1085.
  8. Farnan JM, Johnson JK, Meltzer DO, et al. Strategies for effective on-call supervision for internal medicine residents: the SUPERB/SAFETY model. J Grad Med Educ. 2010;2(1):46–52.
  9. Glasheen J, Misky G, Reid M, Harrison R, Sharpe B, Auerbach A. Career satisfaction and burnout in academic hospital medicine. Arch Intern Med. 2011;171(8):782–785.
Issue
Journal of Hospital Medicine - 7(7)
Page Number
521-523
Display Headline
Survey of overnight academic hospitalist supervision of trainees
Article Source
Copyright © 2012 Society of Hospital Medicine
Correspondence Location
Department of Medicine and Pritzker School of Medicine, The University of Chicago, 5841 S Maryland Ave, MC 2007, AMB W216, Chicago, IL 60637

Overnight Resident Supervision

Article Type
Changed
Mon, 05/22/2017 - 18:35
Display Headline
Effects of increased overnight supervision on resident education, decision‐making, and autonomy

Postgraduate medical education has traditionally relied on a training model of progressive independence, where housestaff learn patient care through increasing autonomy and decreasing levels of supervision.1 While this framework has little empirical backing, it is grounded in sound educational theory from similar disciplines and endorsed by medical associations.1, 2 The Accreditation Council for Graduate Medical Education (ACGME) recently implemented regulations requiring that first‐year residents have a qualified supervisor physically present or immediately available at all times.3 Previously, oversight by an offsite supervisor (for example, an attending physician at home) was considered adequate. These new regulations, although motivated by patient safety imperatives,4 have elicited concerns that increased supervision may lead to decreased housestaff autonomy and an increased reliance on supervisors for clinical guidance.5 Such changes could ultimately produce less qualified practitioners by the completion of training.

Critics of the current training model point to a patient safety mechanism where housestaff must take responsibility for requesting attending‐level help when situations arise that surpass their skill level.5 For resident physicians, however, the decision to request support is often complex and dependent not only on the clinical question, but also on unique and variable trainee and supervisor factors.6 Survey data from 1999, prior to the current training regulations, showed that increased faculty presence improved resident reports of educational value, quality of patient care, and autonomy.7 A recent survey, performed after the initiation of overnight attending supervision at an academic medical center, demonstrated perceived improvements in educational value and patient‐level outcomes by both faculty and housestaff.8 Whether increased supervision and resident autonomy can coexist remains undetermined.

Overnight rotations for residents (commonly referred to as night float) are often periods of little direct or indirect supervision. A recent systematic review of clinical supervision practices for housestaff across fields found scarce literature on overnight supervision practices.9 There remain limited and conflicting data on the quality of patient care provided by the resident night float,10 as well as evidence of low perceived educational value of night rotations compared with non-night float rotations.11 Yet in 2006, more than three-quarters of all internal medicine programs employed night float rotations.12 In response to ACGME guidelines mandating decreased shift lengths with continued restrictions on overall duty hours, it appears likely that even more training programs will implement night float systems.

The presence of overnight hospitalists (also known as nocturnists) is growing within the academic setting, yet their role in relation to trainees is either poorly defined13 or independent of housestaff.14 To better understand the impact of increasing levels of supervision on residency training, we investigated housestaff perceptions of education, autonomy, and clinical decision‐making before and after implementation of an in‐hospital, overnight attending physician (nocturnist).

METHODS

The study was conducted at a 570‐bed academic, tertiary care medical center affiliated with an internal medicine residency program of 170 housestaff. At our institution, all first year residents perform a week of intern night float consisting of overnight cross‐coverage of general medicine patients on the floor, step‐down, and intensive care units (ICUs). Second and third year residents each complete 4 to 6 days of resident night float each year at this hospital. They are responsible for assisting the intern night float with cross‐coverage, in addition to admitting general medicine patients to the floor, step‐down unit, and intensive care units. Every night at our medical center, 1 intern night float and 1 resident night float are on duty in the hospital; this is in addition to a resident from the on‐call medicine team and a resident working in the ICU. Prior to July 2010, no internal medicine attending physicians were physically present in the hospital at night. Oversight for the intern and resident night float was provided by the attending physician for the on‐call resident ward team, who was at home and available by pager. The night float housestaff were instructed to contact the responsible attending physician only when a major change in clinical status occurred for hospitalized or newly admitted patients, though this expectation was neither standardized nor monitored.

We established a nocturnist program at the start of the 2010 academic year. The position was staffed by hospitalists from within the Division of Hospital Medicine without the use of moonlighters. Two‐thirds of shifts were filled by 3 dedicated nocturnists with remaining staffing provided by junior hospitalist faculty. The dedicated nocturnists had recently completed their internal medicine residency at our institution. Shift length was 12 hours and dedicated nocturnists worked, on average, 10 shifts per month. The nocturnist filled a critical overnight safety role through mandatory bedside staffing of newly admitted ICU patients within 2 hours of admission, discussion in person or via telephone of newly admitted step‐down unit patients within 6 hours of admission, and direct or indirect supervision of the care of any patients undergoing a major change in clinical status. The overnight hospitalist was also available for clinical questions and to assist housestaff with triaging of overnight admissions. After nocturnist implementation, overnight housestaff received direct supervision or had immediate access to direct supervision, while prior to the nocturnist, residents had access only to indirect supervision.

In addition, the nocturnist admitted medicine patients after 1 AM in a 1:1 ratio with the admitting night float resident, performed medical consults, and provided coverage of non‐teaching medicine services. While actual volume numbers were not obtained, the estimated average of resident admissions per night was 2 to 3, and the number of nocturnist admissions was 1 to 2. The nocturnist also met nightly with night float housestaff for half‐hour didactics focusing on the management of common overnight clinical scenarios. The role of the new nocturnist was described to all housestaff in orientation materials given prior to their night float rotation and their general medicine ward rotation.

We administered rolling pre- and post-surveys to internal medicine intern and resident physicians who underwent the night float rotation at our hospital during the 2010 to 2011 academic year. Surveys examined housestaff perceptions of the night float rotation with regard to supervisory roles, educational and clinical value, and clinical decision-making prior to and after implementation of the nocturnist. Surveys were designed by the study investigators based on prior literature,1, 5–10 personal experience, and housestaff suggestions, and were refined during works-in-progress meetings. Surveys were composed of Likert-style questions asking housestaff to rate their level of agreement (1–5, strongly disagree to strongly agree) with statements regarding the supervisory and educational experience of the night float rotation, and to judge their frequency of contact (1–5, never to always/nightly) with an attending physician for specific clinical scenarios. The clinical scenarios described situations involving attending–resident communication around transfers of care, diagnostic evaluation, therapeutic interventions, and adverse events. Scenarios were taken from previous literature describing supervision preferences of faculty and residents during times of critical clinical decision-making.15

One week prior to beginning their night float rotation for the 2010–2011 academic year, housestaff were sent an e-mail request to complete an online survey about their night float rotation during the prior academic year, when no nocturnist was present. One week after completing their night float rotation for the 2010–2011 academic year, housestaff received an e-mail with a link to a post-survey about their recently completed, nocturnist-supervised night float rotation. First-year residents received only a post-survey at the completion of their night float rotation, as they would be unable to reflect on prior experience.

Informed consent was embedded within the e-mail survey request. Survey requests were sent by a fellow in the Division of Hospital Medicine with a brief message cosigned by an associate program director of the residency program. We did not collect unique identifiers from respondents in order to assure participants that the survey was anonymous. No incentive was offered for completing the survey. Survey data were anonymous and were downloaded to a database by a third party. Data were analyzed using Microsoft Excel, and pre- and post-responses were compared using Student t tests. The study was approved by the medical center's Institutional Review Board.
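As a concrete illustration of the pre/post comparison described above, the sketch below runs a two-sample Student t test on invented Likert responses for a single survey item; the study's actual analysis was performed in Microsoft Excel, and none of the numbers here are study data.

```python
# Illustrative sketch (hypothetical data): comparing pre- and post-nocturnist
# Likert responses for one survey item with an unpaired Student t test.
from statistics import mean, stdev
from scipy import stats

pre = [4, 3, 4, 3, 4, 4, 3, 4, 4, 3]    # "I have adequate overnight supervision" (pre), invented
post = [5, 4, 4, 5, 4, 5, 4, 4, 5, 4]   # same item after nocturnist implementation, invented

print(f"pre:  mean {mean(pre):.2f} (SD {stdev(pre):.2f})")
print(f"post: mean {mean(post):.2f} (SD {stdev(post):.2f})")

t_stat, p_val = stats.ttest_ind(pre, post)  # equal-variance Student t test
print(f"t = {t_stat:.2f}, P = {p_val:.4f}")
```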

RESULTS

Rates of response for pre-surveys and post-surveys were 57% (43 respondents) and 51% (53 respondents), respectively. Because of the response rates, and to accurately convey the perceptions of the training program as a whole, we collapsed pre-survey and post-survey responses across levels of training. After implementation of the overnight attending, we observed a significant increase in the perceived clinical value of the night float rotation (3.95 vs 4.27, P = 0.01) as well as in the adequacy of overnight supervision (3.65 vs 4.30, P < 0.0001; Table 1). There was no reported change in housestaff decision-making autonomy (4.35 vs 4.45, P = 0.44). In addition, we noted a nonsignificant trend toward an increased perception of the night float rotation as a valuable educational experience (3.83 vs 4.04, P = 0.24). After implementation of the nocturnist, more resident physicians agreed that overnight supervision by an attending positively impacted patient outcomes (3.79 vs 4.30, P = 0.002).

Table 1. General Perceptions of the Night Float Rotation
Statement | Pre-Nocturnist (n = 43), Mean (SD) | Post-Nocturnist (n = 53), Mean (SD) | P Value
Night float is a valuable educational rotation | 3.83 (0.81) | 4.04 (0.83) | 0.24
Night float is a valuable clinical rotation | 3.95 (0.65) | 4.27 (0.59) | 0.01
I have adequate overnight supervision | 3.65 (0.76) | 4.30 (0.72) | <0.0001
I have sufficient autonomy to make clinical decisions | 4.35 (0.57) | 4.45 (0.60) | 0.44
Overnight supervision by an attending positively impacts patient outcomes | 3.79 (0.88) | 4.30 (0.74) | 0.002
NOTE: Responses range from strongly disagree (1) to strongly agree (5). Response rate (n) fluctuates due to item non-response. Abbreviation: SD, standard deviation.

After implementation of the nocturnist, night float providers demonstrated increased rates of contacting an attending physician overnight (Table 2). There were significantly greater rates of attending contact for transfers from outside facilities (2.00 vs 3.20, P = 0.006) and during times of adverse events (2.51 vs 3.25, P = 0.04). We observed a reported increase in attending contact prior to ordering invasive diagnostic procedures (1.75 vs 2.76, P = 0.004) and noninvasive diagnostic procedures (1.09 vs 1.31, P = 0.03), as well as prior to initiation of intravenous antibiotics (1.11 vs 1.47, P = 0.007) and vasopressors (1.52 vs 2.40, P = 0.004).

Table 2. Self-Reported Incidence of Overnight Attending Contact During Critical Decision-Making
Scenario | Pre-Nocturnist (n = 42), Mean (SD) | Post-Nocturnist (n = 51), Mean (SD) | P Value
Receive transfer from outside facility | 2.00 (1.27) | 3.20 (1.58) | 0.006
Prior to ordering noninvasive diagnostic procedure | 1.09 (0.29) | 1.31 (0.58) | 0.03
Prior to ordering an invasive procedure | 1.75 (0.84) | 2.76 (1.45) | 0.004
Prior to initiation of intravenous antibiotics | 1.11 (0.32) | 1.47 (0.76) | 0.007
Prior to initiation of vasopressors | 1.52 (0.82) | 2.40 (1.49) | 0.004
Patient experiencing adverse event, regardless of cause | 2.51 (1.31) | 3.25 (1.34) | 0.04
NOTE: Responses range from never contact (1) to always contact (5). Response rate (n) fluctuates due to item non-response. Abbreviation: SD, standard deviation.
Patient experiencing adverse event, regardless of cause2.51 (1.31)3.25 (1.34)0.04

After initiating the program, the nocturnist became the overnight provider most commonly contacted by the night float housestaff (Table 3). We observed a decrease in peer-to-peer contact between the night float housestaff and the on-call overnight resident after implementation of the nocturnist (2.67 vs 2.04, P = 0.006).

Table 3. Self-Reported Incidence of Night Float Contact With Overnight Providers for Patient Care
Provider | Pre-Nocturnist (n = 43), Mean (SD) | Post-Nocturnist (n = 53), Mean (SD) | P Value
ICU fellow | 1.86 (0.70) | 1.86 (0.83) | 0.96
On-call resident | 2.67 (0.89) | 2.04 (0.92) | 0.006
ICU resident | 2.14 (0.74) | 2.04 (0.91) | 0.56
On-call medicine attending | 1.41 (0.79) | 1.26 (0.52) | 0.26
Patient's PMD | 1.27 (0.31) | 1.15 (0.41) | 0.31
Referring MD | 1.32 (0.60) | 1.15 (0.45) | 0.11
Nocturnist | N/A | 3.59 (1.22) | N/A
NOTE: Responses range from never (1) to nightly (5). Response rate (n) fluctuates due to item non-response. N/A: the nocturnist position did not exist during the pre-nocturnist period. Abbreviations: ICU, intensive care unit; PMD, primary medical doctor; SD, standard deviation.

Attending presence led to increased agreement that there was a defined overnight attending to contact (2.97 vs 1.96, P < 0.0001) and to a decreased fear of waking an attending overnight for assistance (3.26 vs 2.72, P = 0.03). Increased attending availability, however, did not change resident physicians' fear of revealing knowledge gaps, their desire to make decisions independently, or their belief that contacting an attending would not change a patient's outcome (Table 4).

Table 4. Reasons Night Float Housestaff Do Not Contact an Attending Physician
Statement | Pre-Nocturnist (n = 42), Mean (SD) | Post-Nocturnist (n = 52), Mean (SD) | P Value
No defined attending to contact | 2.97 (1.35) | 1.96 (0.92) | <0.0001
Fear of waking an attending | 3.26 (1.25) | 2.72 (1.09) | 0.03
Fear of revealing knowledge gaps | 2.26 (1.14) | 2.25 (0.96) | 0.95
Would rather make decision on own | 3.40 (0.93) | 3.03 (1.06) | 0.08
Will not change patient outcome | 3.26 (1.06) | 3.21 (1.03) | 0.81
NOTE: Responses range from strongly disagree (1) to strongly agree (5). Response rate (n) fluctuates due to item non-response. Abbreviation: SD, standard deviation.

DISCUSSION

The ACGME's new duty hour regulations require that supervision for first-year residents be provided by a qualified physician (advanced resident, fellow, or attending physician) who is physically present at the hospital. Our study demonstrates that increased direct overnight supervision provided by an in-house nocturnist enhanced the perceived clinical value of the night float rotation and the perceived quality of patient care. Increased attending supervision did not reduce perceived decision-making autonomy, and in fact led to increased rates of attending contact during times of critical clinical decision-making. Such results may help assuage fears that recent regulations mandating enhanced attending supervision will produce less capable practitioners, and offer reassurance that such changes are positively impacting patient care.

Many academic institutions are implementing nocturnists, although their precise roles and responsibilities are still being defined. Our nocturnist program was explicitly designed with housestaff supervision as a core responsibility, with the goal of improving patient safety and housestaff education overnight. We found that availability barriers to attending contact were logically decreased with in‐house faculty presence. Potentially harmful attitudes, however, around requesting support (such as fear of revealing knowledge gaps or the desire to make decisions independently) remained. Furthermore, despite statistically significant increases in contact between faculty and residents at times of critical decision‐making, overall rates of attending contact for diagnostic and therapeutic interventions remained low. It is unknown from our study or previous research, however, what level of contact is appropriate or ideal for many clinical scenarios.

Additionally, we described a novel role for an academic nocturnist at a tertiary care teaching hospital and offered a potential template for developing academic nocturnist positions at similar institutions seeking to increase direct overnight supervision. Such roles have not previously been well defined in the literature. Based on our experience, the nocturnist's role was manageable and well utilized by housestaff, particularly for assistance with critically ill patients and overnight triaging. We believe several factors contributed to the success of this role. First, clear guidelines were presented to housestaff and nocturnists regarding expectations for supervision (for example, staffing ICU admissions within 2 hours). These guidelines likely contributed to the increased attending contact observed during critical clinical decision-making, as well as to our housestaff's perception of improved patient outcomes. Second, the nocturnists were expected to be an integral part of the overnight care team. In many systems, nocturnists act completely independently of the housestaff teams, creating an additional barrier to contact and communication. In our system, because of clear guidelines and their integral role in staffing overnight admissions, the nocturnists were essential partners in care for the housestaff. Third, most of the nocturnists had recently completed their residency training at this institution. Although our survey does not directly address this, we believe their knowledge of the hospital, appreciation of the roles of the intern and the resident within our system, and understanding of the need to preserve housestaff autonomy were essential to building a successful nocturnist role. Lastly, the nocturnists were not only expected to supervise and staff new admissions, but were also given a teaching expectation. We believe they were viewed by housestaff as qualified teaching attendings, similar to the daytime hospitalists. These findings may provide guidance for other institutions seeking to balance overnight hospitalist supervision with preserving residents' ability to make autonomous decisions.

There are several limitations to our study. The findings represent the experience of internal medicine housestaff at a single academic, tertiary care medical center and may not be reflective of other institutions or specialties. We asked housestaff to recall night float experiences from the prior year, which may have introduced recall bias, though responses were obtained before participants underwent the new curriculum. Maturation of housestaff over time could have led to changes in perceived autonomy, value of the night float rotation, and rates of attending contact independent of nocturnist implementation. In addition, there may have been unaccounted changes to other elements of the residency program, hospital, or patient volume between rotations. The implementation of the nocturnist, however, was the only major change to our training program that academic year, and there were no significant changes in patient volume, structure of the teaching or non‐resident services, or other policies around resident supervision.

It is possible that the nocturnist contributed to reports of increased clinical value and perceived quality of patient care simply by decreasing the overnight workload for housestaff, with enhanced supervision and teaching playing a lesser role. Even if this were true, optimizing resident workload is itself an important goal for teaching hospitals and residency programs seeking to maximize patient safety. Inclusion of intern post-rotation surveys may have influenced the data, though we had no reason to suspect that the surveyed interns would respond differently than prior resident groups. The responses of both junior and senior housestaff were pooled; while this potentially weighted the results toward higher-responding groups, we felt that it accurately conveyed the residents' sentiments about the program. Finally, while we compared two models of overnight supervision, we reported only housestaff perceptions of education, autonomy, patient outcomes, and supervisory contact, and not direct measures of knowledge or patient care. Further research will be required to define the relationship between supervision practices and patient-level clinical outcomes.

The new ACGME regulations around resident supervision, as well as the broader movement to improve the safety and quality of care, require residency programs to negotiate a delicate balance between providing high‐quality patient care while preserving graduated independence in clinical training. Our study demonstrates that increased overnight supervision by nocturnists with well‐defined supervisory and teaching roles can preserve housestaff autonomy, improve the clinical experience for trainees, increase access to support during times of critical decision‐making, and potentially lead to improved patient outcomes.

Acknowledgements

Disclosures: No authors received commercial support for the submitted work. Dr Arora reports being an editorial board member for Agency for Healthcare Research and Quality (AHRQ) Web M&M, receiving grants from the ACGME for previous work, and receiving payment for speaking on graduate medical education supervision.

References
  1. Kennedy TJ, Regehr G, Baker GR, Lingard LA. Progressive independence in clinical training: a tradition worth defending? Acad Med. 2005;80(10 suppl):S106–S111.
  2. Joint Committee of the Group on Resident Affairs and Organization of Resident Representatives. Patient Safety and Graduate Medical Education. Washington, DC: Association of American Medical Colleges; February 2003:6.
  3. Accreditation Council for Graduate Medical Education. Common Program Requirements. Available at: http://www.acgme.org/acWebsite/home/Common_Program_Requirements_07012011.pdf. Accessed October 16, 2011.
  4. The IOM medical errors report: 5 years later, the journey continues. Qual Lett Health Lead. 2005;17(1):2–10.
  5. Bush RW. Supervision in medical education: logical fallacies and clear choices. J Grad Med Educ. 2010;2(1):141–143.
  6. Kennedy TJ, Regehr G, Baker GR, Lingard L. Preserving professional credibility: grounded theory study of medical trainees' requests for clinical support. BMJ. 2009;338:b128.
  7. Phy MP, Offord KP, Manning DM, Bundrick JB, Huddleston JM. Increased faculty presence on inpatient teaching services. Mayo Clin Proc. 2004;79(3):332–336.
  8. Trowbridge RL, Almeder L, Jacquet M, Fairfield KM. The effect of overnight in-house attending coverage on perceptions of care and education on a general medical service. J Grad Med Educ. 2010;2(1):53–56.
  9. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87(4):428–442.
  10. Jasti H, Hanusa BH, Switzer GE, Granieri R, Elnicki M. Residents' perceptions of a night float system. BMC Med Educ. 2009;9:52.
  11. Luks AM, Smith CS, Robins L, Wipf JE. Resident perceptions of the educational value of night float rotations. Teach Learn Med. 2010;22(3):196–201.
  12. Wallach SL, Alam K, Diaz N, Shine D. How do internal medicine residency programs evaluate their resident float experiences? South Med J. 2006;99(9):919–923.
  13. Beasley BW, McBride J, McDonald FS. Hospitalist involvement in internal medicine residencies. J Hosp Med. 2009;4(8):471–475.
  14. Ogden PE, Sibbitt S, Howell M, et al. Complying with ACGME resident duty hour restrictions: restructuring the 80 hour workweek to enhance education and patient safety at Texas A&M/Scott & White Memorial Hospital. Acad Med. 2006;81(12):1026–1031.
  15. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. On-call supervision and resident autonomy: from micromanager to absentee attending. Am J Med. 2009;122(8):784–788.
Issue
Journal of Hospital Medicine - 7(8)
Page Number
606-610


There are several limitations to our study. The findings represent the experience of internal medicine housestaff at a single academic, tertiary care medical center and may not be reflective of other institutions or specialties. We asked housestaff to recall night float experiences from the prior year, which may have introduced recall bias, though responses were obtained before participants underwent the new curriculum. Maturation of housestaff over time could have led to changes in perceived autonomy, value of the night float rotation, and rates of attending contact independent of nocturnist implementation. In addition, there may have been unaccounted changes to other elements of the residency program, hospital, or patient volume between rotations. The implementation of the nocturnist, however, was the only major change to our training program that academic year, and there were no significant changes in patient volume, structure of the teaching or non‐resident services, or other policies around resident supervision.

It is possible that the nocturnist may have contributed to reports of increased clinical value and perceived quality of patient care simply by decreasing overnight workload for housestaff, and enhanced supervision and teaching may have played a lesser role. Even if this were true, optimizing resident workload is in itself an important goal for teaching hospitals and residency programs alike in order to maximize patient safety. Inclusion of intern post‐rotation surveys may have influenced data; though, we had no reason to suspect the surveyed interns would respond in a different manner than prior resident groups. The responses of both junior and senior housestaff were pooled; while this potentially weighted the results in favor of higher responding groups, we felt that it conveyed the residents' accurate sentiments on the program. Finally, while we compared two models of overnight supervision, we reported only housestaff perceptions of education, autonomy, patient outcomes, and supervisory contact, and not direct measures of knowledge or patient care. Further research will be required to define the relationship between supervision practices and patient‐level clinical outcomes.

The new ACGME regulations around resident supervision, as well as the broader movement to improve the safety and quality of care, require residency programs to negotiate a delicate balance between providing high‐quality patient care while preserving graduated independence in clinical training. Our study demonstrates that increased overnight supervision by nocturnists with well‐defined supervisory and teaching roles can preserve housestaff autonomy, improve the clinical experience for trainees, increase access to support during times of critical decision‐making, and potentially lead to improved patient outcomes.

Acknowledgements

Disclosures: No authors received commercial support for the submitted work. Dr Arora reports being an editorial board member for Agency for Healthcare Research and Quality (AHRQ) Web M&M, receiving grants from the ACGME for previous work, and receiving payment for speaking on graduate medical education supervision.

Postgraduate medical education has traditionally relied on a training model of progressive independence, where housestaff learn patient care through increasing autonomy and decreasing levels of supervision.1 While this framework has little empirical backing, it is grounded in sound educational theory from similar disciplines and endorsed by medical associations.1, 2 The Accreditation Council for Graduate Medical Education (ACGME) recently implemented regulations requiring that first‐year residents have a qualified supervisor physically present or immediately available at all times.3 Previously, oversight by an offsite supervisor (for example, an attending physician at home) was considered adequate. These new regulations, although motivated by patient safety imperatives,4 have elicited concerns that increased supervision may lead to decreased housestaff autonomy and an increased reliance on supervisors for clinical guidance.5 Such changes could ultimately produce less qualified practitioners by the completion of training.

Critics of the current training model point to a patient safety mechanism where housestaff must take responsibility for requesting attending‐level help when situations arise that surpass their skill level.5 For resident physicians, however, the decision to request support is often complex and dependent not only on the clinical question, but also on unique and variable trainee and supervisor factors.6 Survey data from 1999, prior to the current training regulations, showed that increased faculty presence improved resident reports of educational value, quality of patient care, and autonomy.7 A recent survey, performed after the initiation of overnight attending supervision at an academic medical center, demonstrated perceived improvements in educational value and patient‐level outcomes by both faculty and housestaff.8 Whether increased supervision and resident autonomy can coexist remains undetermined.

Overnight rotations for residents (commonly referred to as night float) are often periods of little direct or indirect supervision. A recent systematic review of clinical supervision practices for housestaff across fields found scarce literature on overnight supervision practices.9 Data on the quality of patient care provided by the resident night float remain limited and conflicting,10 and there is evidence of a low perceived educational value of night rotations compared with non‐night float rotations.11 Yet in 2006, more than three‐quarters of all internal medicine programs employed night float rotations.12 In response to ACGME guidelines mandating decreased shift lengths with continued restrictions on overall duty hours, it appears likely that even more training programs will implement night float systems.

The presence of overnight hospitalists (also known as nocturnists) is growing within the academic setting, yet their role in relation to trainees is either poorly defined13 or independent of housestaff.14 To better understand the impact of increasing levels of supervision on residency training, we investigated housestaff perceptions of education, autonomy, and clinical decision‐making before and after implementation of an in‐hospital, overnight attending physician (nocturnist).

METHODS

The study was conducted at a 570‐bed academic, tertiary care medical center affiliated with an internal medicine residency program of 170 housestaff. At our institution, all first‐year residents perform a week of intern night float consisting of overnight cross‐coverage of general medicine patients on the floor, in the step‐down unit, and in the intensive care units (ICUs). Second‐ and third‐year residents each complete 4 to 6 days of resident night float per year at this hospital. They are responsible for assisting the intern night float with cross‐coverage, in addition to admitting general medicine patients to the floor, step‐down unit, and ICUs. Every night at our medical center, 1 intern night float and 1 resident night float are on duty in the hospital, in addition to a resident from the on‐call medicine team and a resident working in the ICU. Prior to July 2010, no internal medicine attending physicians were physically present in the hospital at night. Oversight for the intern and resident night float was provided by the attending physician for the on‐call resident ward team, who was at home and available by pager. The night float housestaff were instructed to contact the responsible attending physician only when a major change in clinical status occurred for hospitalized or newly admitted patients, though this expectation was neither standardized nor monitored.

We established a nocturnist program at the start of the 2010 academic year. The position was staffed by hospitalists from within the Division of Hospital Medicine without the use of moonlighters. Two‐thirds of shifts were filled by 3 dedicated nocturnists, with the remaining shifts staffed by junior hospitalist faculty. The dedicated nocturnists had recently completed their internal medicine residency at our institution. Shift length was 12 hours, and dedicated nocturnists worked, on average, 10 shifts per month. The nocturnist filled a critical overnight safety role through mandatory bedside staffing of newly admitted ICU patients within 2 hours of admission, discussion in person or via telephone of newly admitted step‐down unit patients within 6 hours of admission, and direct or indirect supervision of the care of any patient undergoing a major change in clinical status. The overnight hospitalist was also available for clinical questions and to assist housestaff with triaging overnight admissions. After nocturnist implementation, overnight housestaff received direct supervision or had immediate access to direct supervision, whereas prior to the nocturnist, residents had access only to indirect supervision.

In addition, the nocturnist admitted medicine patients after 1 AM in a 1:1 ratio with the admitting night float resident, performed medical consults, and provided coverage of non‐teaching medicine services. While actual volume numbers were not obtained, the estimated average of resident admissions per night was 2 to 3, and the number of nocturnist admissions was 1 to 2. The nocturnist also met nightly with night float housestaff for half‐hour didactics focusing on the management of common overnight clinical scenarios. The role of the new nocturnist was described to all housestaff in orientation materials given prior to their night float rotation and their general medicine ward rotation.

We administered rolling pre‐surveys and post‐surveys to internal medicine intern and resident physicians who underwent the night float rotation at our hospital during the 2010 to 2011 academic year. Surveys examined housestaff perceptions of the night float rotation with regard to supervisory roles, educational and clinical value, and clinical decision‐making prior to and after implementation of the nocturnist. Surveys were designed by the study investigators based on prior literature,1, 5-10 personal experience, and housestaff suggestions, and were refined during works‐in‐progress meetings. Surveys were composed of Likert‐style questions asking housestaff to rate their level of agreement (1-5, strongly disagree to strongly agree) with statements regarding the supervisory and educational experience of the night float rotation, and to judge their frequency of contact (1-5, never to always/nightly) with an attending physician for specific clinical scenarios. The clinical scenarios described situations dealing with attending–resident communication around transfers of care, diagnostic evaluation, therapeutic interventions, and adverse events. Scenarios were taken from previous literature describing supervision preferences of faculty and residents during times of critical clinical decision‐making.15

One week prior to beginning their night float rotation for the 2010-2011 academic year, housestaff were sent an e‐mail request to complete an online survey about their night float rotation during the prior academic year, when no nocturnist was present. One week after completing their night float rotation for the 2010-2011 academic year, housestaff received an e‐mail with a link to a post‐survey about their recently completed, nocturnist‐supervised night float rotation. First‐year residents received only a post‐survey at the completion of their night float rotation, as they would be unable to reflect on prior experience.

Informed consent was embedded within the e‐mail survey request. Survey requests were sent by a fellow within the Division of Hospital Medicine with a brief message cosigned by an associate program director of the residency program. We did not collect unique identifiers from respondents in order to offer additional assurance to participants that the survey was anonymous. No incentive was offered for completion of the survey. Survey data were anonymous and downloaded to a database by a third party. Data were analyzed using Microsoft Excel, and pre‐responses and post‐responses were compared using Student's t test. The study was approved by the medical center's Institutional Review Board.
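
The statistical comparison described above is straightforward to sketch in code. The following is a minimal illustration, not the authors' Excel workflow: it runs an unpaired two‐sample t test on Likert responses for a single survey item in Python with SciPy. The response arrays are hypothetical placeholders rather than study data, and the paper does not state whether an equal‐ or unequal‐variance test was used.

```python
# Minimal sketch of the pre/post comparison: an unpaired two-sample t test
# on Likert responses for one survey item. NOT the authors' Excel workflow;
# the arrays below are hypothetical placeholders, not study data.
from scipy import stats

pre_responses = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4]   # hypothetical pre-nocturnist ratings (1-5)
post_responses = [5, 4, 4, 5, 4, 5, 4, 5, 4, 5]  # hypothetical post-nocturnist ratings (1-5)

# Student's t test with equal variances assumed (SciPy's default);
# the paper does not specify which variant was used in Excel.
t_stat, p_value = stats.ttest_ind(pre_responses, post_responses)

print(f"pre mean = {sum(pre_responses) / len(pre_responses):.2f}")
print(f"post mean = {sum(post_responses) / len(post_responses):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```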

RESULTS

Response rates were 57% (43 respondents) for the pre‐survey and 51% (53 respondents) for the post‐survey. Given these response rates, and to convey the perceptions of the training program as a whole, we collapsed pre‐survey and post‐survey responses across levels of training. After implementation of the overnight attending, we observed a significant increase in the perceived clinical value of the night float rotation (3.95 vs 4.27, P = 0.01) as well as in the adequacy of overnight supervision (3.65 vs 4.30, P < 0.0001; Table 1). There was no reported change in housestaff decision‐making autonomy (4.35 vs 4.45, P = 0.44). In addition, we noted a nonsignificant trend toward an increased perception of the night float rotation as a valuable educational experience (3.83 vs 4.04, P = 0.24). After implementation of the nocturnist, more resident physicians agreed that overnight supervision by an attending positively impacted patient outcomes (3.79 vs 4.30, P = 0.002).

Table 1. General Perceptions of the Night Float Rotation

Statement | Pre‐Nocturnist (n = 43), Mean (SD) | Post‐Nocturnist (n = 53), Mean (SD) | P Value
Night float is a valuable educational rotation | 3.83 (0.81) | 4.04 (0.83) | 0.24
Night float is a valuable clinical rotation | 3.95 (0.65) | 4.27 (0.59) | 0.01
I have adequate overnight supervision | 3.65 (0.76) | 4.30 (0.72) | <0.0001
I have sufficient autonomy to make clinical decisions | 4.35 (0.57) | 4.45 (0.60) | 0.44
Overnight supervision by an attending positively impacts patient outcomes | 3.79 (0.88) | 4.30 (0.74) | 0.002

NOTE: Responses are strongly disagree (1) to strongly agree (5). Response rate (n) fluctuates due to item non‐response. Abbreviations: SD, standard deviation.
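
Because Table 1 reports group means, standard deviations, and sample sizes, its p-values can be roughly reproduced from the summary statistics alone. The sketch below is an illustration rather than the authors' analysis: it applies SciPy's summary‐statistics t test to the "valuable clinical rotation" item, assuming an equal‐variance Student's t test (the paper does not specify the variant). It yields t ≈ 2.5 and p ≈ 0.013, consistent with the 0.01 reported above.

```python
# Sketch: approximate check of a Table 1 p-value from summary statistics alone.
# Assumes an equal-variance Student's t test; the paper does not state the variant used.
from scipy import stats

result = stats.ttest_ind_from_stats(
    mean1=3.95, std1=0.65, nobs1=43,  # "valuable clinical rotation", pre-nocturnist
    mean2=4.27, std2=0.59, nobs2=53,  # post-nocturnist
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")  # roughly t = 2.5, p = 0.013
```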

After implementation of the nocturnist, night float housestaff reported increased rates of contacting an attending physician overnight (Table 2). Rates of attending contact were significantly greater for transfers from outside facilities (2.00 vs 3.20, P = 0.006) and during adverse events (2.51 vs 3.25, P = 0.04). We also observed a reported increase in attending contact prior to ordering invasive diagnostic procedures (1.75 vs 2.76, P = 0.004) and noninvasive diagnostic procedures (1.09 vs 1.31, P = 0.03), as well as prior to initiation of intravenous antibiotics (1.11 vs 1.47, P = 0.007) and vasopressors (1.52 vs 2.40, P = 0.004).

Table 2. Self‐Reported Incidence of Overnight Attending Contact During Critical Decision‐Making

Scenario | Pre‐Nocturnist (n = 42), Mean (SD) | Post‐Nocturnist (n = 51), Mean (SD) | P Value
Receive transfer from outside facility | 2.00 (1.27) | 3.20 (1.58) | 0.006
Prior to ordering noninvasive diagnostic procedure | 1.09 (0.29) | 1.31 (0.58) | 0.03
Prior to ordering an invasive procedure | 1.75 (0.84) | 2.76 (1.45) | 0.004
Prior to initiation of intravenous antibiotics | 1.11 (0.32) | 1.47 (0.76) | 0.007
Prior to initiation of vasopressors | 1.52 (0.82) | 2.40 (1.49) | 0.004
Patient experiencing adverse event, regardless of cause | 2.51 (1.31) | 3.25 (1.34) | 0.04

NOTE: Responses are never contact (1) to always contact (5). Response rate (n) fluctuates due to item non‐response. Abbreviations: SD, standard deviation.

After initiating the program, the nocturnist became the overnight provider most commonly contacted by the night float housestaff (Table 3). We observed a decrease in peer‐to‐peer contact between the night float housestaff and the on‐call overnight resident after implementation of the nocturnist (2.67 vs 2.04, P = 0.006).

Table 3. Self‐Reported Incidence of Night Float Contact With Overnight Providers for Patient Care

Provider | Pre‐Nocturnist (n = 43), Mean (SD) | Post‐Nocturnist (n = 53), Mean (SD) | P Value
ICU fellow | 1.86 (0.70) | 1.86 (0.83) | 0.96
On‐call resident | 2.67 (0.89) | 2.04 (0.92) | 0.006
ICU resident | 2.14 (0.74) | 2.04 (0.91) | 0.56
On‐call medicine attending | 1.41 (0.79) | 1.26 (0.52) | 0.26
Patient's PMD | 1.27 (0.31) | 1.15 (0.41) | 0.31
Referring MD | 1.32 (0.60) | 1.15 (0.45) | 0.11
Nocturnist | — | 3.59 (1.22) | —

NOTE: Responses are never (1) to nightly (5). Response rate (n) fluctuates due to item non‐response. Abbreviations: ICU, intensive care unit; PMD, primary medical doctor; SD, standard deviation.

Attending presence led to increased agreement that there was a defined overnight attending to contact (2.97 vs 1.96, P < 0.0001) and a decreased fear of waking an attending overnight for assistance (3.26 vs 2.72, P = 0.03). Increased attending availability, however, did not change resident physicians' fear of revealing knowledge gaps, their desire to make decisions independently, or their belief that contacting an attending would not change a patient's outcome (Table 4).

Table 4. Reasons Night Float Housestaff Do Not Contact an Attending Physician

Statement | Pre‐Nocturnist (n = 42), Mean (SD) | Post‐Nocturnist (n = 52), Mean (SD) | P Value
No defined attending to contact | 2.97 (1.35) | 1.96 (0.92) | <0.0001
Fear of waking an attending | 3.26 (1.25) | 2.72 (1.09) | 0.03
Fear of revealing knowledge gaps | 2.26 (1.14) | 2.25 (0.96) | 0.95
Would rather make decision on own | 3.40 (0.93) | 3.03 (1.06) | 0.08
Will not change patient outcome | 3.26 (1.06) | 3.21 (1.03) | 0.81

NOTE: Responses are strongly disagree (1) to strongly agree (5). Response rate (n) fluctuates due to item non‐response. Abbreviations: SD, standard deviation.

DISCUSSION

The ACGME's new duty hour regulations require that supervision for first‐year residents be provided by a qualified physician (advanced resident, fellow, or attending physician) who is physically present at the hospital. Our study demonstrates that increased direct overnight supervision provided by an in‐house nocturnist enhanced the perceived clinical value of the night float rotation and the perceived quality of patient care. In our study, increased attending supervision did not reduce perceived decision‐making autonomy and in fact led to increased rates of attending contact during times of critical clinical decision‐making. Such results may help assuage fears that recent regulations mandating enhanced attending supervision will produce less capable practitioners, and they offer reassurance that such changes are positively affecting patient care.

Many academic institutions are implementing nocturnists, although their precise roles and responsibilities are still being defined. Our nocturnist program was explicitly designed with housestaff supervision as a core responsibility, with the goal of improving patient safety and housestaff education overnight. We found that, as expected, availability barriers to attending contact decreased with in‐house faculty presence. Potentially harmful attitudes around requesting support, however, such as fear of revealing knowledge gaps or the desire to make decisions independently, remained. Furthermore, despite statistically significant increases in contact between faculty and residents at times of critical decision‐making, overall rates of attending contact for diagnostic and therapeutic interventions remained low. It is unknown from our study or previous research, however, what level of contact is appropriate or ideal for many clinical scenarios.

Additionally, we described a novel role for an academic nocturnist at a tertiary care teaching hospital and offered a potential template for the development of academic nocturnists at similar institutions seeking to increase direct overnight supervision. Such roles have not previously been well defined in the literature. Based on our experience, the nocturnist's role was manageable and well utilized by housestaff, particularly for assistance with critically ill patients and overnight triaging. We believe a number of factors contributed to the success of this role. First, clear guidelines were presented to housestaff and nocturnists regarding expectations for supervision (for example, staffing ICU admissions within 2 hours). These guidelines likely contributed to the increased attending contact observed during critical clinical decision‐making, as well as to the improved patient outcomes perceived by our housestaff. Second, the nocturnists were expected to be an integral part of the overnight care team. In many systems, nocturnists act completely independently of the housestaff teams, creating an additional barrier to contact and communication. In our system, because of clear guidelines and their integral role in staffing overnight admissions, the nocturnists were an essential partner in care for the housestaff. Third, most of the nocturnists had recently completed their residency training at this institution. Although our survey does not directly address this, we believe their knowledge of the hospital, appreciation of the roles of the intern and the resident within our system, and understanding of the need to preserve housestaff autonomy were essential to building a successful nocturnist role. Lastly, the nocturnists were not only expected to supervise and staff new admissions but were also given a teaching expectation. We believe they were viewed by housestaff as qualified teaching attendings, similar to the daytime hospitalists. These findings may provide guidance for other institutions seeking to balance overnight hospitalist supervision with preserving residents' ability to make autonomous decisions.

There are several limitations to our study. The findings represent the experience of internal medicine housestaff at a single academic, tertiary care medical center and may not reflect other institutions or specialties. We asked housestaff to recall night float experiences from the prior year, which may have introduced recall bias, though responses were obtained before participants underwent the new curriculum. Maturation of housestaff over time could have led to changes in perceived autonomy, in the perceived value of the night float rotation, and in rates of attending contact independent of nocturnist implementation. In addition, there may have been unaccounted‐for changes to other elements of the residency program, the hospital, or patient volume between rotations. The implementation of the nocturnist, however, was the only major change to our training program that academic year, and there were no significant changes in patient volume, in the structure of the teaching or non‐resident services, or in other policies around resident supervision.

It is possible that the nocturnist contributed to reports of increased clinical value and perceived quality of patient care simply by decreasing the overnight workload for housestaff, with enhanced supervision and teaching playing a lesser role. Even if this were true, optimizing resident workload is itself an important goal for teaching hospitals and residency programs in order to maximize patient safety. Inclusion of intern post‐rotation surveys may have influenced the data, though we had no reason to suspect that the surveyed interns would respond differently than prior resident groups. The responses of both junior and senior housestaff were pooled; while this potentially weighted the results toward groups with higher response rates, we felt that it accurately conveyed the residents' sentiments on the program. Finally, while we compared two models of overnight supervision, we reported only housestaff perceptions of education, autonomy, patient outcomes, and supervisory contact, not direct measures of knowledge or patient care. Further research will be required to define the relationship between supervision practices and patient‐level clinical outcomes.

The new ACGME regulations around resident supervision, as well as the broader movement to improve the safety and quality of care, require residency programs to negotiate a delicate balance between providing high‐quality patient care and preserving graduated independence in clinical training. Our study demonstrates that increased overnight supervision by nocturnists with well‐defined supervisory and teaching roles can preserve housestaff autonomy, improve the clinical experience for trainees, increase access to support during times of critical decision‐making, and potentially lead to improved patient outcomes.

Acknowledgements

Disclosures: No authors received commercial support for the submitted work. Dr Arora reports being an editorial board member for Agency for Healthcare Research and Quality (AHRQ) Web M&M, receiving grants from the ACGME for previous work, and receiving payment for speaking on graduate medical education supervision.

References
  1. Kennedy TJ, Regehr G, Baker GR, Lingard LA. Progressive independence in clinical training: a tradition worth defending? Acad Med. 2005;80(10 suppl):S106-S111.
  2. Joint Committee of the Group on Resident Affairs and Organization of Resident Representatives. Patient Safety and Graduate Medical Education. Washington, DC: Association of American Medical Colleges; February 2003:6.
  3. Accreditation Council for Graduate Medical Education. Common Program Requirements. Available at: http://www.acgme.org/acWebsite/home/Common_Program_Requirements_07012011.pdf. Accessed October 16, 2011.
  4. The IOM medical errors report: 5 years later, the journey continues. Qual Lett Health Lead. 2005;17(1):2-10.
  5. Bush RW. Supervision in medical education: logical fallacies and clear choices. J Grad Med Educ. 2010;2(1):141-143.
  6. Kennedy TJ, Regehr G, Baker GR, Lingard L. Preserving professional credibility: grounded theory study of medical trainees' requests for clinical support. BMJ. 2009;338:b128.
  7. Phy MP, Offord KP, Manning DM, Bundrick JB, Huddleston JM. Increased faculty presence on inpatient teaching services. Mayo Clin Proc. 2004;79(3):332-336.
  8. Trowbridge RL, Almeder L, Jacquet M, Fairfield KM. The effect of overnight in‐house attending coverage on perceptions of care and education on a general medical service. J Grad Med Educ. 2010;2(1):53-56.
  9. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87(4):428-442.
  10. Jasti H, Hanusa BH, Switzer GE, Granieri R, Elnicki M. Residents' perceptions of a night float system. BMC Med Educ. 2009;9:52.
  11. Luks AM, Smith CS, Robins L, Wipf JE. Resident perceptions of the educational value of night float rotations. Teach Learn Med. 2010;22(3):196-201.
  12. Wallach SL, Alam K, Diaz N, Shine D. How do internal medicine residency programs evaluate their resident float experiences? South Med J. 2006;99(9):919-923.
  13. Beasley BW, McBride J, McDonald FS. Hospitalist involvement in internal medicine residencies. J Hosp Med. 2009;4(8):471-475.
  14. Ogden PE, Sibbitt S, Howell M, et al. Complying with ACGME resident duty hour restrictions: restructuring the 80‐hour workweek to enhance education and patient safety at Texas A&M/Scott & White Memorial Hospital. Acad Med. 2006;81(12):1026-1031.
  15. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. On‐call supervision and resident autonomy: from micromanager to absentee attending. Am J Med. 2009;122(8):784-788.
Issue
Journal of Hospital Medicine - 7(8)
Page Number
606-610
Display Headline
Effects of increased overnight supervision on resident education, decision‐making, and autonomy
Article Source
Copyright © 2012 Society of Hospital Medicine
Correspondence Location
Division of Hospital Medicine, San Francisco General Hospital, Department of Medicine, University of California San Francisco, 1001 Potrero Ave, Room 5H‐4, San Francisco, CA 94110

Erratum: Investing in the future: Building an academic hospitalist faculty development program

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Erratum: Investing in the future: Building an academic hospitalist faculty development program

The disclosure statement for the article "Investing in the Future: Building an Academic Hospitalist Faculty Development Program," by Niraj L. Sehgal, MD, MPH, Bradley A. Sharpe, MD, Andrew A. Auerbach, MD, MPH, and Robert M. Wachter, MD, published in Volume 6, Issue 3, pages 161-166 of the Journal of Hospital Medicine, was incorrect. The correct disclosure statement is: All authors report no relevant conflicts of interest. The publisher regrets this error.

Issue
Journal of Hospital Medicine - 6(4)
Page Number
243-243
Article Source
Copyright © 2011 Society of Hospital Medicine