Teaching Physical Examination to Medical Students on Inpatient Medicine Teams: A Prospective, Mixed-Methods Descriptive Study


 

1Medical College of Wisconsin Affiliated Hospitals, Milwaukee, Wisconsin. At the time of this study, Dr. Bergl was with the Division of General Internal Medicine, Medical College of Wisconsin, Milwaukee, Wisconsin. 2Medical College of Wisconsin, Milwaukee, Wisconsin.

Physical examination (PE) is a core clinical skill in undergraduate medical education.1 Although the optimal approach to teaching clinical skills is debated, robust preclinical curricula should generally be followed by iterative skill development during clinical rotations.2,3

The internal medicine rotation represents a critical time to enhance PE skills. Diagnostic decision making and PE are highly prioritized competencies for the internal medicine clerkship,4 and students will likely utilize many core examination skills1,2 during this time. Bedside teaching of PE during the internal medicine service also provides an opportunity for students to receive feedback based on direct observation,5 a sine qua non of competency-based assessment.

Unfortunately, current internal medicine training environments limit opportunities for workplace-based instruction in PE. Recent studies suggest diminishing time spent on bedside patient care and teaching, with computer-based “indirect patient care” dominating much of the clinical workday of internal medicine services.6-8 However, the literature does not delineate how often medical students are enhancing their PE skills during clinical rotations or describe how the educational environment may influence PE teaching.

We aimed to describe the content and context of PE instruction within the workflow of the internal medicine clerkship. Specifically, we sought to explore which strategies physician team members used to teach PE to students. We also sought to describe factors in the inpatient learning environment that might explain why PE instruction occurs infrequently.

METHODS

We conducted a prospective mixed-methods study using time-motion analysis, checklists on clinical teaching, and daily open-ended observations written by a trained observer from June through August 2015 at a single academic medical center. Subjects were recruited from internal medicine teaching teams and were allowed to opt out. Teaching teams had 2 formats: (1) a traditional team with an attending physician (hospitalist or general internist), a senior resident, 2 interns, a fourth-year medical student, and 2 third-year students or (2) a hospitalist team in which a third-year student worked directly with a hospitalist and an advanced practitioner. The proposal was submitted to the Medical College of Wisconsin Institutional Review Board and deemed exempt from further review.

All observations were carried out by a single investigator (A.T.), who was a second-year medical student at the time. To train this observer and to pilot the data collection instruments, our lead investigator (P.B.) directly supervised our observer on 4 separate occasions, totaling over 12 hours of mentored co-observation. Immediately after each training session, both investigators (A.T. and P.B.) debriefed to compare notes, to review checklists on recorded observations, and to discuss areas of uncertainty. During the training period, formal metrics of agreement (eg, kappa coefficients) were not gathered, as data collection instruments were still being refined.

Observation periods were centered on third-year medical students and their interactions with patients and members of the teaching team. Observed activities included pre-rounding, teaching rounds with the attending physician, and new patient admissions during call days. Observations generally occurred between the hours of 7 AM and 6 PM, and we limited periods of observation to 3 consecutive hours to minimize observer fatigue. Observation periods were selected to maximize the number of subjects and teams observed, to adequately capture pre-rounding and new admissions activities, and to account for variations in rounding styles throughout the call cycle. Teams were excluded if a member of the study team was an attending physician on the clinical team or if any member of the patient care team had opted out of the study.

Data were collected on paper checklists that included idealized bedside teaching activities around PE. Teaching activities were identified through a review of relevant literature9,10 and were further informed by our senior investigator’s own experience with faculty development in this area11 and by team members’ attendance at bedside teaching workshops. At the end of each day, our observer also wrote brief observations that summarized factors affecting bedside teaching of PE. Checklist data were transferred to an Excel file (Microsoft), and written observations were imported into NVivo 10 (QSR International, Melbourne, Australia) for coding and analysis.

Checklist data were analyzed using simple descriptive statistics. We compared time spent on various types of rounding using ANOVA, and we used a two-tailed Student t test to compare the amount of time students spent examining patients on pre-rounds versus new admissions. To ascertain differences in the frequency of PE teaching activities by location, we used chi-squared tests. Statistical analysis was performed using the embedded statistical functions in Microsoft Excel. A P value of <.05 was used as the cutoff for significance.
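To show how these comparisons fit together, the sketch below reruns the same three tests in Python with SciPy rather than Excel's built-in functions. It is a minimal illustration, not the authors' analysis: the durations are invented placeholders, and only the encounter totals in the contingency table come from the Results; the yes/no splits are assumptions.

```python
# Minimal sketch of the analyses described above (illustrative only, not the authors' code).
# All durations and the yes/no splits below are invented placeholders.
import numpy as np
from scipy import stats

# Hypothetical per-encounter durations (minutes) by rounding location
bedside = np.array([6, 7, 9, 5, 8, 7, 6])
hallway = np.array([8, 9, 7, 10, 6, 8])
workroom = np.array([7, 6, 8, 7, 5, 9, 7])

# ANOVA: does mean time per patient encounter differ by location?
f_stat, p_anova = stats.f_oneway(bedside, hallway, workroom)

# Two-tailed t test: student exam time on pre-rounds vs. new admissions (hypothetical minutes)
prerounds = np.array([4, 5, 6, 3, 5])
admissions = np.array([8, 9, 7, 10, 8])
t_stat, p_ttest = stats.ttest_ind(prerounds, admissions)  # two-sided by default

# Chi-squared test: frequency of one PE teaching activity by location.
# Rows = bedside (123 encounters), hallway (43), workroom (163); columns = activity seen, not seen.
# Row totals match the Results; the splits are assumed for illustration.
observed = np.array([[40, 83],
                     [10, 33],
                     [25, 138]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(observed)

print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.3f}")
print(f"t test: t = {t_stat:.2f}, P = {p_ttest:.3f}")
print(f"Chi-squared: chi2 = {chi2:.2f}, df = {dof}, P = {p_chi2:.3f}")
```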

We analyzed the written observations using conventional qualitative content analysis. Two investigators (A.T. and P.B.) reviewed the written comments and used open coding to devise a preliminary inductive coding scheme. Codes were refined iteratively, and a schema of categories and nodes was outlined in a codebook that was periodically reviewed by the entire research team. The coding investigators met regularly to ensure consistency in coding, and a third team member remained available to reconcile significant disagreements in code definitions.

 

 

RESULTS

Eighty-one subjects participated in the study: 21 were attending physicians, 12 residents, 21 interns, 11 senior medical students, and 26 junior medical students. We observed 16 distinct inpatient teaching teams and 329 unique patient-related events (discussions and/or patient-clinician encounters), with most events being observed during attending rounds (269/329, or 82%). There were 123 encounters at the bedside, averaging 7 minutes; 43 encounters occurred in the hallway, averaging 8 minutes each; and 163 encounters occurred in a workroom and averaged 7 minutes per patient discussion. We also observed 28 student-patient encounters during pre-round activities and 30 student-patient encounters during new admissions.

Teaching and Direct Observation

During attending rounds at the bedside, the attending physician examined the patient 82 times out of 123 patient encounters (67%). Teaching activities during these PEs were mostly limited to the attending physician or senior resident noting findings (37 instances out of 82 examinations, or 45%). Rarely did the teacher ask students to re-examine the patient before revealing relevant findings (5 instances out of 82 examinations, or 6%), and only during 15% of bedside examinations did the attending physician directly observe students performing a portion of the PE. As demonstrated in Table 1, discussions at the bedside were more likely to reference the PE (P < .001, chi-squared) and more often resulted in specific plans to verify physical findings (P < .001, chi-squared) compared with patient-related discussions in other settings. The location of rounding activities, however, did not affect how often teams incorporated PE into clinical decision-making (P = .82).

During 28 pre-rounding encounters, students usually examined the patient (26 out of 28 instances, or 93%) but were observed doing so only 4 times (out of 26 instances, or 15%). During 30 new patient admissions, students examined 27 patients (90%) and had their PE observed 6 times (out of 27 instances, or 22%). There were no significant differences in the frequency of these activities (P > .05, chi-squared) between pre-rounds and new admissions.

Observations on Teaching Strategies

In the written observations, we categorized the various methods used to teach PE. Bedside teaching of PE most often involved teachers simply describing or discussing physical findings (42 mentions in observations) or verifying a student’s reported findings (15 mentions). Teachers were also observed to use bedside teaching to contextualize findings (13 mentions), such as relating the quality of bowel sounds to the patient’s constipation or discussing expected pupillary light reflexes in a neurologically intact patient. Less commonly, attending physicians narrated steps in their PE technique (9 mentions). Students were infrequently encouraged to practice a specific PE skill again (7 mentions) or allowed to re-examine and reconsider their initial interpretations (5 mentions).

Our written observations also identified factors that may affect clinical instruction of PE, as shown in Table 2. In the learning environment, the physical space, place, and timing of teaching moments all influenced PE teaching on the wards. Clinical workload and a focus on efficiency appeared to diminish the quality of PE instruction, for example by limiting the number of participants or by leading teams to conduct “sit-down rounds” in workrooms.

DISCUSSION

This observational study of clinical teaching on internal medicine teaching services demonstrates that PE teaching is most likely to occur during bedside rounding. However, even in bedside encounters, most PE instruction is limited to physician team members pointing out significant findings. Although physical findings were mentioned for the majority of patients seen on rounds, attending physicians infrequently verified students’ or residents’ findings, demonstrated technique, or incorporated PE into clinical decision making. We witnessed an alarming dearth of direct observation of students performing PE and almost no real-time feedback on their examination skills. Thus, students rarely had opportunities to engage in higher-order learning activities related to PE on the internal medicine rotation.

We posit that the learning environment influenced PE instruction on the internal medicine rotation. To optimize inpatient teaching of PE, attending physicians need to consider the factors we identified in Table 2. Such teaching may be effective with a more limited number of participants and without distraction from technology. Time constraints are one of the major perceived barriers to bedside teaching of PE, and our data support this concern, as teams spent an average of only 7 minutes on each bedside encounter. However, many of the strategies we observed in real-time PE instruction, such as validating learners’ findings or examining patients as a team, fit naturally into clinical routines and generally do not require extra thought or preparation.

One of the key strengths of our study is the use of direct observation of students and their teachers. This study is unique in its exclusive focus on PE and its description of factors affecting PE teaching activities on an internal medicine service. This observational, descriptive study also has obvious limitations. The study was conducted at a single institution during a limited time period. Moreover, the study period (June through August), which was chosen based on our observer’s availability, included the transition to a new academic year (July 1, 2015), when medical students and residents were becoming acclimated to their new roles. Additionally, the data were collected by a single researcher, and observer bias may affect the results of the qualitative analysis of journal entries.

In conclusion, this study highlights the infrequency of applied PE skills in the daily clinical and educational workflow of internal medicine teaching teams. These findings may reflect a more widespread problem in clinical education, and replication of our findings at other teaching centers could galvanize faculty development around bedside PE teaching.

 

 

Disclosures

Dr. Bergl has nothing to disclose. Ms. Taylor reports grant support from the Cohen Endowment for Medical Student Research at the Medical College of Wisconsin during the conduct of the study. Mrs. Klumb, Ms. Quirk, Dr. Muntz, and Dr. Fletcher have nothing to disclose.

Funding

This work was funded in part by the Cohen Endowment for Medical Student Research at the Medical College of Wisconsin.

References

1. Corbett E, Berkow R, Bernstein L, et al; on behalf of the AAMC Task Force on the Preclerkship Clinical Skills Education of Medical Students. Recommendations for clinical skills curricula for undergraduate medical education. Achieving excellence in basic clinical method through clinical skills education: the medical school clinical skills curriculum. Association of American Medical Colleges; 2008. https://www.aamc.org/download/130608/data/clinicalskills_oct09.qxd.pdf.pdf. Accessed July 12, 2017.
2. Gowda D, Blatt B, Fink MJ, Kosowicz LY, Baecker A, Silvestri RC. A core physical exam for medical students: results of a national survey. Acad Med. 2014;89(3):436-442.
3. Uchida T, Farnan JM, Schwartz JE, Heiman HL. Teaching the physical examination: a longitudinal strategy for tomorrow’s physicians. Acad Med. 2014;89(3):373-375.
4. Fazio S, De Fer T, Goroll A. Core Medicine Clerkship Curriculum Guide: A Resource for Teachers and Learners. Clerkship Directors in Internal Medicine and Society of General Internal Medicine; 2006. http://www.im.org/d/do/2285/. Accessed July 12, 2017.
5. Gonzalo J, Heist B, Duffy B, et al. Content and timing of feedback and reflection: a multi-center qualitative study of experienced bedside teachers. BMC Med Educ. 2014;14:212. doi:10.1186/1472-6920-14-212.
6. Stickrath C, Noble M, Prochazka A, et al. Attending rounds in the current era: what is and is not happening. JAMA Intern Med. 2013;173(12):1084-1089.
7. Block L, Habicht R, Wu AW, et al. In the wake of the 2003 and 2011 duty hours regulations, how do internal medicine interns spend their time? J Gen Intern Med. 2013;28(8):1042-1047.
8. Wenger N, Méan M, Castioni J, Marques-Vidal P, Waeber G, Garnier A. Allocation of internal medicine resident time in a Swiss hospital: a time and motion study of day and evening shifts. Ann Intern Med. 2017;166(8):579-586.
9. Ramani S. Twelve tips for excellent physical examination teaching. Med Teach. 2008;30(9-10):851-856.
10. Gonzalo JD, Heist BS, Duffy BL, et al. The art of bedside rounds: a multi-center qualitative study of strategies used by experienced bedside teachers. J Gen Intern Med. 2013;28(3):412-420.
11. Janicik RW, Fletcher KE. Teaching at the bedside: a new model. Med Teach. 2003;25(2):127-130.

© 2018 Society of Hospital Medicine

Correspondence: Paul A. Bergl, MD, Medical College of Wisconsin, 9200 W. Wisconsin Ave., 4th Floor, Specialty Clinics, Milwaukee, WI 53226; Telephone: 414-955-7040; Fax: 414-955-0175; E-mail: pbergl@mcw.edu

Journal of Hospital Medicine. 2018;13(6):399-402.

Intrateam coverage is common, intrateam handoffs are not

We have traditionally viewed continuity of care with a particular intern as important for high‐quality inpatient care, but this continuity is difficult to achieve. As we move to a model of team rather than individual continuity, information transfers between team members become critical.

When discontinuity between the primary team and a cross‐covering team occurs, this informational continuity is managed through formal handoffs.[1] Accordingly, there has been ample research on handoffs between different teams,[2, 3, 4, 5] but there has been little published literature to date to describe handoffs between members of the same team. Therefore, we set out (1) to learn how interns view intrateam handoffs and (2) to identify intern‐perceived problems with intrateam handoffs.

MATERIALS AND METHODS

This was a cross-sectional survey study done at a 500-bed academic medical center affiliated with a large internal medicine residency program. The survey was developed by the study team and reviewed for content and clarity by our chief residents and by 2 nationally known medical educators outside our institution. Study participants were internal medicine interns. Interns in this program rotate through 3 hospitals and do 7 to 8 ward months. The call schedules are different at each site (see Supporting Information, Appendix A, in the online version of this article). Opportunities for intrateam coverage of 1 intern by another include clinics (1/week), days off (1/week), some overnight periods, and occasional educational conferences. When possible, daily attending rounds include the entire team, but due to clinics, conferences, and days off, it is rare that the entire team is present. Bedside rounds are done at the discretion of the attending.

The survey (see Supporting Information, Appendix B, in the online version of this article) included questions regarding situations when the respondent was covering his or her cointern's patients (cointern was defined as another intern on the respondent's same inpatient ward team). We also asked about situations when a cointern was covering the respondent's patients. For those questions, we considered answers of >60% to be a majority. We distributed this anonymous survey on 2 dates (January 2012 and March 2012) during regularly scheduled conferences.

We mainly report descriptive findings. We also compared the percentage of study participants reporting problems when covering cointerns' patients to the percentage of study participants reporting problems when cointerns covered their (study participants') patients using the χ2 test, with significance set at P<0.05. This study was designated as exempt by the institutional review board.

RESULTS

Thirty‐four interns completed the survey out of a total of 44 interns present at the conferences (response rate=77%). There were 46 interns in the program, including categorical, medicine‐pediatrics, and preliminary interns. The mean age was 28 (standard deviation 2.8). Two‐thirds of respondents were female, and 65% were categorical.

Difference Between Intra‐ and Interteam Handoffs

Eighty-eight percent felt that a handoff to a cointern was different from a handoff to an overnight cross-cover intern; many interns said they assumed their cointerns had at least some knowledge of their patients and therefore put less time and detail into their handoffs. When covering for their cointern, 47% reported feeling the same amount of responsibility as for their own patients, whereas 38% of interns reported feeling much or somewhat less responsible for their cointerns' patients and the remainder (15%) felt somewhat or much more responsible.

Knowledge of Cointern's Patients

Most (65%) interns reported at least 3 days in their last inpatient ward month when they covered a cointern's patient who had not been formally handed off to them. Forty-five percent of respondents reported seldom or never receiving a written sign-out on their cointern's patients.

Respondents were asked to think about times before they had covered their cointern's patients. Sixty‐eight percent of respondents reported knowing the number 1 problem for the majority of their cointern's patients. Twenty‐four percent reported having ever actually seen the majority of their cointern's patients. Only 3% of respondents said they had ever examined the majority of their cointern's patients prior to providing coverage.

Perceived Problems With Intrateam Coverage

While covering a cointern's patients, nearly half reported missing changes in patients' exams and forgetting to order labs or imaging. More than half reported unexpected family meetings or phone calls. In contrast, respondents noted more problems when their cointern had covered for them (Table 1). Seventy‐nine percent felt that patient care was at least sometimes delayed because of incomplete knowledge due to intrateam coverage.

Table 1. Percentage of Interns Reporting Problems With Cross-Coverage by Their Cointern or While They Were Covering for Their Cointern

What Problems Have You Noticed?                        While Respondent Covers     After Respondent's Patients
                                                       a Cointern's Patient?       Were Covered by Cointern?
Missed labs                                            18%                         33%
Missed consult recommendations                         21%                         30%
Missed exam changes                                    42%                         27%
Forgot to follow up imaging                            27%                         30%
Forgot to order labs or imaging                        42%*                        70%*
Failure to adjust meds                                 27%                         27%
Unexpected family meeting/phone calls                  61%*                        30%*
Did not understand the plan from cointern's notes      45%                         27%

* P < 0.05.
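As a concrete illustration of the asterisked rows, the sketch below reruns the χ2 comparison described in the Materials and Methods for one row of Table 1. It is not the authors' code; the raw counts are assumptions back-calculated from the reported percentages and the 34 respondents.

```python
# Hedged sketch of the chi-squared comparison of two proportions behind Table 1.
# Counts are approximations inferred from the reported percentages; n = 34 is assumed.
from scipy.stats import chi2_contingency

n = 34  # survey respondents, assuming all answered both items

# Row: "Forgot to order labs or imaging"
while_covering = 14  # ~42% of 34 reported the problem while covering a cointern's patients
when_covered = 24    # ~70% of 34 reported it after their own patients were covered

table = [[while_covering, n - while_covering],
         [when_covered, n - when_covered]]

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p_value:.3f}")  # marked with * in Table 1 if P < 0.05
```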

DISCUSSION

In our program, interns commonly cover for each other. This intrateam coverage frequently occurs without a formal handoff, and interns do not always know key information about their cointern's patients. Interns reported frequent problems with intrateam coverage such as missed lab results, consult recommendations, and changes in the physical exam. These missed items could result in delayed diagnoses and delayed treatment. These problems have been identified in interteam handoffs as well.[6, 7] Even in optimized interteam handoffs, receivers fail to identify the most important piece of information for about 60% of patients,[8] and our results mirror this finding.

The finding that fewer than a quarter of the respondents have ever seen the majority of their cointerns' patients is certainly of concern. This likely arises from several inter‐related factors: reduced hours for housestaff, schedules built to accommodate the reduced hours (eg, overlapping rather than simultaneous shifts), and the choice of some attendings to not take the entire team around to see every patient. In institutions where bedside rounds as a team are the norm, this finding will be less applicable, but others across the country have noticed this trend[9, 10] and have tried to counteract it.[11] This situation has both patient care and educational implications. The main patient care implication is that the other team members may be less able to seamlessly assume care when the primary intern is away or busy. Therefore, intrateam coverage becomes much more like traditional cross‐coverage of another team's patients, during which there is no expectation that the covering person will have ever seen the patients for whom they are assuming care. The main educational implication of not seeing the cointerns' patients is that the interns are seeing only half the patients that they could otherwise see. Learning medicine is experiential, and limiting opportunities for seeing and examining patients is unwise in this era of reduced time spent in the hospital.

Limitations of this study include being conducted in a single program. It will be important for other sites to assess their own practices with respect to intrateam handoffs. Another limitation is that it was a cross‐sectional survey subject to recall bias. We may have obtained more detailed information if we had conducted interviews. We also did not quantify the frequency of missed labs, consult recommendations, and physical examination changes that occurred during intrateam coverage. Finally, we did not independently verify the problems identified by the interns.

Some possible strategies to address this issue include (1) treating intrateam handoffs like interteam handoffs by implementing a formal system, (2) better utilizing senior residents/faculty when interns are covering for each other, (3) using bedside attending rounds to increase the exposure of all team members to the team's patients, (4) block scheduling to avoid absences due to clinics,[12] and (5) better communication and teamwork training to increase team awareness of all patients.[13]

Disclosures

There was no external funding for this work. However, this material is the result of work supported with resources and the use of facilities at the Clement J. Zablocki VA Medical Center, Milwaukee, WI. This work was presented in poster format at the national Society of Hospital Medicine meeting in National Harbor, Maryland, in May 2013. The authors have no conflicts of interest to report.

References
1. Riesenberg LA, Leitzsch J, Massucci JL, et al. Residents' and attending physicians' handoffs: a systematic review of the literature. Acad Med. 2009;84(12):1775-1787.
2. Salerno SM, Arnett MV, Domanski JP. Standardized sign-out reduces intern perception of medical errors on the general internal medicine ward. Teach Learn Med. 2009;21(2):121-126.
3. Bump GM, Bost JE, Buranosky R, Elnicki M. Faculty member review and feedback using a sign-out checklist: improving intern written sign-out. Acad Med. 2012;87(8):1125-1131.
4. Petersen LA, Orav EJ, Teich JM, O'Neil AC, Brennan TA. Using a computerized sign-out program to improve continuity of inpatient care and prevent adverse events. Jt Comm J Qual Improv. 1998;24(2):77-87.
5. Horwitz LI, Krumholz HM, Green ML, Huot SJ. Transfers of patient care between house staff on internal medicine wards. Arch Intern Med. 2006;166:1173-1177.
6. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign-out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14(6):401-407.
7. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign-out for patient care. Arch Intern Med. 2008;168(16):1755-1760.
8. Chang VY, Arora VM, Lev-Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand-off communication. Pediatrics. 2010;125(3):491-496.
9. Verghese A. Culture shock—patient as icon, icon as patient. N Engl J Med. 2008;359(26):2748-2751.
10. Gonzalo JD, Masters PA, Simons RJ, Chuang CH. Attending rounds and bedside case presentations: medical student and medicine resident experiences and attitudes. Teach Learn Med. 2009;21(2):105-110.
11. Gonzalo J, Chuang C, Huang G, Smith C. The return of bedside rounds: an educational intervention. J Gen Intern Med. 2010;25(8):792-798.
12. Warm EJ, Schauer DP, Diers T, et al. The ambulatory long-block: an accreditation council for graduate medical education (ACGME) educational innovations project (EIP). J Gen Intern Med. 2008;23(7):921-926.
13. AHRQ. TeamSTEPPS: National Implementation. Available at: http://teamstepps.ahrq.gov/. Accessed June 19, 2014.

Issue
Journal of Hospital Medicine - 9(11)
Page Number
734-736
Display Headline
Intrateam coverage is common, intrateam handoffs are not
Article Source
Published 2014. This article is a US Government work and, as such, is in the public domain of the United States of America.
Correspondence Location
Address for correspondence and reprint requests: Kathlyn E. Fletcher, MD, 5000 W. National Ave., Milwaukee, WI 53295; Telephone: 414–955‐8024; Fax: 414‐955‐6689; E‐mail: kfletche@mcw.edu

Localizing General Medical Teams

Article Type
Changed
Mon, 05/22/2017 - 18:37
Display Headline
Impact of localizing general medical teams to a single nursing unit

Localizing inpatient general medical teams to nursing units has high intuitive validity for improving physician productivity, hospital efficiency, and patient outcomes. Motion, or the moving of personnel between tasks (so prominent when teams are not localized), is 1 of the 7 wastes in lean thinking.1 In a time–motion study in which hospitalists cared for patients on up to 5 different wards, O'Leary et al2 reported that large parts of hospitalists' workdays were spent on indirect patient care (69%), paging (13%), and travel (3%). Localization could increase the amount of time available for direct patient care, decrease time spent on (and interruptions due to) paging, and decrease travel time, all leading to greater productivity.

O'Leary et al3 have also reported the beneficial effects of localization of medical inpatients on communication between nurses and physicians, who could identify each other more often, and reported greater communication (specifically face‐to‐face communication) with each other following localization. This improvement in communication and effective multidisciplinary rounds could lead to safer care4 and better outcomes.

Further investigations about the effect of localization are limited. Roy et al5 compared the outcomes of patients localized to 2 inpatient pods medically staffed by hospitalists and physician assistants (PAs) to geographically dispersed, but structurally different, house staff teams. They noted significantly lower costs, a slight but nonsignificant increase in length of stay, and no difference in mortality or readmissions, but it is impossible to tease out the effect of localization versus the effect of team composition. In a before‐and‐after study, Findlay et al6 reported a decrease in mortality and complication rates in clinically homogeneous surgical patients (proximal hip fractures) cared for by junior trainee physicians localized to a unit, but their experience cannot be extrapolated to the much more diverse general medical population.

In our hospital, each general medical team could admit patients dispersed over 14 different units. An internal group, commissioned to evaluate our hospitalist practice, recommended reducing this dispersal to improve physician productivity, hospital efficiency, and outcomes of care. We therefore conducted a project to evaluate the impact of localizing general medical inpatient teams to a single nursing unit.

METHODS

Setting

We conducted our project at a 490‐bed urban academic medical center in the midwestern United States where, of the 10 total general medical teams, 6 were traditional resident‐based teams and 4 consisted of a hospitalist paired with a PA (H‐PA teams). We focused our study on the 4 H‐PA teams. The hospitalists could be assigned to any H‐PA team and staffed them for 2 weeks (including weekends). The PAs were always assigned to the same team but took weekends off. An in‐house hospitalist provided overnight cross‐coverage for the H‐PA teams. Prior to our intervention, these teams could admit patients to any of the 14 nursing units at our hospital. They admitted patients from 7 AM to 3 PM, and also accepted care of patients admitted overnight after the resident teams had reached their admission limits (overflow). A Faculty Admitting Medical Officer (AMO) balanced the existing workload of the teams against the number and complexity of incoming patients to decide team assignment for the patients. The AMO was given guidelines (soft caps) to limit total admissions to H‐PA teams to 5 per team per day (3 on a weekend), and to not exceed a total patient census of 16 for an H‐PA team.

Intervention

From April 1, 2010, until July 15, 2010, we localized patients admitted to 2 of our 4 H‐PA teams to a single 32‐bed nursing unit. The patients of the other 2 H‐PA teams remained dispersed throughout the hospital.

Transition

April 1, 2010 was a scheduled switch day for the hospitalists on the H‐PA teams. We took advantage of this switch day and reassigned all patients cared for by H‐PA teams on our localized unit to the 2 localized teams. Similarly, all patients on nonlocalized units cared for by H‐PA teams were reassigned to the 2 nonlocalized teams. Patients cared for by resident teams on the localized unit who were anticipated to be discharged soon stayed until discharge; those with a longer anticipated stay were transferred to a nonlocalized unit.

Patient Assignment

The 4 H‐PA teams continued to accept patients between 7 AM and 3 PM, as well as overflow patients. Patients with sickle cell crises were admitted exclusively to the nonlocalized teams, as they were cared for on a specialized nursing unit. No other patient characteristic was used to decide team assignment.

The AMO balanced the existing workload of the teams against the number and complexity of incoming patients to decide team assignment for the patients, but if these factors were equivocal, the AMO was now asked to preferentially admit to the localized teams. The admission soft cap for the H‐PA teams remained the same (5 on weekdays and 3 on weekends). The soft cap on the total census of 16 patients for the nonlocalized teams remained, but we imposed hard caps on the total census for the localized teams. These hard caps were 16 for each localized team for the month of April (to fill a 32‐bed unit), then decreased to 12 for the month of May, as informal feedback from the teams suggested a need to decrease workload, and then rebalanced to 14 for the remaining study period.

Evaluation

Clinical Outcomes

Using both concurrent and historical controls, we evaluated the impact of localization on the following clinical outcome measures: length of stay (LOS), charges, and 30‐day readmission rates.

Inclusion Criteria

We included all patients assigned to localized and nonlocalized teams between April 1, 2010 and July 15, 2010, and discharged before July 16, 2010, in our intervention group and concurrent control group, respectively. We included all patients assigned to any of the 4 H‐PA teams between January 1, 2010 and March 31, 2010 in the historical control group.

Exclusion Criteria

From the historical control group, we excluded patients assigned to one particular H‐PA team during the period January 1, 2010 to February 28, 2010, during which the PA assigned to that team was on leave. We excluded, from all groups, patients with a diagnosis of sickle cell disease and hospitalizations that straddled the start of the intervention. Further, we excluded repeat admissions for each patient.

Data Collection

We used admission logs to determine team assignment and linked them to our hospital's discharge abstract database to get patient‐level data. We grouped the principal diagnosis International Classification of Diseases, Ninth Revision, Clinical Modification (ICD‐9‐CM) codes into clinically relevant categories using the Healthcare Cost and Utilization Project Clinical Classification Software for ICD‐9‐CM (Rockville, MD, www.hcup‐us.ahrq.gov/toolssoftware/ccs/ccs.jsp). We created comorbidity measures using Healthcare Cost and Utilization Project Comorbidity Software, version 3.4 (Rockville, MD, www.hcup‐us.ahrq.gov/toolssoftware/comorbidity/comorbidity.jsp).
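
As an illustration of this grouping step, the sketch below joins a discharge‐abstract extract to a single‐level CCS crosswalk. It is a minimal sketch only: the file names and column labels are placeholders, not the actual HCUP distribution format, and the study itself used the HCUP software tools rather than custom code.

```python
import pandas as pd

# Hypothetical file and column names; the real HCUP CCS crosswalk's headers differ.
ccs_map = pd.read_csv("ccs_single_level_dx.csv", dtype=str)       # ICD-9-CM -> CCS category
admissions = pd.read_csv("discharge_abstracts.csv", dtype={"principal_dx": str})

# HCUP crosswalks typically store ICD-9-CM codes without decimal points.
admissions["dx_nodot"] = admissions["principal_dx"].str.replace(".", "", regex=False)

grouped = admissions.merge(
    ccs_map.rename(columns={"icd9cm_code": "dx_nodot", "ccs_category": "ccs"}),
    on="dx_nodot",
    how="left",
)
print(grouped["ccs"].value_counts().head())  # most common clinical categories
```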

We calculated LOS by subtracting the admission day and time from the discharge day and time. We summed all charges accrued during the entire hospital stay, but did not include professional fees. The LOS and charges included time spent and charges accrued in the intensive care unit (ICU). As ICU care was not under the control of the general medical teams and could have a significant impact on outcomes reflecting resource utilization, we compared LOS and charges only for 2 subsets of patients: patients not initially admitted to the ICU before care by medical teams, and patients never requiring ICU care. We considered any repeat hospitalization to our hospital within 30 days following a discharge to be a readmission, except those for a planned procedure or for inpatient rehabilitation. We compared readmission rates for all patients irrespective of ICU stay, as discharge planning for all patients was under the direct control of the general medical teams.
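
The sketch below shows how these derived variables can be computed from admission and discharge timestamps; the column names are assumptions, and the exclusions for planned procedures and inpatient rehabilitation would still need to be applied as described above.

```python
import pandas as pd

# Minimal sketch; column names are assumptions, not the hospital's actual schema.
df = pd.read_csv("hospitalizations.csv", parse_dates=["admit_dt", "discharge_dt"])
df = df.sort_values(["patient_id", "admit_dt"])

# LOS in days: discharge date/time minus admission date/time.
df["los_days"] = (df["discharge_dt"] - df["admit_dt"]).dt.total_seconds() / 86400

# 30-day readmission: the patient's next admission starts within 30 days of this discharge.
# (Planned procedures and inpatient rehabilitation stays would still need to be excluded.)
next_admit = df.groupby("patient_id")["admit_dt"].shift(-1)
days_to_next = (next_admit - df["discharge_dt"]).dt.days
df["readmit_30d"] = days_to_next.between(0, 30)
```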

Data Analysis

We performed unadjusted descriptive statistics using medians and interquartile ranges for continuous variables, and frequencies and percentages for categorical variables. We used chi‐square tests of association and Kruskal–Wallis analysis of variance to compare baseline characteristics of patients assigned to localized and control teams.
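
For readers reproducing these unadjusted comparisons, the following sketch applies the same two tests with SciPy; the file and column names are illustrative only, and the original analysis was done in R.

```python
import pandas as pd
from scipy import stats

# Illustrative only: "insurance", "age", and "group" are assumed column names.
df = pd.read_csv("hospitalizations.csv")  # group: historical / localized / concurrent

# Chi-square test of association for a categorical baseline characteristic.
contingency = pd.crosstab(df["insurance"], df["group"])
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)

# Kruskal-Wallis test for a continuous baseline characteristic across the 3 groups.
samples = [grp["age"].dropna() for _, grp in df.groupby("group")]
h_stat, p_kw = stats.kruskal(*samples)

print(f"Insurance: chi-square P = {p_chi2:.3f}; Age: Kruskal-Wallis P = {p_kw:.3f}")
```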

We used regression models with random effects to risk‐adjust for a wide variety of variables. We included age, gender, race, insurance, admission source, time, day of week, discharge time, and total number of comorbidities as fixed effects in all models. We then added individual comorbidity measures one by one as fixed effects, including them only if significant at P < 0.01. We always added a variable identifying the admitting physician as a random effect, to account for dependence between admissions to the same physician. We log‐transformed LOS and charges because they were extremely skewed. We analyzed readmissions after excluding patients who died. We evaluated the effect of our intervention on clinical outcomes using both historical and concurrent controls. We report P values for the overall 3‐way comparisons as well as for each of the 2‐way comparisons: intervention versus historical control and intervention versus concurrent control.
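
A rough analogue of one of these models is sketched below: a linear mixed model for log‐transformed LOS with a random intercept for the admitting physician. Variable names are assumptions, the stepwise comorbidity selection is omitted, and the original models were fit in R, so this is only an illustration of the general structure (the binary readmission outcome would instead require a mixed‐effects logistic model).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch; column names are assumptions, and the stepwise addition of
# individual comorbidity indicators (retained only if P < 0.01) is not shown.
df = pd.read_csv("analytic_dataset.csv")
df["log_los"] = np.log(df["los_days"])

model = smf.mixedlm(
    "log_los ~ age + C(gender) + C(race) + C(insurance) + C(admit_source)"
    " + C(day_of_week) + C(discharge_time_block) + n_comorbidities + C(study_group)",
    data=df,
    groups=df["admitting_physician"],  # random intercept: admitting physician
)
result = model.fit()
print(result.summary())
```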

Productivity and Workflow Measures

We also evaluated the impact of localization on the following productivity and workflow measures: number of pages received, number of patient encounters, relative value units (RVUs) generated, and steps walked by PAs.

Data Collection

We queried our in‐house paging systems for the number of pages received by intervention and concurrent control teams between 7 AM and 6 PM (usual workday). We queried our professional billing data to determine the number of encounters per day and RVUs generated by the intervention, as well as historical and concurrent control teams, as a measure of productivity.

During the last 15 days of our intervention (July 1–July 15, 2010), we received 4 pedometers and asked the PAs to record the number of steps taken during their workday. We chose PAs, rather than physicians, because the PAs had purely clinical duties and their walking activity would therefore reflect activity for solely clinical purposes.

Data Analysis

For productivity and workflow measures, we adjusted for the day of the week and used random effects models to adjust for clustering of data by physician and physician assistant.
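
A parallel sketch for these measures, again with hypothetical column names, could adjust daily encounter counts for day of week with a random intercept per physician:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical daily-level extract: one row per team-day with its workload measures.
daily = pd.read_csv("daily_workflow.csv")

model = smf.mixedlm(
    "encounters ~ C(day_of_week) + C(study_group)",
    data=daily,
    groups=daily["physician"],  # random intercept: clustering by physician
)
print(model.fit().summary())
```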

Statistical Software

We performed the statistical analysis using R software, version 2.9.0 (The R Project for Statistical Computing, Vienna, Austria, http://www.R‐project.org).

Ethical Concerns

The study protocol was approved by our institutional review board.

RESULTS

Study Population

There were 2431 hospitalizations to the 4 H‐PA teams during the study period. Data from 37 hospitalizations were excluded because of missing data. After applying all exclusion criteria, our final study sample consisted of a total of 1826 first hospitalizations: 783 historical controls, 478 concurrent controls, and 565 localized patients.

Patients in the control groups and intervention group were similar in age, gender, race, and insurance status. Patients in the intervention group were more likely to be admitted over the weekend, but had similar probability of being discharged over the weekend or having had an ICU stay. Historical controls were admitted more often between 6 AM and 12 noon, while during the intervention period, patients were more likely to be admitted between midnight and 6 AM. The discharge time was similar across all groups. The 5 most common diagnoses were similar across the groups (Table 1).

Characteristics of Patients Admitted to Localized Teams and Control Groups
| Characteristic | Historical Control | Intervention (Localized Teams) | Concurrent Control | P Value |
| Patients, n | 783 | 565 | 478 | |
| Age, median (IQR) | 57 (45–75) | 57 (45–73) | 56 (44–70) | 0.186 |
| Age groups, n (%) | | | | 0.145 |
|   <30 | 65 (8.3) | 37 (6.6) | 46 (9.6) | |
|   30–39 | 76 (9.7) | 62 (11.0) | 47 (9.8) | |
|   40–49 | 114 (14.6) | 85 (15.0) | 68 (14.2) | |
|   50–59 | 162 (20.7) | 124 (22.0) | 118 (24.7) | |
|   60–69 | 119 (15.2) | 84 (14.9) | 76 (16.0) | |
|   70–79 | 100 (12.8) | 62 (11.0) | 58 (12.1) | |
|   80–89 | 113 (14.4) | 95 (16.8) | 51 (10.7) | |
|   >89 | 34 (4.3) | 16 (2.88) | 14 (2.9) | |
| Female gender, n (%) | 434 (55.4) | 327 (57.9) | 264 (55.2) | 0.602 |
| Race: Black, n (%) | 285 (36.4) | 229 (40.5) | 200 (41.8) | 0.111 |
| Observation status, n (%) | 165 (21.1) | 108 (19.1) | 108 (22.6) | 0.380 |
| Insurance, n (%) | | | | 0.225 |
|   Commercial | 171 (21.8) | 101 (17.9) | 101 (21.1) | |
|   Medicare | 376 (48.0) | 278 (49.2) | 218 (45.6) | |
|   Medicaid | 179 (22.8) | 126 (22.3) | 117 (24.5) | |
|   Uninsured | 54 (7.3) | 60 (10.6) | 42 (8.8) | |
| Weekend admission, n (%) | 137 (17.5) | 116 (20.5) | 65 (13.6) | 0.013 |
| Weekend discharge, n (%) | 132 (16.9) | 107 (18.9) | 91 (19.0) | 0.505 |
| Source of admission: ED, n (%) | 654 (83.5) | 450 (79.7) | 370 (77.4) | 0.022 |
| No ICU stay, n (%) | 600 (76.6) | 440 (77.9) | 383 (80.1) | 0.348 |
| Admission time, n (%) | | | | 0.007 |
|   0000–0559 | 239 (30.5) | 208 (36.8) | 172 (36.0) | |
|   0600–1159 | 296 (37.8) | 157 (27.8) | 154 (32.2) | |
|   1200–1759 | 183 (23.4) | 147 (26.0) | 105 (22.0) | |
|   1800–2359 | 65 (8.3) | 53 (9.4) | 47 (9.8) | |
| Discharge time, n (%) | | | | 0.658 |
|   0000–1159 | 67 (8.6) | 45 (8.0) | 43 (9.0) | |
|   1200–1759 | 590 (75.4) | 417 (73.8) | 364 (76.2) | |
|   1800–2359 | 126 (16.1) | 103 (18.2) | 71 (14.9) | |
| Inpatient deaths, n | 13 | 13 | 6 | |
| Top 5 primary diagnoses (%) | | | | n/a |
|   1 | Chest pain (11.5) | Chest pain (13.3) | Chest pain (11.9) | |
|   2 | Septicemia (6.4) | Septicemia (5.1) | Septicemia (3.8) | |
|   3 | Diabetes w/cm (4.6) | Pneumonia (4.9) | Diabetes w/cm (3.3) | |
|   4 | Pneumonia (2.8) | Diabetes w/cm (4.1) | Pneumonia (3.3) | |
|   5 | UTI (2.7) | COPD (3.2) | UTI (2.9) | |
Abbreviations: COPD, chronic obstructive pulmonary disease; ED, emergency department; ICU, intensive care unit; IQR, interquartile range; n, number; n/a, not applicable; UTI, urinary tract infection; w/cm, with complications.

Clinical Outcomes

Unadjusted Analyses

The risk of 30‐day readmission was no different between the intervention and control groups. In patients without an initial ICU stay, and without any ICU stay, charges incurred and LOS were no different between the intervention and control groups (Table 2).

Unadjusted Comparisons of Clinical Outcomes Between Localized Teams and Control Groups
| Outcome | Historical Control | Intervention (Localized Teams) | Concurrent Control | P Value |
| 30‐day readmissions, n (%) | 118 (15.3) | 69 (12.5) | 66 (14.0) | 0.346 |
| Charges, median (IQR), $: excluding patients initially admitted to ICU | 9346 (6216–14,520) | 9724 (6657–15,390) | 9902 (6611–15,670) | 0.393 |
| Charges, median (IQR), $: excluding all patients with an ICU stay | 9270 (6187–13,990) | 9509 (6601–14,940) | 9846 (6580–15,400) | 0.283 |
| Length of stay, median (IQR), days: excluding patients initially admitted to ICU | 1.81 (1.22–3.35) | 2.16 (1.21–4.02) | 1.89 (1.19–3.50) | 0.214 |
| Length of stay, median (IQR), days: excluding all patients with an ICU stay | 1.75 (1.20–3.26) | 2.12 (1.20–3.74) | 1.84 (1.19–3.42) | 0.236 |
Abbreviations: ICU, intensive care unit; IQR, interquartile range; n, number; $, United States dollars.

Adjusted Analysis

The risk of 30‐day readmission was no different between the intervention and control groups. In patients without an initial ICU stay, and without any ICU stay, charges incurred were no different between the intervention and control groups; LOS was about 11% higher in the localized group as compared to historical controls, and about 9% higher as compared to the concurrent control group. The difference in LOS was not statistically significant on an overall 3‐way comparison (Table 3).

Adjusted Comparisons of Clinical Outcomes Between Localized Teams and Control Groups
| Outcome (localized teams in comparison to) | Historical Control | Concurrent Control | Overall P Value |
| 30‐day risk of readmission, OR (CI) | 0.85 (0.61–1.19); P = 0.351 | 0.94 (0.65–1.37); P = 0.751 | 0.630 |
| Charges, excluding patients initially admitted to ICU, % change (CI) | 2% higher (6% lower to 11% higher); P = 0.572 | 4% lower (12% lower to 5% higher); P = 0.427 | 0.367 |
| Charges, excluding all patients with an ICU stay, % change (CI) | 2% higher (6% lower to 10% higher); P = 0.695 | 5% lower (13% lower to 4% higher); P = 0.261 | 0.314 |
| Length of stay, excluding patients initially admitted to ICU, % change (CI) | 11% higher (1% to 22% higher); P = 0.038 | 9% higher (3% lower to 21% higher); P = 0.138 | 0.105 |
| Length of stay, excluding all patients with an ICU stay, % change (CI) | 10% higher (0% to 22% higher); P = 0.047 | 8% higher (3% lower to 20% higher); P = 0.171 | 0.133 |
Abbreviations: CI, confidence interval; ICU, intensive care unit; OR, odds ratio.

Productivity and Workflow Measures

Unadjusted Analyses

The localized teams received fewer pages as compared to concurrently nonlocalized teams. Localized teams had more patient encounters per day and generated more RVUs per day as compared to both historical and concurrent control groups. Physician assistants on localized teams took fewer steps during their work day (Table 4).

Unadjusted Comparisons of Productivity and Workflow Measures Between Localized Teams and Control Groups
| Measure | Historical Control | Intervention (Localized Teams) | Concurrent Control | P Value |
| Pages received/day (7 AM–6 PM), median (IQR) | No data | 15 (9–21) | 28 (12.5–40) | <0.001 |
| Total encounters/day, median (IQR) | 10 (8–13) | 12 (10–13) | 11 (9–13) | <0.001 |
| RVU/day, mean (SD) | 19.9 (6.76) | 22.6 (5.6) | 21.2 (6.7) | <0.001 |
| Steps/day, median (IQR) | No data | 4661 (3922–5166) | 5554 (5060–6544) | <0.001 |
Abbreviations: IQR, interquartile range; RVU, relative value unit; SD, standard deviation.

Adjusted Analysis

On adjusting for clustering by physician and day of week, the significant differences in pages received, total patient encounters, and RVUs generated persisted, while the difference in steps walked by PAs was attenuated to a statistically nonsignificant level (Table 5). The increase in RVU productivity was sustained through various periods of hard caps (data not shown).

Adjusted Comparisons of Productivity and Workflow Outcomes Between Localized Teams and Control Groups
| Measure (localized teams in comparison to) | Historical Control | Concurrent Control | Overall P Value |
| Pages received (7 AM–6 PM), % (CI) | No data | 51% fewer (48–54); P < 0.001 | |
| Total encounters, N (CI) | 0.89 more (0.37–1.41); P < 0.001 | 1.02 more (0.46–1.58); P < 0.001 | <0.001 |
| RVU/day, N (CI) | 2.20 more (1.10–3.29); P < 0.001 | 1.36 more (0.17–2.55); P = 0.024 | <0.001 |
| Steps/day, N (CI) | No data | 1186 fewer (791 more to 3164 fewer); P = 0.240 | |
Abbreviations: CI, 95% confidence interval; N, number; RVU, relative value units.

DISCUSSION

We found that general medical patients admitted to H‐PA teams and localized to a single nursing unit had similar risk of 30‐day readmission and charges, but may have had a higher length of stay compared to historical and concurrent controls. The localized teams received far fewer pages, had more patient encounters, generated more RVUs, and walked less during their work day. Taken together, these findings imply that in our study, localization led to greater team productivity and a possible decrease in hospital efficiency, with no significant impact on readmissions or charges incurred.

The higher productivity was likely mediated by the preferential assignments of more patients to the localized teams, and improvements in workflow (such as fewer pages and fewer steps walked), which allowed them to provide more care with the same resources as the control teams. Kocher and Sahni7 recently pointed out that the healthcare sector has experienced no gains in labor productivity in the past 20 years. Our intervention fits their prescription for redesigning healthcare delivery models to achieve higher productivity.

The possibility of a higher LOS associated with localization was a counterintuitive finding, and similar to that reported by Roy et al.5 We propose 3 hypotheses to explain this:

  • Selection bias: Higher workload of the localized teams led to compromised efficiency and a higher length of stay (eg, localized teams had fewer observation admissions, more hospitalizations with an ICU stay, and the AMO was asked to preferentially admit patients to localized teams).

  • Localization provided teams the opportunity to spend more time with their patients (by decreasing nonvalue‐added tasks) and to consequently address more issues before transitioning to outpatient care, or to provide higher quality of care.

  • Gaming: By having a hard cap on total number of occupied beds, we provided a perverse incentive to the localized teams to retain patients longer to keep assigned beds occupied, thereby delaying new admissions to avoid higher workload.

Our study cannot tell us which of these hypotheses represents the dominant phenomenon that led to this surprising finding. Hypothesis 3 is most worrying, and we suggest that others looking to localize their medical teams consider the possibility of unintended perverse incentives.

Differences were more pronounced between the historical control group and the intervention group than between the intervention group and concurrent controls. This may have occurred because sequestering 1 unit for the intervention teams reduced the number of units the concurrent control teams had to travel to, partially contaminating the concurrent control group.

Our report has limitations. It is a nonrandomized, quasi‐experimental investigation using a single institution's administrative databases. Our intervention was small in scale (localizing 2 out of 10 general medical teams on 1 out of 14 nursing units). What impact a wider implementation of localization may have on emergency department throughput and hospital occupancy remains to be studied. Nevertheless, our research is the first report, to our knowledge, investigating a wide variety of outcomes of localizing inpatient medical teams, and adds significantly to the limited research on this topic. It also provides significant operational details for other institutions to use when localizing medical teams.

We conclude that our intervention of localization of medical teams to a single nursing unit led to higher productivity and better workflow, but did not impact readmissions or charges incurred. We caution others designing similar localization interventions to protect against possible perverse incentives for inefficient care.

Acknowledgements

Disclosure: Nothing to report.

References
  1. Bush RW. Reducing waste in US health care systems. JAMA. 2007;297(8):871–874.
  2. O'Leary KJ, Liebovitz DM, Baker DW. How hospitalists spend their time: insights on efficiency and safety. J Hosp Med. 2006;1(2):88–93.
  3. O'Leary K, Wayne D, Landler M, et al. Impact of localizing physicians to hospital units on nurse–physician communication and agreement on the plan of care. J Gen Intern Med. 2009;24(11):1223–1227.
  4. O'Leary KJ, Buck R, Fligiel HM, et al. Structured interdisciplinary rounds in a medical teaching unit: improving patient safety. Arch Intern Med. 2011;171(7):678–684.
  5. Roy CL, Liang CL, Lund M, et al. Implementation of a physician assistant/hospitalist service in an academic medical center: impact on efficiency and patient outcomes. J Hosp Med. 2008;3(5):361–368.
  6. Findlay JM, Keogh MJ, Boulton C, Forward DP, Moran CG. Ward‐based rather than team‐based junior surgical doctors reduce mortality for patients with a fracture of the proximal femur: results from a two‐year observational study. J Bone Joint Surg Br. 2011;93‐B(3):393–398.
  7. Kocher R, Sahni NR. Rethinking health care labor. N Engl J Med. 2011;365(15):1370–1372.
Issue
Journal of Hospital Medicine - 7(7)
Page Number
551-556

Localizing inpatient general medical teams to nursing units has high intuitive validity for improving physician productivity, hospital efficiency, and patient outcomes. Motion or the moving of personnel between tasksso prominent if teams are not localizedis 1 of the 7 wastes in lean thinking.1 In a timemotion study, where hospitalists cared for patients on up to 5 different wards, O'Leary et al2 have reported large parts of hospitalists' workdays spent in indirect patient care (69%), paging (13%), and travel (3%). Localization could increase the amount of time available for direct patient care, decrease time spent for (and interruptions due to) paging, and decrease travel time, all leading to greater productivity.

O'Leary et al3 have also reported the beneficial effects of localization of medical inpatients on communication between nurses and physicians, who could identify each other more often, and reported greater communication (specifically face‐to‐face communication) with each other following localization. This improvement in communication and effective multidisciplinary rounds could lead to safer care4 and better outcomes.

Further investigations about the effect of localization are limited. Roy et al5 have compared the outcomes of patients localized to 2 inpatient pods medically staffed by hospitalists and physician assistants (PAs) to geographically dispersed, but structurally different, house staff teams. They noticed significantly lower costs, slight but nonsignificant increase in length of stay, and no difference in mortality or readmissions, but it is impossible to tease out the affect of localization versus the affect of team composition. In a before‐and‐after study, Findlay et al6 have reported a decrease in mortality and complication rates in clinically homogenous surgical patients (proximal hip fractures) when cared for by junior trainee physicians localized to a unit, but their experience cannot be extrapolated to the much more diverse general medical population.

In our hospital, each general medical team could admit patients dispersed over 14 different units. An internal group, commissioned to evaluate our hospitalist practice, recommended reducing this dispersal to improve physician productivity, hospital efficiency, and outcomes of care. We therefore conducted a project to evaluate the impact of localizing general medical inpatient teams to a single nursing unit.

METHODS

Setting

We conducted our project at a 490 bed, urban academic medical center in the midwestern United States where of the 10 total general medical teams, 6 were traditional resident‐based teams and 4 consisted of a hospitalist paired with a PA (H‐PA teams). We focused our study on the 4 H‐PA teams. The hospitalists could be assigned to any H‐PA team and staffed them for 2 weeks (including weekends). The PAs were always assigned to the same team but took weekends off. An in‐house hospitalist provided overnight cross‐coverage for the H‐PA teams. Prior to our intervention, these teams could admit patients to any of the 14 nursing units at our hospital. They admitted patients from 7 AM to 3 PM, and also accepted care of patients admitted overnight after the resident teams had reached their admission limits (overflow). A Faculty Admitting Medical Officer (AMO) balanced the existing workload of the teams against the number and complexity of incoming patients to decide team assignment for the patients. The AMO was given guidelines (soft caps) to limit total admissions to H‐PA teams to 5 per team per day (3 on a weekend), and to not exceed a total patient census of 16 for an H‐PA team.

Intervention

Starting April 1, 2010, until July 15, 2010, we localized patients admitted to 2 of our 4 H‐PA teams on a single 32‐bed nursing unit. The patients of the other 2 H‐PA teams remained dispersed throughout the hospital.

Transition

April 1, 2010 was a scheduled switch day for the hospitalists on the H‐PA teams. We took advantage of this switch day and reassigned all patients cared for by H‐PA teams on our localized unit to the 2 localized teams. Similarly, all patients on nonlocalized units cared for by H‐PA teams were reassigned to the 2 nonlocalized teams. All patients cared for by resident teams on the localized unit, that were anticipated to be discharged soon, stayed until discharge; those that had a longer stay anticipated were transferred to a nonlocalized unit.

Patient Assignment

The 4 H‐PA teams continued to accept patients between 7 AM and 3 PM, as well as overflow patients. Patients with sickle cell crises were admitted exclusively to the nonlocalized teams, as they were cared for on a specialized nursing unit. No other patient characteristic was used to decide team assignment.

The AMO balanced the existing workload of the teams against the number and complexity of incoming patients to decide team assignment for the patients, but if these factors were equivocal, the AMO was now asked to preferentially admit to the localized teams. The admission soft cap for the H‐PA teams remained the same (5 on weekdays and 3 on weekends). The soft cap on the total census of 16 patients for the nonlocalized teams remained, but we imposed hard caps on the total census for the localized teams. These hard caps were 16 for each localized team for the month of April (to fill a 32‐bed unit), then decreased to 12 for the month of May, as informal feedback from the teams suggested a need to decrease workload, and then rebalanced to 14 for the remaining study period.

Evaluation

Clinical Outcomes

Using both concurrent and historical controls, we evaluated the impact of localization on the following clinical outcome measures: length of stay (LOS), charges, and 30‐day readmission rates.

Inclusion Criteria

We included all patients assigned to localized and nonlocalized teams between the period April 1, 2010 to July 15, 2010, and discharged before July 16, 2010, in our intervention group and concurrent control group, respectively. We included all patients assigned to any of the 4 H‐PA teams during the period January 1, 2010 and March 31, 2010 in the historical control group.

Exclusion Criteria

From the historical control group, we excluded patients assigned to one particular H‐PA team during the period January 1, 2010 to February 28, 2010, during which the PA assigned to that team was on leave. We excluded, from all groups, patients with a diagnosis of sickle cell disease and hospitalizations that straddled the start of the intervention. Further, we excluded repeat admissions for each patient.

Data Collection

We used admission logs to determine team assignment and linked them to our hospital's discharge abstract database to get patient level data. We grouped the principal diagnosis, International Classification of Diseases, Ninth Revision, Clinical Modification (ICD‐9‐CM) codes into clinically relevant categories using the Healthcare Cost and Utilization Project Clinical Classification Software for ICD‐9‐CM (Rockville, MD, www.hcup‐us.ahrq.gov/toolssoftware/ccs/ccs.jsp). We created comorbidity measures using Healthcare Cost and Utilization Project Comorbidity Software, version 3.4 (Rockville, MD, www.hcup‐us.ahrq.gov/toolssoftware/comorbidity/comorbidity.jsp).

We calculated LOS by subtracting the discharge day and time from the admission day and time. We summed all charges accrued during the entire hospital stay, but did not include professional fees. The LOS and charges included time spent and charges accrued in the intensive care unit (ICU). As ICU care was not under the control of the general medical teams and could have a significant impact on outcomes reflecting resource utilization, we compared LOS and charges only for 2 subsets of patients: patients not initially admitted to ICU before care by medical teams, and patients never requiring ICU care. We considered any repeat hospitalization to our hospital within 30 days following a discharge to be a readmission, except those for a planned procedure or for inpatient rehabilitation. We compared readmission rates for all patients irrespective of ICU stay, as discharge planning for all patients was under the direct control of the general medical teams.

Data Analysis

We performed unadjusted descriptive statistics using medians and interquartile ranges for continuous variables, and frequencies and percentages for categorical variables. We used chi‐square tests of association, and KruskalWallis analysis of variance, to compare baseline characteristics of patients assigned to localized and control teams.

We used regression models with random effects to risk adjust for a wide variety of variables. We included age, gender, race, insurance, admission source, time, day of week, discharge time, and total number of comorbidities as fixed effects in all models. We then added individual comorbidity measures one by one as fixed effects, including them only if significant at P < 0.01. We always added a variable identifying the admitting physician as a random effect, to account for dependence between admissions to the same physician. We log transformed LOS and charges because they were extremely skewed in nature. We analyzed readmissions after excluding patients who died. We evaluated the affect of our intervention on clinical outcomes using both historical and concurrent controls. We report P values for both overall 3‐way comparisons, as well as each of the 2‐way comparisonsintervention versus historical control and intervention versus concurrent control.

Productivity and Workflow Measures

We also evaluated the impact of localization on the following productivity and workflow measures: number of pages received, number of patient encounters, relative value units (RVUs) generated, and steps walked by PAs.

Data Collection

We queried our in-house paging systems for the number of pages received by the intervention and concurrent control teams between 7 AM and 6 PM (the usual workday). We queried our professional billing data to determine the number of encounters per day and the RVUs generated by the intervention, historical control, and concurrent control teams as measures of productivity.

During the last 15 days of our intervention (July 1 to July 15, 2010), we obtained 4 pedometers and asked the PAs to record the number of steps they took during their workday. We chose PAs, rather than physicians, because the PAs had purely clinical duties, so their walking activity would reflect activity for solely clinical purposes.

Data Analysis

For productivity and workflow measures, we adjusted for the day of the week and used random effects models to adjust for clustering of data by physician and physician assistant.

Statistical Software

We performed the statistical analysis using R software, version 2.9.0 (The R Project for Statistical Computing, Vienna, Austria, http://www.R-project.org).

Ethical Concerns

The study protocol was approved by our institutional review board.

RESULTS

Study Population

There were 2431 hospitalizations to the 4 H-PA teams during the study period. Data from 37 hospitalizations were excluded because of missing data. After applying all exclusion criteria, our final study sample consisted of 1826 first hospitalizations: 783 historical controls, 478 concurrent controls, and 565 localized patients.

Patients in the control groups and intervention group were similar in age, gender, race, and insurance status. Patients in the intervention group were more likely to be admitted over the weekend, but had similar probability of being discharged over the weekend or having had an ICU stay. Historical controls were admitted more often between 6 AM and 12 noon, while during the intervention period, patients were more likely to be admitted between midnight and 6 AM. The discharge time was similar across all groups. The 5 most common diagnoses were similar across the groups (Table 1).

Characteristics of Patients Admitted to Localized Teams and Control Groups
Characteristic | Historical Control | Intervention (Localized Teams) | Concurrent Control | P Value
  • Abbreviations: COPD, chronic obstructive pulmonary disease; ED, emergency department; ICU, intensive care unit; IQR, interquartile range; n, number; n/a, not applicable; UTI, urinary tract infection; w/cm, with complications.

Patients, n | 783 | 565 | 478 |
Age, median (IQR) | 57 (45-75) | 57 (45-73) | 56 (44-70) | 0.186
Age groups, n (%) | | | | 0.145
  <30 | 65 (8.3) | 37 (6.6) | 46 (9.6) |
  30-39 | 76 (9.7) | 62 (11.0) | 47 (9.8) |
  40-49 | 114 (14.6) | 85 (15.0) | 68 (14.2) |
  50-59 | 162 (20.7) | 124 (22.0) | 118 (24.7) |
  60-69 | 119 (15.2) | 84 (14.9) | 76 (16.0) |
  70-79 | 100 (12.8) | 62 (11.0) | 58 (12.1) |
  80-89 | 113 (14.4) | 95 (16.8) | 51 (10.7) |
  >89 | 34 (4.3) | 16 (2.88) | 14 (2.9) |
Female gender, n (%) | 434 (55.4) | 327 (57.9) | 264 (55.2) | 0.602
Race: Black, n (%) | 285 (36.4) | 229 (40.5) | 200 (41.8) | 0.111
Observation status, n (%) | 165 (21.1) | 108 (19.1) | 108 (22.6) | 0.380
Insurance, n (%) | | | | 0.225
  Commercial | 171 (21.8) | 101 (17.9) | 101 (21.1) |
  Medicare | 376 (48.0) | 278 (49.2) | 218 (45.6) |
  Medicaid | 179 (22.8) | 126 (22.3) | 117 (24.5) |
  Uninsured | 54 (7.3) | 60 (10.6) | 42 (8.8) |
Weekend admission, n (%) | 137 (17.5) | 116 (20.5) | 65 (13.6) | 0.013
Weekend discharge, n (%) | 132 (16.9) | 107 (18.9) | 91 (19.0) | 0.505
Source of admission: ED, n (%) | 654 (83.5) | 450 (79.7) | 370 (77.4) | 0.022
No ICU stay, n (%) | 600 (76.6) | 440 (77.9) | 383 (80.1) | 0.348
Admission time, n (%) | | | | 0.007
  0000-0559 | 239 (30.5) | 208 (36.8) | 172 (36.0) |
  0600-1159 | 296 (37.8) | 157 (27.8) | 154 (32.2) |
  1200-1759 | 183 (23.4) | 147 (26.0) | 105 (22.0) |
  1800-2359 | 65 (8.3) | 53 (9.4) | 47 (9.8) |
Discharge time, n (%) | | | | 0.658
  0000-1159 | 67 (8.6) | 45 (8.0) | 43 (9.0) |
  1200-1759 | 590 (75.4) | 417 (73.8) | 364 (76.2) |
  1800-2359 | 126 (16.1) | 103 (18.2) | 71 (14.9) |
Inpatient deaths, n | 13 | 13 | 6 |
Top 5 primary diagnoses (%) | | | | n/a
  1 | Chest pain (11.5) | Chest pain (13.3) | Chest pain (11.9) |
  2 | Septicemia (6.4) | Septicemia (5.1) | Septicemia (3.8) |
  3 | Diabetes w/cm (4.6) | Pneumonia (4.9) | Diabetes w/cm (3.3) |
  4 | Pneumonia (2.8) | Diabetes w/cm (4.1) | Pneumonia (3.3) |
  5 | UTI (2.7) | COPD (3.2) | UTI (2.9) |

Clinical Outcomes

Unadjusted Analyses

The risk of 30‐day readmission was no different between the intervention and control groups. In patients without an initial ICU stay, and without any ICU stay, charges incurred and LOS were no different between the intervention and control groups (Table 2).

Unadjusted Comparisons of Clinical Outcomes Between Localized Teams and Control Groups
Outcome | Historical Control | Intervention (Localized Teams) | Concurrent Control | P Value
  • Abbreviations: ICU, intensive care unit; IQR, interquartile range; n, number; $, United States dollars.

30-day readmissions, n (%) | 118 (15.3) | 69 (12.5) | 66 (14.0) | 0.346
Charges excluding patients initially admitted to ICU, median (IQR) in $ | 9346 (6216-14,520) | 9724 (6657-15,390) | 9902 (6611-15,670) | 0.393
Charges excluding all patients with an ICU stay, median (IQR) in $ | 9270 (6187-13,990) | 9509 (6601-14,940) | 9846 (6580-15,400) | 0.283
Length of stay excluding patients initially admitted to ICU, median (IQR) in days | 1.81 (1.22-3.35) | 2.16 (1.21-4.02) | 1.89 (1.19-3.50) | 0.214
Length of stay excluding all patients with an ICU stay, median (IQR) in days | 1.75 (1.20-3.26) | 2.12 (1.20-3.74) | 1.84 (1.19-3.42) | 0.236

Adjusted Analysis

The risk of 30‐day readmission was no different between the intervention and control groups. In patients without an initial ICU stay, and without any ICU stay, charges incurred were no different between the intervention and control groups; LOS was about 11% higher in the localized group as compared to historical controls, and about 9% higher as compared to the concurrent control group. The difference in LOS was not statistically significant on an overall 3‐way comparison (Table 3).

Adjusted Comparisons of Clinical Outcomes Between Localized Teams and Control Groups
Outcome | Localized Teams vs Historical Control | Localized Teams vs Concurrent Control | Overall P Value
  • Abbreviations: CI, confidence interval; ICU, intensive care unit; OR, odds ratio.

30-day risk of readmission, OR (CI) | 0.85 (0.61-1.19), P = 0.351 | 0.94 (0.65-1.37), P = 0.751 | 0.630
Charges excluding patients initially admitted to ICU, % change (CI) | 2% higher (6% lower to 11% higher), P = 0.572 | 4% lower (12% lower to 5% higher), P = 0.427 | 0.367
Charges excluding all patients with an ICU stay, % change (CI) | 2% higher (6% lower to 10% higher), P = 0.695 | 5% lower (13% lower to 4% higher), P = 0.261 | 0.314
Length of stay excluding patients initially admitted to ICU, % change (CI) | 11% higher (1% to 22% higher), P = 0.038 | 9% higher (3% lower to 21% higher), P = 0.138 | 0.105
Length of stay excluding all patients with an ICU stay, % change (CI) | 10% higher (0% to 22% higher), P = 0.047 | 8% higher (3% lower to 20% higher), P = 0.171 | 0.133

Productivity and Workflow Measures

Unadjusted Analyses

The localized teams received fewer pages as compared to concurrently nonlocalized teams. Localized teams had more patient encounters per day and generated more RVUs per day as compared to both historical and concurrent control groups. Physician assistants on localized teams took fewer steps during their work day (Table 4).

Unadjusted Comparisons of Productivity and Workflow Measures Between Localized Teams and Control Groups
Measure | Historical Control | Intervention (Localized Teams) | Concurrent Control | P Value
  • Abbreviations: IQR, interquartile range; RVU, relative value unit; SD, standard deviation.

Pages received/day (7 AM-6 PM), median (IQR) | No data | 15 (9-21) | 28 (12.5-40) | <0.001
Total encounters/day, median (IQR) | 10 (8-13) | 12 (10-13) | 11 (9-13) | <0.001
RVU/day, mean (SD) | 19.9 (6.76) | 22.6 (5.6) | 21.2 (6.7) | <0.001
Steps/day, median (IQR) | No data | 4661 (3922-5166) | 5554 (5060-6544) | <0.001

Adjusted Analysis

On adjusting for clustering by physician and day of week, the significant differences in pages received, total patient encounters, and RVUs generated persisted, while the difference in steps walked by PAs was attenuated to a statistically nonsignificant level (Table 5). The increase in RVU productivity was sustained through various periods of hard caps (data not shown).

Adjusted Comparisons of Productivity and Workflow Outcomes Between Localized Teams and Control Groups
Measure | Localized Teams vs Historical Control | Localized Teams vs Concurrent Control | Overall P Value
  • Abbreviations: CI, 95% confidence interval; N, number; RVU, relative value unit.

Pages received (7 AM-6 PM), % (CI) | No data | 51% fewer (48-54), P < 0.001 |
Total encounters, N (CI) | 0.89 more (0.37-1.41), P < 0.001 | 1.02 more (0.46-1.58), P < 0.001 | P < 0.001
RVU/day, N (CI) | 2.20 more (1.10-3.29), P < 0.001 | 1.36 more (0.17-2.55), P = 0.024 | P < 0.001
Steps/day, N (CI) | No data | 1186 fewer (791 more to 3164 fewer), P = 0.240 |

DISCUSSION

We found that general medical patients admitted to H-PA teams and localized to a single nursing unit had similar risk of 30-day readmission and similar charges, but may have had a longer length of stay than historical and concurrent controls. The localized teams received far fewer pages, had more patient encounters, generated more RVUs, and their PAs walked less during the workday. Taken together, these findings imply that in our study, localization led to greater team productivity and a possible decrease in hospital efficiency, with no significant impact on readmissions or charges incurred.

The higher productivity was likely mediated by the preferential assignment of more patients to the localized teams and by improvements in workflow (such as fewer pages and fewer steps walked), which allowed the localized teams to provide more care with the same resources as the control teams. Kocher and Sahni7 recently pointed out that the healthcare sector has experienced no gains in labor productivity in the past 20 years. Our intervention fits their prescription for redesigning healthcare delivery models to achieve higher productivity.

The possibility of a higher LOS associated with localization was a counterintuitive finding, and similar to that reported by Roy et al.5 We propose 3 hypotheses to explain this:

  • Selection bias: Higher workload of the localized teams led to compromised efficiency and a higher length of stay (eg, localized teams had fewer observation admissions, more hospitalizations with an ICU stay, and the AMO was asked to preferentially admit patients to localized teams).

  • Localization provided teams the opportunity to spend more time with their patients (by decreasing nonvalue‐added tasks) and to consequently address more issues before transitioning to outpatient care, or to provide higher quality of care.

  • Gaming: By having a hard cap on total number of occupied beds, we provided a perverse incentive to the localized teams to retain patients longer to keep assigned beds occupied, thereby delaying new admissions to avoid higher workload.

 

Our study cannot tell us which of these hypotheses represents the dominant phenomenon that led to this surprising finding. Hypothesis 3 is most worrying, and we suggest that others looking to localize their medical teams consider the possibility of unintended perverse incentives.

Differences were more pronounced between the historical control group and the intervention group than between the intervention group and concurrent controls. This may reflect contamination of the concurrent control group: by sequestering 1 unit for the intervention teams, we also reduced the number of units across which the nonlocalized teams' patients were dispersed.

Our report has limitations. It is a nonrandomized, quasi‐experimental investigation using a single institution's administrative databases. Our intervention was small in scale (localizing 2 out of 10 general medical teams on 1 out of 14 nursing units). What impact a wider implementation of localization may have on emergency department throughput and hospital occupancy remains to be studied. Nevertheless, our research is the first report, to our knowledge, investigating a wide variety of outcomes of localizing inpatient medical teams, and adds significantly to the limited research on this topic. It also provides significant operational details for other institutions to use when localizing medical teams.

We conclude that our intervention of localization of medical teams to a single nursing unit led to higher productivity and better workflow, but did not impact readmissions or charges incurred. We caution others designing similar localization interventions to protect against possible perverse incentives for inefficient care.

Acknowledgements

Disclosure: Nothing to report.


References
  1. Bush RW. Reducing waste in US health care systems. JAMA. 2007;297(8):871-874.
  2. O'Leary KJ, Liebovitz DM, Baker DW. How hospitalists spend their time: insights on efficiency and safety. J Hosp Med. 2006;1(2):88-93.
  3. O'Leary K, Wayne D, Landler M, et al. Impact of localizing physicians to hospital units on nurse–physician communication and agreement on the plan of care. J Gen Intern Med. 2009;24(11):1223-1227.
  4. O'Leary KJ, Buck R, Fligiel HM, et al. Structured interdisciplinary rounds in a medical teaching unit: improving patient safety. Arch Intern Med. 2011;171(7):678-684.
  5. Roy CL, Liang CL, Lund M, et al. Implementation of a physician assistant/hospitalist service in an academic medical center: impact on efficiency and patient outcomes. J Hosp Med. 2008;3(5):361-368.
  6. Findlay JM, Keogh MJ, Boulton C, Forward DP, Moran CG. Ward-based rather than team-based junior surgical doctors reduce mortality for patients with a fracture of the proximal femur: results from a two-year observational study. J Bone Joint Surg Br. 2011;93-B(3):393-398.
  7. Kocher R, Sahni NR. Rethinking health care labor. N Engl J Med. 2011;365(15):1370-1372.
Issue
Journal of Hospital Medicine - 7(7)
Page Number
551-556
Display Headline
Impact of localizing general medical teams to a single nursing unit
Article Source
Copyright © 2012 Society of Hospital Medicine
Correspondence Location
Medical College Physicians, The Medical College of Wisconsin, 9200 West Wisconsin Ave, Milwaukee, WI 53226

Effort of Inpatient Work

Article Type
Changed
Mon, 05/22/2017 - 18:50
Display Headline
Defining and measuring the effort needed for inpatient medicine work

In internal medicine residency training, the most commonly used metric for measuring workload of physicians is the number of patients being followed or the number being admitted. There are data to support the importance of these census numbers. One study conducted at an academic medical center demonstrated that for patients admitted to medical services, the number of patients admitted on a call night was positively associated with mortality, even after adjustment in multivariable models.1

The problem with a census is that it is only a rough indicator of the amount of work that a given intern or resident will have. In a focus group study that our group conducted with internal medicine residents, workload was identified as a major factor contributing to patient care mistakes.2 In describing workload, residents cited not only census but also patient complexity as contributing factors.

A more comprehensive method than relying on census data has been used in anesthesia.3, 4 In 2 studies, anesthesiologists were asked to rate the effort or intensity associated with the tasks that they performed in the operating room.4, 5 In subsequent studies, this group used a trained observer to record the tasks anesthesiologists performed during a case.6, 7 Work density was calculated by multiplying the duration of each task by the previously developed task intensity score. In this way, work per unit of time can be calculated, as can a cumulative workload score over a given period.
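As a toy illustration of this work-density calculation (task durations weighted by previously derived intensity scores), with entirely made-up numbers rather than data from the cited studies:

```python
# Hypothetical observation log: (task, minutes observed, intensity score on a 1-7 scale).
tasks = [
    ("reviewing chart", 20, 3.2),
    ("placing central line", 30, 5.0),
    ("documentation", 45, 2.8),
]

total_minutes = sum(minutes for _, minutes, _ in tasks)
cumulative_workload = sum(minutes * intensity for _, minutes, intensity in tasks)
work_density = cumulative_workload / total_minutes  # workload per minute observed

print(f"cumulative workload: {cumulative_workload:.0f}, density: {work_density:.2f}")
```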

These methods provide the background for the work that we conducted in this study. The purpose of this study was to assign a task effort score to the tasks performed during periods that include admitting patients to the hospital.

METHODS

Study Site

The study was conducted at a single 500-bed Midwestern academic institution. Residents rotate through 3 hospitals (a private community hospital, a Veterans hospital, and an academic medical center) during a typical 3-year internal medicine residency program.

Study Design and Subjects

A cross-sectional survey was conducted. Subjects recruited for the survey included internal medicine interns and residents, internal medicine ward attending physicians, and hospitalists. Attending physicians had to have been on the wards within the past year. The survey was conducted in November, by which time all eligible house staff should have had at least 1 ward month. Nearly every hospitalist recruited had spent time on both teaching and nonteaching services.

Task List Compilation and Survey Development

An expert panel was convened consisting of 10 physicians representing 3 hospitals, including residents and faculty, some of whom were hospitalists. During the session, the participants developed a task list and discussed the work intensity associated with some of the tasks. The task list was reviewed by the study team and organized into categories. The final list included 99 tasks divided into 6 categories: (1) direct patient care, (2) indirect patient care, (3) searching for/finding things, (4) educational/academic activities, (5) personal/downtime activities, and (6) other. Table 1 gives examples of items found in each category. We used the terminology that the study participants used to describe their work (eg, they used the term "eyeballing a patient" to describe the process of making an initial assessment of the patient's status). This list of 99 items was formatted into a survey to allow study participants to rate each task across 3 domains: physical effort, mental effort, and psychological effort, based on previous studies in anesthesia4 (see Supporting Information). The term mental refers to cognitive effort, whereas psychological refers to emotional effort. We used the same scales with the same anchors as described in the anesthesia literature,4 but substituted the internal medicine-specific tasks. Each item was rated on a 7-point Likert-type scale (1 = almost no stress or effort; 7 = most effort). The survey also included demographic information about the respondent and instructions. The instructions directed respondents to rate each item based on their average experience in performing each task. They were further instructed not to rate tasks they had never performed.

Categories of Inpatient Internal Medicine Tasks and Examples
Category of Tasks | Examples
  • Abbreviation: H&P, history and physical.

Direct patient care | Conducting the physical examination, hand washing, putting on isolation gear
Indirect patient care | Writing H&P, writing orders, ordering additional labs or tests
Searching for/finding things | Finding a computer, finding materials for procedures, finding the patient
Personal/downtime activities | Eating dinner, sleep, socializing, calling family members
Educational/academic activities | Literature search, teaching medical students, preparing a talk
Other | Transporting patients, traveling from place to place, billing

Survey Process

The potential survey participants were notified via e-mail that they would be asked to complete the survey during a regularly scheduled meeting. The interns, residents, and faculty met during separate time slots. Data from residents and interns were obtained at teaching sessions they were required to attend (as long as their schedules permitted). Survey data for attending physicians were obtained at a general internal medicine meeting and a hospitalist meeting. Because of the type of meeting, subspecialists were less likely to have been included. The objectives of the study and its voluntary nature were presented to the groups, and the survey was given to all attendees at the meetings. Due to the anonymous nature of the survey, a waiver of written informed consent was granted. Time was reserved during the course of the meeting to complete the survey. Before distributing the survey, we counted the total number of people in the room so that a participation rate could be calculated. Respondents were instructed to place the survey in a designated envelope after completing it or to return a blank survey if they did not wish to complete it. There was no time limit for completion of the survey. At all of these sessions, the survey was one part of the meeting agenda.

Data Analysis

Surveys were entered into a Microsoft Excel (Redmond, WA) spreadsheet and then transferred into Stata version 8.0 (College Station, TX), which was used for analysis. Our analysis focused on (1) the description of the effort associated with individual tasks, (2) the description of the effort associated with task categories and comparisons across key categories, and (3) a comparison of effort across the task categories' physical, mental, and psychological domains.

Each task had 3 individual domain scores associated with it: physical, mental (ie, cognitive work), and psychological (ie, emotional work). A composite task effort score was calculated for each task by determining the mean of the 3 domain scores for that task.

An overall effort score was calculated for each of the 6 task categories by determining the mean of the composite task effort scores within each category. We used the composite effort score for each task to calculate the Cronbach's α for each category except other. We compared the overall category effort scores for direct versus indirect patient care using 2-tailed paired t tests with a significance level of P < 0.05. We further evaluated differences in overall category effort scores for direct patient care between physicians of different genders and between house staff and faculty, using 2-tailed unpaired t tests, with a significance level of P < 0.05.

Finally, we compared the physical, mental, and psychological domain scores for direct versus indirect patient care categories, using paired t tests.
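A minimal sketch of these scoring and comparison steps follows, using a fabricated ratings matrix rather than our survey data: the composite task effort score is the mean of the three domain ratings, a category score is the mean of its tasks' composite scores, Cronbach's α is computed from the composite scores of the tasks within a category, and direct versus indirect patient care is compared with a 2-tailed paired t test. The task-to-category assignment here is made up for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical ratings: rows = respondents, columns = tasks; one matrix per domain.
rng = np.random.default_rng(1)
n_resp, n_tasks = 59, 6
physical = rng.integers(1, 8, size=(n_resp, n_tasks)).astype(float)
mental = rng.integers(1, 8, size=(n_resp, n_tasks)).astype(float)
psychological = rng.integers(1, 8, size=(n_resp, n_tasks)).astype(float)

# Composite task effort: mean of the three domain ratings for each respondent-task.
composite = (physical + mental + psychological) / 3

# Category score: mean composite score over the tasks assigned to that category.
direct_idx, indirect_idx = [0, 1, 2], [3, 4, 5]   # made-up task-to-category map
direct = composite[:, direct_idx].mean(axis=1)
indirect = composite[:, indirect_idx].mean(axis=1)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print("alpha, direct-care tasks:", round(cronbach_alpha(composite[:, direct_idx]), 2))

# Paired, 2-tailed t test of direct versus indirect category scores across respondents.
t_stat, p_value = stats.ttest_rel(direct, indirect)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```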

Ethics

This study was approved by the Institutional Review Board at the Medical College of Wisconsin.

RESULTS

The study participation rate was 69% (59/85). The sample consisted of 31 (52%) women and 40 (68%) house staff (see Table 2). The mean age was 34 years. The respondents represent approximately one-third of the internal medicine house staff and a smaller percentage of the eligible faculty.

Demographics of Survey Respondents (n = 59)
Demographic | Value
  • Abbreviation: SD, standard deviation.

Age, y, mean (SD) | 34 (8.8)
Female gender, no. (%) | 31 (52)
Physician description, no. (%) |
  Intern | 7 (12)
  Resident | 33 (56)
  Hospitalist | 4 (7)
  Nonhospitalist faculty | 15 (25)

Individual Task Effort

The mean composite effort scores for all 99 tasks are provided in the Supporting Information Table. Overall, the most difficult task was going to codes (in the direct patient care category), with a mean composite rating of 5.37 (standard deviation [SD] 1.5); this was also the most difficult psychological task (5.78 [SD 1.65]). The most difficult mental task was transferring an unstable patient to the intensive care unit (5.47 [SD 1.53]). The most difficult physical task was placing a central line (5.02 [SD 1.63]). The easiest task was using the Internet (in the personal/downtime activities category), with a mean composite rating of 1.41 (SD 0.74); this was also the easiest mental (1.52 [SD 1.01]), psychological (1.3 [SD 0.68]), and physical (1.42 [SD 0.76]) task.

Analysis of Task Categories

The overall and domain characteristics of each task category are given in Table 3. Categories contained between 5 and 41 tasks. The Cronbach's α ranged from 0.83 for the personal/downtime activities category to 0.98 for the direct patient care category. The mean overall effort ranged from least difficult for the personal/downtime category (1.72 [SD 0.76]) to most difficult for the education category (3.61 [SD 1.06]).

Overall Effort Stratified by Task Category
Category | No. of Items | Cronbach's α | Composite Effort, Mean (SD)* | Physical Effort, Mean (SD)* | Mental Effort, Mean (SD)* | Psychological Effort, Mean (SD)*
  • Abbreviation: NC, not calculated.
  • *Measured on a scale of 1-7, where 1 = least effort and 7 = most effort.

Direct patient care | 32 | 0.97 | 3.55 (0.91) | 3.22 (1.06) | 3.89 (0.99) | 3.52 (1.04)
Indirect patient care | 41 | 0.98 | 3.21 (0.92) | 2.71 (1.09) | 3.80 (1.02) | 3.20 (1.08)
Education | 8 | 0.92 | 3.61 (1.06) | 3.12 (1.26) | 4.27 (1.17) | 3.43 (1.30)
Finding things | 5 | 0.85 | 2.94 (0.91) | 3.59 (1.23) | 2.43 (1.05) | 2.79 (1.13)
Personal | 7 | 0.83 | 1.72 (0.76) | 1.86 (0.92) | 1.69 (0.85) | 1.63 (0.72)
Other | 6 | NC | NC | NC | NC | NC

Using paired t tests, we determined that the direct patient care category was more difficult than the indirect patient care category overall (3.58 versus 3.21, P < 0.001). Direct patient care was also statistically significantly more challenging than indirect patient care on the physical (3.23 vs 2.71; P < 0.001), mental (3.90 vs 3.84; P < 0.05), and psychological (3.57 vs 3.20; P < 0.001) domains. There were no significant differences between men and women or between house staff and faculty in the difficulty of direct patient care. We found a trend toward increased difficulty of indirect patient care for house staff versus faculty (3.36 vs 2.92; P = 0.10), but no differences by gender.

DISCUSSION

In this study, we used a comprehensive list of tasks performed by internal medicine doctors while admitting patients and produced a numeric assessment of the effort associated with each. The list was generated by an expert panel and comprised 6 categories and 99 items. Residents and attending physicians then rated each task based on level of difficulty, specifically looking at the mental, psychological, and physical effort required by each.

Indirect patient care was the task category in our study that had the most tasks associated with it (41 out of 99). Direct patient care included 32 items, but 10 of these were procedures (eg, lumbar puncture), some of which are uncommonly performed. Several time-motion studies have been performed to document the work done by residents8-15 and hospitalists.16, 17 Although our study did not assess the time spent on each task, the distribution of tasks across categories is consistent with these time-motion studies, which show that the amount of time spent in direct patient care is a small fraction of the time spent in the hospital,12 and that work such as interprofessional communication10 and documentation16 consumes the majority of time.

This project allowed us to consider the effort required for inpatient internal medicine work on a more granular level than has been described previously. Although the difficulty of tasks associated with anesthesia and surgical work has been described,3, 4, 7, 18-20 our study is a unique contribution to the internal medicine literature. Understanding the difficulty of tasks performed by inpatient physicians is an important step toward better management of workload. With concerns about burnout in hospitalists21, 22 and residents,23-25 it seems wise to take the difficulty of the work they do into consideration in a more proactive manner. In addition, understanding workload may have patient safety applications. In one study of mistakes made by house staff, 51% of the survey respondents identified workload as a contributing factor.26

We assessed effort for inpatient work by generating a task list and then measuring 3 domains of each task: physical, mental, and psychological. As a result, we were able to further quantify the difficulty of work completed by physicians. Recent work from outside of medicine suggests that individuals have a finite capacity for mental workload, and when this is breached, decision‐making quality is impaired.27 This suggests that it is important to take work intensity into account when assigning work to individuals. For example, a detailed assessment of workload at the task level combined with the amount of time spent on each task would allow us to know how much effort is typically involved with admitting a new patient. This information would allow for more equal distribution of workload across admitting teams. In addition, these methods could be expanded to understand how much effort is involved in the discharge process. This could be taken into account at the beginning of a day when allocating work such as admissions and discharges between members of a team.

This methodology has the potential to be used in other ways to help quantify the effort required for the work that physicians do. Many departments are struggling to develop a system for giving credit to faculty for the time they spend on nonpatient care activities. Perhaps these methods could be used to develop effort scores associated with administrative tasks, and administrative relative value units could be calculated accordingly. Similar techniques have been used with educational relative value units.28

We know from the nursing literature that workload is related to both burnout and patient safety. Burnout is a process related to the emotional work of providing care to people.29 Our methods clearly incorporate the psychological stress of work into the workload assessment. Evaluating the amount of time spent on tasks with high psychological scores may be helpful in identifying work patterns that are more likely to produce burnout in physicians and nurses.

With respect to patient safety, higher patient‐to‐nurse ratios are associated with failure to rescue30 and nosocomial infections.31 Furthermore, researchers have demonstrated that systems issues can add substantially to nursing workload.32 Methods such as those described in our study take into account both patient‐related and systems‐related tasks, and therefore could result in more detailed workload assessments. With more detailed information about contributors to workload, better predictions about optimal staffing could be made, which would ultimately lead to fewer adverse patient events.

Our study has limitations. First, the initial task list was based on the compilation efforts from only 10 physicians. However, this group of physicians represented 3 hospitals and included both resident and attending physicians. Second, the survey data were gathered from a single institution. Although we included trainees and faculty, more participants would be needed to answer questions about how experience and setting/environmental factors affect these assessments. However, participants were instructed to reflect on their whole experience with each task, which presumably includes multiple institutions and training levels. Third, the sample size is fairly small, with more house staff than faculty (hospitalists and nonhospitalists) represented. Regardless, this study is the first attempt to define and quantify workload for internal medicine physicians using these methods. In future studies, we will expand the number of institutions and levels of experience to validate our current data. Finally, the difficulty of the tasks is clearly a subjective assessment. Although this methodology has face validity, further work needs to be done to validate these findings against other measurements of workload, such as census, or more general subjective workload assessments, such as the NASA task load index.33

In conclusion, we have described the tasks performed by inpatient physicians and the difficulty associated with them. Moreover, we have described a methodology that could be replicated at other centers for the purpose of validating our findings or quantifying workload of other types of tasks. We believe that this is the first step toward a more comprehensive understanding of the workload encountered by inpatient physicians. Because workload has implications for physician burnout and patient safety, it is essential that we fully understand the contributors to workload, including the innate difficulty of the tasks that comprise it.

Acknowledgements

The authors thank Alexis Visotcky, MS, and Sergey Tarima, PhD, for their assistance with statistics.

This work was presented in poster form at the Society of Hospital Medicine Annual Meeting in April 2010, the Society of General Internal Medicine Annual Meeting in May 2010, and the Society of General Internal Medicine regional meeting in September 2010.

Funding Source: The study team was supported by the following funds during this work: VA grants PPO 0925901 (Marilyn M. Schapira and Kathlyn E. Fletcher) and IIR 07201 (Marilyn M. Schapira, Siddhartha Singh, and Kathlyn E. Fletcher).

References
  1. Ong M, Bostrom A, Vidyarthi A, McCulloch C, Auerbach A. House staff team workload and organization effects on patient outcomes in an academic general internal medicine inpatient service. Arch Intern Med. 2007;167:47-52.
  2. Fletcher KE, Parekh V, Halasyamani L, et al. The work hour rules and contributors to patient care mistakes: a focus group study with internal medicine residents. J Hosp Med. 2008;3:228-237.
  3. Weinger MB, Reddy SB, Slagle JM. Multiple measures of anesthesia workload during teaching and nonteaching cases. Anesth Analg. 2004;98:1419-1425.
  4. Vredenburgh AG, Weinger MB, Williams KJ, Kalsher MJ, Macario A. Developing a technique to measure anesthesiologists' real-time workload. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2000;44:241-244.
  5. Weinger MB, Vredenburgh AG, Schumann CM, et al. Quantitative description of the workload associated with airway management procedures. J Clin Anesth. 2000;12:273-282.
  6. Weinger MB, Herndon OW, Zornow MH, Paulus MP, Gaba DM, Dallen LT. An objective methodology for task analysis and workload assessment in anesthesia providers. Anesthesiology. 1994;80:77-92.
  7. Slagle JM, Weinger MB. Effects of intraoperative reading on vigilance and workload during anesthesia care in an academic medical center. Anesthesiology. 2009;110:275-283.
  8. Brasel KJ, Pierre AL, Weigelt JA. Resident work hours: what they are really doing. Arch Surg. 2004;139:490-493; discussion 493-494.
  9. Dresselhaus TR, Luck J, Wright BC, Spragg RG, Lee ML, Bozzette SA. Analyzing the time and value of housestaff inpatient work. J Gen Intern Med. 1998;13:534-540.
  10. Westbrook JI, Ampt A, Kearney L, Rob MI. All in a day's work: an observational study to quantify how and with whom doctors on hospital wards spend their time. Med J Aust. 2008;188:506-509.
  11. Lurie N, Rank B, Parenti C, Woolley T, Snoke W. How do house officers spend their nights? A time study of internal medicine house staff on call. N Engl J Med. 1989;320:1673-1677.
  12. Tipping MD, Forth VE, Magill DB, Englert K, Williams MV. Systematic review of time studies evaluating physicians in the hospital setting. J Hosp Med. 2010;5:353-359.
  13. Guarisco S, Oddone E, Simel D. Time analysis of a general medicine service: results from a random work sampling study. J Gen Intern Med. 1994;9:272-277.
  14. Hayward RS, Rockwood K, Sheehan GJ, Bass EB. A phenomenology of scut. Ann Intern Med. 1991;115:372-376.
  15. Nerenz D, Rosman H, Newcomb C, et al. The on-call experience of interns in internal medicine. Medical Education Task Force of Henry Ford Hospital. Arch Intern Med. 1990;150:2294-2297.
  16. Tipping MD, Forth VE, O'Leary KJ, et al. Where did the day go? A time-motion study of hospitalists. J Hosp Med. 2010;5:323-328.
  17. O'Leary KJ, Liebovitz DM, Baker DW. How hospitalists spend their time: insights on efficiency and safety. J Hosp Med. 2006;1:88-93.
  18. Cao CG, Weinger MB, Slagle J, et al. Differences in day and night shift clinical performance in anesthesiology. Hum Factors. 2008;50:276-290.
  19. Slagle J, Weinger MB, Dinh MT, Brumer VV, Williams K. Assessment of the intrarater and interrater reliability of an established clinical task analysis methodology. Anesthesiology. 2002;96:1129-1139.
  20. Weinger MB, Herndon OW, Gaba DM. The effect of electronic record keeping and transesophageal echocardiography on task distribution, workload, and vigilance during cardiac anesthesia. Anesthesiology. 1997;87:144-155.
  21. Shaw G. Fight burnout while fostering experience: investing in hospitalist programs now can fight burnout later. ACP Hospitalist. July 2008.
  22. Jerrard J. Hospitalist burnout: recognize it in yourself and others, and avoid or eliminate it. The Hospitalist. March 2006.
  23. Gopal R, Glasheen JJ, Miyoshi TJ, Prochazka AV. Burnout and internal medicine resident work-hour restrictions. Arch Intern Med. 2005;165:2595-2600.
  24. Goitein L, Shanafelt TD, Wipf JE, Slatore CG, Back AL. The effects of work-hour limitations on resident well-being, patient care, and education in an internal medicine residency program. Arch Intern Med. 2005;165:2601-2606.
  25. Shanafelt TD, Bradley KA, Wipf JE, Back AL. Burnout and self-reported patient care in an internal medicine residency program. Ann Intern Med. 2002;136:358-367.
  26. Wu AW, Folkman S, McPhee SJ, Lo B. Do house officers learn from their mistakes? Qual Saf Health Care. 2003;12:221-226; discussion 227-228.
  27. Danziger S, Levav J, Avnaim-Pesso L. Extraneous factors in judicial decisions. Proc Natl Acad Sci U S A. 2011;108:6889-6892.
  28. Yeh M, Cahill D. Quantifying physician teaching productivity using clinical relative value units. J Gen Intern Med. 1999;14:617-621.
  29. Maslach C, Jackson SE. Maslach Burnout Inventory Manual. 3rd ed. Palo Alto, CA: Consulting Psychologists Press; 1986.
  30. Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288:1987-1993.
  31. Archibald LK, Manning ML, Bell LM, Banerjee S, Jarvis WR. Patient density, nurse-to-patient ratio and nosocomial infection risk in a pediatric cardiac intensive care unit. Pediatr Infect Dis J. 1997;16:1045-1048.
  32. Tucker AL, Spear SJ. Operational failures and interruptions in hospital nursing. Health Serv Res. 2006;41:643-662.
  33. Hart SG, Staveland LE. Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. In: Hancock PA, Meshkati N, eds. Human Mental Workload. Amsterdam, Netherlands: North Holland Press; 1988:239-250.



Issue
Journal of Hospital Medicine - 7(5)
Page Number
426-430
Display Headline
Defining and measuring the effort needed for inpatient medicine work
Article Source

Copyright © 2012 Society of Hospital Medicine

Correspondence Location
5000 W. National Ave., PC Division, Milwaukee, WI 53295

Trends in Inpatient Continuity of Care

Article Type
Changed
Mon, 05/22/2017 - 21:26
Display Headline
Trends in inpatient continuity of care for a cohort of Medicare patients 1996–2006

Continuity of care is considered by many physicians to be of critical importance in providing high-quality patient care. Most of the research to date has focused on continuity in outpatient primary care. Research on outpatient continuity of care has been facilitated by the fact that a number of measurement tools for outpatient continuity exist.1 Outpatient continuity of care has been linked to better quality of life scores,2 lower costs,3 and less emergency room use.4 As hospital medicine has taken on more and more of the responsibility of inpatient care, primary care doctors have voiced concerns about the impact of hospitalists on overall continuity of care5 and the quality of the doctor-patient relationship.6

Recently, continuity of care in the hospital setting has also received attention. When the Accreditation Council for Graduate Medical Education (ACGME) first proposed restrictions to resident duty hours, the importance of continuity of inpatient care began to be debated in earnest in large part because of the increase in hand-offs which accompanies discontinuity.7, 8 A recent study of hospitalist communication documented that as many as 13% of hand-offs at the time of service changes are judged as incomplete by the receiving physician. These incomplete hand-offs were more likely to be associated with uncertainty regarding the plan of care, as well as perceived near misses or adverse events.9 In addition, several case reports and studies suggest that systems with less continuity may have poorer outcomes.7, 10-15

Continuity in the hospital setting is likely to be important for several reasons. First, the acuity of a patient's problem during a hospitalization is likely greater than during an outpatient visit. Thus the complexity of information to be transferred between physicians during a hospital stay is correspondingly greater. Second, the diagnostic uncertainty surrounding many admissions leads to complex thought processes that may be difficult to recreate when handing off patient care to another physician. Finally, knowledge of a patient's hospital course and the likely trajectory of care is facilitated by firsthand knowledge of where the patient has been. All this information can be difficult to distill into a brief sign‐out to another physician who assumes care of the patient.

In the current study, we sought to examine the trends over time in continuity of inpatient care. We chose patients likely to be cared for by general internists: those hospitalized for chronic obstructive pulmonary disease (COPD), pneumonia, and congestive heart failure (CHF). The general internists caring for patients in the hospital could be the patient's primary care physician (PCP), a physician covering for the patient's PCP, a physician assigned at admission by the hospital, or a hospitalist. Our goals were to describe the current level of continuity of care in the hospital setting, to examine whether continuity has changed over time, and to determine factors affecting continuity of care.

Methods

We used a 5% national sample of claims data from Medicare beneficiaries for the years 1996-2006.16 This included Medicare enrollment files, Medicare Provider Analysis and Review (MEDPAR) files, Medicare Carrier files, and Provider of Services (POS) files.17, 18

Establishment of the Study Cohort

Hospital admissions for COPD (Diagnosis Related Group [DRG] 088), pneumonia (DRG 089, 090), and CHF (DRG 127) from 1996 to 2006 for patients older than 66 years in MEDPAR were selected (n = 781,348). We excluded admissions for patients enrolled in health maintenance organizations (HMOs) or who did not have Medicare Parts A and B for the entire year prior to admission (n = 57,558). Admissions with a length of stay >18 days (n = 10,688) were considered outliers (exceeding the 99th percentile) and were excluded. Only admissions cared for by a general internist, family physician, general practitioner, or geriatrician were included (n = 528,453).
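
To make the cohort selection concrete, the following is a minimal Python/pandas sketch of how these exclusion rules might be applied to a claims extract. It is illustrative only (the study's programming was not done in Python), and the file and column names (medpar_5pct_sample.csv, drg, hmo_enrolled, parts_ab_full_year, los_days, attending_specialty) are hypothetical.

```python
import pandas as pd

# Hypothetical MEDPAR-style admission extract; file and column names are illustrative.
admissions = pd.read_csv("medpar_5pct_sample.csv")

GENERALIST_SPECIALTIES = {"general internal medicine", "family practice",
                          "general practice", "geriatric medicine"}

cohort = admissions[
    admissions["drg"].isin([88, 89, 90, 127])            # COPD, pneumonia, CHF
    & (admissions["age"] > 66)                           # older than 66 years
    & ~admissions["hmo_enrolled"]                        # exclude HMO enrollees
    & admissions["parts_ab_full_year"]                   # Parts A and B for the prior year
    & (admissions["los_days"] <= 18)                     # drop length-of-stay outliers
    & admissions["attending_specialty"].isin(GENERALIST_SPECIALTIES)
]
print(f"{len(cohort)} admissions retained")
```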

Measures

We categorized patients by age, gender, and ethnicity using Medicare enrollment files. We used the Medicaid indicator in the Medicare file as a proxy for low socioeconomic status. We used MEDPAR files to determine the origin of the admission (via the emergency department vs other), weekend versus weekday admission, and DRG. A comorbidity score was generated with the Elixhauser comorbidity scale using inpatient and outpatient billing data.19 In analyses, we used the total number of comorbidities identified. The specialty of each physician was determined from the codes in the Medicare Carrier files. The 2004 POS files provided hospital-level information such as zip code, metropolitan size, state, total number of beds, type of hospital, and medical school affiliation. We divided metropolitan size and total number of hospital beds into quartiles. We categorized hospitals as nonprofit, for profit, or public; medical school affiliation was categorized as none, minor, or major.

Determination of Primary Care Physician (PCP)

We identified outpatient visits using American Medical Association Current Procedural Terminology (CPT) evaluation and management codes 99201 to 99205 (new patient encounters) and 99211 to 99215 (established patient encounters). Individual providers were differentiated by their Unique Provider Identification Numbers (UPINs). We defined a PCP as a general practitioner, family physician, internist, or geriatrician. Patients had to make at least 3 visits on different days to the same PCP within the year prior to the hospitalization to be categorized as having a PCP.20
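As an illustration of this PCP rule, a hedged pandas sketch follows. The claim-level columns (`upin`, `specialty`, `cpt_code`, `visit_date`) and the tie-breaking choice when more than one generalist meets the 3-visit threshold are assumptions, not the authors' implementation.

```python
from typing import Optional
import pandas as pd

PRIMARY_CARE_SPECIALTIES = {"general practice", "family practice",
                            "general internal medicine", "geriatric medicine"}
# Outpatient E&M codes: 99201-99205 (new patient), 99211-99215 (established).
EM_CODES = {f"992{n:02d}" for n in range(1, 6)} | {f"992{n:02d}" for n in range(11, 16)}

def assign_pcp(outpatient_claims: pd.DataFrame, admit_date: pd.Timestamp) -> Optional[str]:
    """Return the UPIN of one beneficiary's PCP, or None.

    A PCP is a generalist seen on at least 3 different days in the year
    before admission. `outpatient_claims` holds a single beneficiary's
    claims with hypothetical columns: upin, specialty, cpt_code, visit_date.
    """
    window = outpatient_claims[
        (outpatient_claims["visit_date"] < admit_date)
        & (outpatient_claims["visit_date"] >= admit_date - pd.DateOffset(years=1))
        & outpatient_claims["cpt_code"].isin(EM_CODES)
        & outpatient_claims["specialty"].isin(PRIMARY_CARE_SPECIALTIES)
    ]
    # Count distinct visit days per provider in the look-back window.
    visit_days = window.drop_duplicates(["upin", "visit_date"]).groupby("upin").size()
    eligible = visit_days[visit_days >= 3]
    # If several providers qualify, take the most-visited one (an assumption).
    return eligible.idxmax() if not eligible.empty else None
```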

Identification of Hospitalists Versus Other Generalist Physicians

As previously described, we defined hospitalists as general internal medicine physicians who derived at least 90% of their Medicare claims for evaluation and management services from care provided to hospitalized patients.21 Non‐hospitalist generalist physicians were those who met the criteria for generalists but derived less than 90% of their Medicare claims from inpatient care.
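A sketch of this 90% inpatient-billing rule, again with hypothetical column names (`upin`, `place_of_service`) rather than the actual Carrier-file fields:

```python
import pandas as pd

def flag_hospitalists(em_claims: pd.DataFrame, threshold: float = 0.90) -> pd.Series:
    """Flag generalists whose E&M billing is at least 90% inpatient.

    `em_claims` holds one row per evaluation and management claim by a
    generalist, with hypothetical columns: upin, place_of_service
    ('inpatient' vs anything else).
    """
    inpatient_share = (
        em_claims.assign(is_inpatient=em_claims["place_of_service"].eq("inpatient"))
        .groupby("upin")["is_inpatient"]
        .mean()                       # fraction of each physician's E&M claims that are inpatient
    )
    return inpatient_share >= threshold   # boolean Series indexed by UPIN
```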

Definition of Inpatient Continuity of Care

We measured inpatient continuity of care as the number of generalist physicians (including hospitalists) who provided care during a hospitalization, identified from all inpatient claims submitted during that hospitalization. We considered patients to have had inpatient continuity of care if all generalist billing during the entire hospitalization came from a single physician.
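This definition reduces to counting distinct generalist UPINs per admission; an illustrative sketch (the `admission_id` and `upin` columns are assumptions about how the claims would be organized):

```python
import pandas as pd

def summarize_continuity(inpatient_claims: pd.DataFrame) -> pd.DataFrame:
    """Per admission, count distinct generalists and flag continuity.

    `inpatient_claims` holds one row per generalist inpatient claim with
    hypothetical columns: admission_id, upin.
    """
    per_admission = (
        inpatient_claims.groupby("admission_id")["upin"]
        .nunique()
        .rename("n_generalists")
        .reset_index()
    )
    # Continuity = every generalist bill during the stay came from one physician.
    per_admission["continuity"] = per_admission["n_generalists"].eq(1)
    return per_admission
```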

Statistical Analyses

We calculated the percentage of admissions that received care from 1, 2, or 3 or more generalist physicians during the hospitalization, stratified by selected patient and hospital characteristics. These proportions were also stratified by whether patients were cared for by their outpatient PCP and by whether they were cared for by hospitalists. Based on who cared for the patient during the hospitalization, all admissions were classified as receiving care from: 1) non‐hospitalist generalist physicians only, 2) a combination of non‐hospitalist generalist physicians and hospitalists, or 3) hospitalists only. The effect of patient and hospital characteristics on whether a patient experienced inpatient continuity was evaluated using a hierarchical generalized linear model (HGLM) with a logistic link, adjusting for clustering of admissions within hospitals and for all covariates. We repeated our analyses using an HGLM with an ordinal logit link to explore the factors associated with the number of generalists seen in the hospital. All analyses were performed with SAS version 9.1 (SAS Institute Inc, Cary, NC); the GLIMMIX procedure was used to conduct multilevel analyses.
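The reported models were hierarchical logistic models fit with SAS PROC GLIMMIX. As a rough, non-equivalent illustration of a clustered logistic analysis of the continuity outcome, one could fit a population-averaged GEE model in Python with statsmodels; the data frame and covariate names below are hypothetical, and the exchangeable working correlation stands in for, but does not reproduce, the hospital-level random effect.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_continuity_model(df: pd.DataFrame) -> pd.Series:
    """Population-averaged logistic model of inpatient continuity.

    `df` is a hypothetical admission-level frame with a binary `continuity`
    outcome, the covariates below, and a `hospital_id` clustering variable.
    This approximates, but is not identical to, the hierarchical model
    (SAS GLIMMIX) used in the original analysis.
    """
    model = smf.gee(
        "continuity ~ admit_year + los_days + had_pcp + seen_hospitalist"
        " + C(age_group) + female + C(ethnicity) + low_ses + emergency"
        " + weekend + C(drg) + comorbidity_count + icu_use + C(region)"
        " + C(metro_size) + C(med_school_affiliation) + C(hospital_type)"
        " + C(hospital_beds)",
        groups="hospital_id",                    # admissions clustered within hospitals
        data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    result = model.fit()
    # Exponentiate coefficients to report odds ratios, as in Table 2.
    return np.exp(result.params)
```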

Results

Between 1996 and 2006, 528,453 patients hospitalized for COPD, pneumonia, or CHF received care from a generalist physician during their hospital stay. Of these, 64.3% were seen by one generalist physician, 26.9% by two, and 8.8% by three or more during the hospitalization.

Figure 1 shows the percentage of all patients seen by 1, 2, and 3 or more generalist physicians between 1996 and 2006. The percentage of patients receiving care from one generalist physician declined from 70.7% in 1996 to 59.4% in 2006 (P < 0.001). During the same period, the percentage of patients receiving care from 3 or more generalist physicians increased from 6.5% to 10.7% (P < 0.001). Similar trends were seen for each of the 3 conditions. There was a decrease in overall length of stay during this period, from a mean of 5.7 to 4.9 days (P < 0.001). The increase in the number of generalist physicians providing care during the hospital stay did not correspond to an increase in the total number of visits during the hospitalization: the average number of daily visits from a generalist physician was 0.94 (SD 0.30) in 1996 and 0.96 (SD 0.35) in 2006.

Figure 1
Percentage of patients seen by 1, 2, or 3 or more generalist physicians during a hospitalization for the years 1996–2006. P < 0.001 for Cochran‐Armitage trend test.

Table 1 presents the percentage of patients receiving care from 1, 2, and 3 or more generalist physicians during hospitalization stratified by patient and hospital characteristics. Older adults, females, non‐Hispanic whites, those with higher socioeconomic status, and those with more comorbidities were more likely to receive care by multiple generalist physicians. There was also large variation by geographic region, metropolitan area size, and hospital characteristics. All of these differences were significant at the P < 0.0001 level.

Table 1. Percentage of Patients Receiving Care From 1, 2, and 3 or More Generalist Physicians During Hospitalization for COPD, Pneumonia, and CHF, Stratified by Patient and Hospital Characteristics (N = 528,453)

Characteristic | N | 1 Generalist (%) | 2 Generalists (%) | ≥3 Generalists (%)
Age at admission: 66–74 | 152,488 | 66.4 | 25.6 | 8.0
Age at admission: 75–84 | 226,802 | 63.8 | 27.3 | 8.9
Age at admission: 85+ | 149,163 | 63.0 | 27.7 | 9.3
Gender: Male | 216,602 | 65.3 | 26.4 | 8.3
Gender: Female | 311,851 | 63.6 | 27.3 | 9.1
Ethnicity: White | 461,543 | 63.7 | 27.4 | 9.0
Ethnicity: Black | 46,960 | 68.6 | 23.8 | 7.6
Ethnicity: Other | 19,950 | 67.9 | 24.5 | 7.6
Low socioeconomic status: No | 366,392 | 63.4 | 27.5 | 9.1
Low socioeconomic status: Yes | 162,061 | 66.3 | 25.7 | 8.0
Emergency admission: No | 188,354 | 66.8 | 25.6 | 7.6
Emergency admission: Yes | 340,099 | 62.9 | 27.7 | 9.4
Weekend admission: No | 392,150 | 65.7 | 25.8 | 8.5
Weekend admission: Yes | 136,303 | 60.1 | 30.3 | 9.6
Diagnosis-related group: CHF | 213,914 | 65.0 | 26.3 | 8.7
Diagnosis-related group: Pneumonia | 195,430 | 62.5 | 28.0 | 9.5
Diagnosis-related group: COPD | 119,109 | 66.1 | 26.2 | 7.7
Had a PCP: No | 201,016 | 66.5 | 25.4 | 8.0
Had a PCP: Yes | 327,437 | 62.9 | 27.9 | 9.2
Seen hospitalist: No | 431,784 | 67.8 | 25.1 | 7.0
Seen hospitalist: Yes | 96,669 | 48.5 | 34.9 | 16.6
Charlson comorbidity score: 0 | 127,385 | 64.0 | 27.2 | 8.8
Charlson comorbidity score: 1 | 131,402 | 65.1 | 26.8 | 8.1
Charlson comorbidity score: 2 | 105,831 | 64.9 | 26.6 | 8.5
Charlson comorbidity score: ≥3 | 163,835 | 63.4 | 27.1 | 9.5
ICU use: No | 431,462 | 65.3 | 26.5 | 8.2
ICU use: Yes | 96,991 | 60.1 | 28.7 | 11.2
Length of stay, days, mean (SD) | — | 4.7 (2.9) | 5.8 (3.1) | 8.1 (3.7)
Geographic region: New England | 23,572 | 55.7 | 30.8 | 13.5
Geographic region: Middle Atlantic | 78,181 | 60.8 | 27.8 | 11.4
Geographic region: East North Central | 98,072 | 65.7 | 26.3 | 8.0
Geographic region: West North Central | 44,785 | 59.6 | 30.5 | 9.9
Geographic region: South Atlantic | 104,894 | 63.8 | 27.0 | 9.2
Geographic region: East South Central | 51,450 | 67.8 | 24.6 | 7.6
Geographic region: West South Central | 63,493 | 69.2 | 24.8 | 6.0
Geographic region: Mountain | 20,310 | 61.9 | 29.4 | 8.7
Geographic region: Pacific | 36,484 | 66.7 | 26.3 | 7.0
Size of metropolitan area*: ≥1,000,000 | 229,145 | 63.7 | 26.5 | 9.8
Size of metropolitan area*: 250,000–999,999 | 114,448 | 61.0 | 29.2 | 9.8
Size of metropolitan area*: 100,000–249,999 | 11,448 | 61.3 | 30.4 | 8.3
Size of metropolitan area*: <100,000 | 171,585 | 67.4 | 25.8 | 6.8
Medical school affiliation*: Major | 77,605 | 62.9 | 26.8 | 10.3
Medical school affiliation*: Minor | 107,144 | 61.5 | 28.4 | 10.1
Medical school affiliation*: None | 341,874 | 65.5 | 26.5 | 8.0
Type of hospital*: Nonprofit | 375,888 | 62.7 | 27.8 | 9.5
Type of hospital*: For profit | 63,898 | 67.5 | 25.5 | 7.0
Type of hospital*: Public | 86,837 | 68.9 | 24.2 | 6.9
Hospital size*: <200 beds | 232,869 | 67.2 | 25.7 | 7.1
Hospital size*: 200–349 beds | 135,954 | 62.6 | 27.9 | 9.5
Hospital size*: 350–499 beds | 77,080 | 61.1 | 28.3 | 10.6
Hospital size*: ≥500 beds | 80,723 | 61.7 | 27.6 | 10.7
Discharge location: Home | 361,893 | 66.6 | 26.0 | 7.4
Discharge location: SNF | 94,723 | 57.6 | 30.1 | 12.3
Discharge location: Rehab | 3,030 | 45.7 | 34.2 | 20.1
Discharge location: Death | 22,133 | 63.1 | 25.4 | 11.5
Discharge location: Other | 46,674 | 61.8 | 28.1 | 10.1

Abbreviations: CHF, congestive heart failure; COPD, chronic obstructive pulmonary disease; ICU, intensive care unit; PCP, primary care physician; SNF, skilled nursing facility.
* Data missing (n = 1827). Note that differences in all categories were significant at the P < 0.0001 level.

Table 2 presents the results of a multivariable analysis of factors independently associated with experiencing continuity of care. In this analysis, continuity of care was defined as receiving inpatient care from one generalist physician (vs two or more). In the unadjusted model, the odds of experiencing continuity of care decreased by 5.5% per year from 1996 through 2006, and this decrease did not substantially change after adjusting for all other variables (4.8% yearly decrease). Younger patients, females, black patients, and those with low socioeconomic status were slightly more likely to experience continuity of care. As expected, patients admitted on weekends, those admitted emergently, and those with intensive care unit (ICU) stays were less likely to experience continuity. There were marked geographic variations, with continuity approximately half as likely in New England as in the South. Continuity was greatest in smaller metropolitan areas compared with rural and large metropolitan areas. Hospital size and teaching status produced only minor variation.

Table 2. Multivariable Analysis of Odds of Experiencing Continuity of Care During Hospitalization Between 1996 and 2006

Characteristic | Odds Ratio (95% CI)
Admission year (increase by year) | 0.952 (0.950–0.954)
Length of stay (increase by day) | 0.822 (0.820–0.823)
Had a PCP: No | 1.0 (reference)
Had a PCP: Yes | 0.762 (0.752–0.773)
Seen by a hospitalist: No | 1.0 (reference)
Seen by a hospitalist: Yes | 0.391 (0.384–0.398)
Age: 66–74 | 1.0 (reference)
Age: 75–84 | 0.959 (0.944–0.973)
Age: 85+ | 0.946 (0.930–0.962)
Gender: Male | 1.0 (reference)
Gender: Female | 1.047 (1.033–1.060)
Ethnicity: White | 1.0 (reference)
Ethnicity: Black | 1.126 (1.097–1.155)
Ethnicity: Other | 1.062 (1.023–1.103)
Low socioeconomic status: No | 1.0 (reference)
Low socioeconomic status: Yes | 1.036 (1.020–1.051)
Emergency admission: No | 1.0 (reference)
Emergency admission: Yes | 0.864 (0.851–0.878)
Weekend admission: No | 1.0 (reference)
Weekend admission: Yes | 0.778 (0.768–0.789)
Diagnosis-related group: CHF | 1.0 (reference)
Diagnosis-related group: Pneumonia | 0.964 (0.950–0.978)
Diagnosis-related group: COPD | 1.002 (0.985–1.019)
Charlson comorbidity score: 0 | 1.0 (reference)
Charlson comorbidity score: 1 | 1.053 (1.035–1.072)
Charlson comorbidity score: 2 | 1.062 (1.042–1.083)
Charlson comorbidity score: ≥3 | 1.040 (1.022–1.058)
ICU use: No | 1.0 (reference)
ICU use: Yes | 0.918 (0.902–0.935)
Geographic region: Middle Atlantic | 1.0 (reference)
Geographic region: New England | 0.714 (0.621–0.822)
Geographic region: East North Central | 1.015 (0.922–1.119)
Geographic region: West North Central | 0.791 (0.711–0.879)
Geographic region: South Atlantic | 1.074 (0.971–1.186)
Geographic region: East South Central | 1.250 (1.113–1.403)
Geographic region: West South Central | 1.377 (1.240–1.530)
Geographic region: Mountain | 0.839 (0.740–0.951)
Geographic region: Pacific | 0.985 (0.884–1.097)
Size of metropolitan area: ≥1,000,000 | 1.0 (reference)
Size of metropolitan area: 250,000–999,999 | 0.743 (0.691–0.798)
Size of metropolitan area: 100,000–249,999 | 0.651 (0.538–0.789)
Size of metropolitan area: <100,000 | 1.062 (0.991–1.138)
Medical school affiliation: None | 1.0 (reference)
Medical school affiliation: Minor | 0.889 (0.827–0.956)
Medical school affiliation: Major | 1.048 (0.952–1.154)
Type of hospital: Nonprofit | 1.0 (reference)
Type of hospital: For profit | 1.194 (1.106–1.289)
Type of hospital: Public | 1.394 (1.309–1.484)
Size of hospital: <200 beds | 1.0 (reference)
Size of hospital: 200–349 beds | 0.918 (0.855–0.986)
Size of hospital: 350–499 beds | 0.962 (0.872–1.061)
Size of hospital: ≥500 beds | 1.000 (0.893–1.119)

Abbreviations: CHF, congestive heart failure; CI, confidence interval; COPD, chronic obstructive pulmonary disease; ICU, intensive care unit; PCP, primary care physician.

In Table 2 we also show that patients with an established PCP and those who received care from a hospitalist in the hospital were substantially less likely to experience continuity of care. There are several possible interpretations for that finding. For example, it might be that patients admitted to a hospitalist service were likely to see multiple hospitalists. Alternatively, the decreased continuity associated with hospitalists could reflect the fact that some patients cared for predominantly by non‐hospitalists may have seen a hospitalist on call for a sudden change in health status. To further explore these possible explanatory pathways, we constructed three new cohorts: 1) patients receiving all their care from non‐hospitalists, 2) patients receiving all their care from hospitalists, and 3) patients seen by both. As shown in Table 3, in patients seen by non‐hospitalists only, the mean number of generalist physicians seen during hospitalization was slightly greater than in patients cared for only by hospitalists.

Table 3. Number of Generalist Physicians Seen During the Entire Hospitalization in Patients Who Received Their Care From Non-Hospitalists Only, Hospitalists Only, or Both Hospitalists and Non-Hospitalists

Received Care During Entire Hospitalization From | No. of Admissions | Mean (SD) No. of Generalist Physicians Seen During Hospitalization
Non-hospitalist physicians only | 431,784 | 1.41 (0.68)*
Hospitalist physicians only | 64,662 | 1.34 (0.62)*
Both | 32,007 | 2.55 (0.83)*

Abbreviation: SD, standard deviation.
* Chi-square P < 0.001.

We also tested for interactions between admission year and the other factors in the Table 2 model. There was a significant interaction between admission year and having an identifiable PCP in the year prior to admission (Table 2). The odds of experiencing continuity of care decreased more rapidly for patients who did not have a PCP (5.5% per year; 95% CI: 5.2%–5.8%) than for those who had one (4.3% per year; 95% CI: 4.1%–4.6%).
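To translate the adjusted per-year odds ratio into a decade-long effect, the yearly value from Table 2 can simply be compounded; a small illustrative calculation:

```python
# Adjusted odds ratio for admission year (Table 2): 0.952 per year,
# i.e., roughly a 4.8% drop in the odds of continuity with each year.
or_per_year = 0.952
cumulative_or = or_per_year ** 10                              # 1996 -> 2006
print(f"Cumulative 10-year odds ratio: {cumulative_or:.2f}")   # ~0.61
```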

Discussion

We conducted this study to better understand the degree to which hospitalized patients experience discontinuity of care within a hospital stay and to determine which patients are most likely to experience discontinuity. In our study, we specifically chose admission conditions that would likely be followed primarily by generalist physicians. We found that, over the past decade, discontinuity of care for hospitalized patients has increased substantially, as indicated by the proportion of patients taken care of by more than one generalist physician during a single hospital stay. This occurred even though overall length of stay was decreasing in this same period.

It is perhaps not surprising that inpatient continuity of care has been decreasing in the past 10 years. Outpatient practices are becoming busier, and more doctors are practicing in large group practices, which could lead to several different physicians in the same practice rounding on a hospitalized patient. We have previously demonstrated that hospitalists are caring for an increasing number of patients over this same time period,21 so another possibility is that hospitalist services are being used more often because of this heavy outpatient workload. Our analyses allowed us to test the hypothesis that having hospitalists involved in patient care increases discontinuity.

At first glance, it appears that being cared for by hospitalists may result in worse continuity of care. However, closer scrutiny of the data reveals that the discontinuity ascribed to hospitalists in the multivariable model appears to be an artifact of defining the hospitalist variable as having been seen by any hospitalist during the hospital stay. This would include patients who saw a hospitalist in addition to their PCP or another non‐hospitalist generalist. When we compared hospitalist‐only care to other generalist care, we could not detect a difference in discontinuity. We know that generalist visits per day to patients have not substantially increased over time, so this discontinuity trend is not explained by patients having visits from both a hospitalist and the PCP. Therefore, this combination of findings suggests that the increased discontinuity associated with having a hospitalist involved in patient care is likely the result of system issues rather than of hospitalist care per se. In fact, patients seem to experience slightly better continuity when they see only hospitalists as opposed to only non‐hospitalists.

What types of systems issues might lead to this finding? Generalists in most settings could choose to involve a hospitalist at any point in the patient's hospital stay. This could occur because of a change in patient acuity requiring the involvement of hospitalists, who are more consistently present in the hospital. It is also possible that hospitalists' schedules are designed to maximize inpatient continuity of care with individual hospitalists. Even though hospitalists clearly work shifts, the 7 on, 7 off model22 likely results in patients seeing the same physician each day until the switch day. This contrasts with outpatient primary care doctors, whose focus may be on maintaining continuity within their office practice.

As the field of hospital medicine was emerging, many internal medicine physicians from various specialties were concerned about the impact of hospitalists on patient care. In one study, 73% of internal medicine physicians who were not hospitalists thought that hospitalists would worsen continuity of care.23 Primary care and subspecialist internal medicine physicians also expressed the concern that hospitalists could hurt their own relationships with patients,6 presumably because of lost continuity between the inpatient and outpatient settings. However, this fear seems to diminish once hospitalist programs are implemented and primary care doctors have experience with them.23 Our study suggests that the decrease in continuity that has occurred since these studies were published is not likely due to the emergence of hospital medicine, but rather due to other factors that influence who cares for hospitalized patients.

This study had some limitations. Length of stay is an obvious mediator of the number of generalist physicians seen; therefore, the sickest patients are likely to have both a long length of stay and low continuity. We adjusted for this in the multivariable modeling. In addition, given that this study used a large database, certain details are not discernible. For example, we chose to operationalize discontinuity as visits from multiple generalists during a single hospital stay. That is not a perfect definition, but it does represent multiple physicians directing the care of a patient. Importantly, this does not appear to represent continuity with one physician plus extra visits from another, as the total number of generalist visits per day did not change over time. It is also possible that patients in the non‐hospitalist group saw physicians only from a single practice, but those details are not included in the database. Finally, we cannot tell what types of hand‐offs occurred for individual patients during each hospital stay. Despite these limitations, using a large database like this one allows for detection of fairly small differences that could still be clinically important.

In summary, hospitalized patients appear to experience less continuity now than 10 years ago. However, the hospitalist model does not appear to play a role in this discontinuity. It is worth exploring in more detail why patients would see both hospitalists and other generalists. This pattern is not surprising, but it may increase the number of hand‐offs patients experience, which could lead to problems with patient safety and quality of care. Future work should explore the reasons for this discontinuity and examine the relationship between inpatient discontinuity and outcomes such as quality of care and the doctor–patient relationship.

Acknowledgements

The authors thank Sarah Toombs Smith, PhD, for help in preparation of the manuscript.

References
1. Saultz JW. Defining and measuring interpersonal continuity of care. Ann Fam Med. 2003;1(3):134–143.
2. Hanninen J, Takala J, Keinanen-Kiukaanniemi S. Good continuity of care may improve quality of life in Type 2 diabetes. Diabetes Res Clin Pract. 2001;51(1):21–27.
3. De Maeseneer JM, De Prins L, Gosset C, Heyerick J. Provider continuity in family medicine: Does it make a difference for total health care costs? Ann Fam Med. 2003;1(3):144–148.
4. Gill JM, Mainous AG, Nsereko M. The effect of continuity of care on emergency department use. Arch Fam Med. 2000;9(4):333–338.
5. Auerbach AD, Nelson EA, Lindenauer PK, Pantilat SZ, Katz PP, Wachter RM. Physician attitudes toward and prevalence of the hospitalist model of care: Results of a national survey. Am J Med. 2000;109(8):648–653.
6. Auerbach AD, Davis RB, Phillips RS. Physician views on caring for hospitalized patients and the hospitalist model of inpatient care. J Gen Intern Med. 2001;16(2):116–119.
7. Fletcher KE, Davis SQ, Underwood W, Mangrulkar RS, McMahon LF, Saint S. Systematic review: Effects of resident work hours on patient safety. Ann Intern Med. 2004;141(11):851–857.
8. Fletcher KE, Saint S, Mangrulkar RS. Balancing continuity of care with residents' limited work hours: Defining the implications. Acad Med. 2005;80(1):39–43.
9. Hinami K, Farnan JM, Meltzer DO, Arora VM. Understanding communication during hospitalist service changes: A mixed methods study. J Hosp Med. 2009;4:535–540.
10. Beach C, Croskerry P, Shapiro M; Center for Safety in Emergency Care. Profiles in patient safety: Emergency care transitions. Acad Emerg Med. 2003;10(4):364–367.
11. Gandhi TK. Fumbled handoffs: One dropped ball after another. Ann Intern Med. 2005;142(5):352–358.
12. Agency for Healthcare Research and Quality. Fumbled handoff. 2004. Available at: http://www.webmm.ahrq.gov/printview.aspx?caseID=55. Accessed December 27, 2005.
13. Shojania KG, Fletcher KE, Saint S. Graduate medical education and patient safety: A busy—and occasionally hazardous—intersection. Ann Intern Med. 2006;145(8):592–598.
14. Petersen LA, Brennan TA, O'Neil AC, Cook EF, Lee TH. Does housestaff discontinuity of care increase the risk for preventable adverse events? Ann Intern Med. 1994;121(11):866–872.
15. Laine C, Goldman L, Soukup JR, Hayes JG. The impact of a regulation restricting medical house staff working hours on the quality of patient care. JAMA. 1993;269(3):374–378.
16. Centers for Medicare and Medicaid Services. Standard analytical files. Available at: http://www.cms.hhs.gov/IdentifiableDataFiles/02_StandardAnalyticalFiles.asp. Accessed March 1, 2009.
17. Centers for Medicare and Medicaid Services. Nonidentifiable data files: Provider of services files. Available at: http://www.cms.hhs.gov/NonIdentifiableDataFiles/04_ProviderofSerrvicesFile.asp. Accessed March 1, 2009.
18. Research Data Assistance Center. Medicare data file description. Available at: http://www.resdac.umn.edu/Medicare/file_descriptions.asp. Accessed March 1, 2009.
19. Weinhandl ED, Snyder JJ, Israni AK, Kasiske BL. Effect of comorbidity adjustment on CMS criteria for kidney transplant center performance. Am J Transplant. 2009;9:506–516.
20. Sharma G, Fletcher K, Zhang D, Kuo YF, Freeman JL, Goodwin JS. Continuity of outpatient and inpatient care by primary care physicians for hospitalized older adults. JAMA. 2009;301:1671–1680.
21. Kuo YF, Sharma G, Freeman JL, Goodwin JS. Growth in the care of older patients by hospitalists in the United States. N Engl J Med. 2009;360:1102–1112.
22. HCPro Inc. Medical Staff Leader blog. 2010. Available at: http://blogs.hcpro.com/medicalstaff/2010/01/free-form-example-seven-day-on-seven-day-off-hospitalist-schedule/. Accessed November 20, 2010.
23. Auerbach AD, Aronson MD, Davis RB, Phillips RS. How physicians perceive hospitalist services after implementation: Anticipation vs reality. Arch Intern Med. 2003;163(19):2330–2336.

Continuity of care is considered by many physicians to be of critical importance in providing high‐quality patient care. Most of the research to date has focused on continuity in outpatient primary care. Research on outpatient continuity of care has been facilitated by the fact that a number of measurement tools for outpatient continuity exist.1 Outpatient continuity of care has been linked to better quality of life scores,2 lower costs,3 and less emergency room use.4 As hospital medicine has taken on more and more of the responsibility of inpatient care, primary care doctors have voiced concerns about the impact of hospitalists on overall continuity of care5 and the quality of the doctorpatient relationship.6

Recently, continuity of care in the hospital setting has also received attention. When the Accreditation Council for Graduate Medical Education (ACGME) first proposed restrictions to resident duty hours, the importance of continuity of inpatient care began to be debated in earnest in large part because of the increase in hand‐offs which accompanies discontinuity.7, 8 A recent study of hospitalist communication documented that as many as 13% of hand‐offs at the time of service changes are judged as incomplete by the receiving physician. These incomplete hand‐offs were more likely to be associated with uncertainty regarding the plan of care, as well as perceived near misses or adverse events.9 In addition, several case reports and studies suggest that systems with less continuity may have poorer outcomes.7, 1015

Continuity in the hospital setting is likely to be important for several reasons. First, the acuity of a patient's problem during a hospitalization is likely greater than during an outpatient visit. Thus the complexity of information to be transferred between physicians during a hospital stay is correspondingly greater. Second, the diagnostic uncertainty surrounding many admissions leads to complex thought processes that may be difficult to recreate when handing off patient care to another physician. Finally, knowledge of a patient's hospital course and the likely trajectory of care is facilitated by firsthand knowledge of where the patient has been. All this information can be difficult to distill into a brief sign‐out to another physician who assumes care of the patient.

In the current study, we sought to examine the trends over time in continuity of inpatient care. We chose patients likely to be cared for by general internists: those hospitalized for chronic obstructive pulmonary disease (COPD), pneumonia, and congestive heart failure (CHF). The general internists caring for patients in the hospital could be the patient's primary care physician (PCP), a physician covering for the patient's PCP, a physician assigned at admission by the hospital, or a hospitalist. Our goals were to describe the current level of continuity of care in the hospital setting, to examine whether continuity has changed over time, and to determine factors affecting continuity of care.

Methods

We used a 5% national sample of claims data from Medicare beneficiaries for the years 19962006.16 This included Medicare enrollment files, Medicare Provider Analysis and Review (MEDPAR) files, Medicare Carrier files, and Provider of Services (POS) files.17, 18

Establishment of the Study Cohort

Hospital admissions for COPD (Diagnosis Related Group [DRG] 088), pneumonia (DRG 089, 090), and CHF (DRG 127) from 1996 to 2006 for patients older than 66 years in MEDPAR were selected (n = 781,348). We excluded admissions for patients enrolled in health maintenance organizations (HMOs) or who did not have Medicare Parts A and B for the entire year prior to admission (n = 57,558). Admissions with a length of stay >18 days (n = 10,688) were considered outliers (exceeding the 99th percentile) and were excluded. Only admissions cared for by a general internist, family physician, general practitioner, or geriatrician were included (n = 528,453).

Measures

We categorized patients by age, gender, and ethnicity using Medicare enrollment files. We used the Medicaid indicator in the Medicare file as a proxy of low socioeconomic status. We used MEDPAR files to determine the origin of the admission (via the emergency department vs other), weekend versus weekday admission, and DRG. A comorbidity score was generated using the Elixhauser comorbidity scale using inpatient and outpatient billing data.19 In analyses, we listed the total number of comorbidities identified. The specialty of each physician was determined from the codes in the Medicare Carrier files. The 2004 POS files provided hospital‐level information such as zip code, metropolitan size, state, total number of beds, type of hospital, and medical school affiliation. We divided metropolitan size and total number of hospital beds into quartiles. We categorized hospitals as nonprofit, for profit, or public; medical school affiliation was categorized as non, minor, or major.

Determination of Primary Care Physician (PCP)

We identified outpatient visits using American Medical AssociationCommon Procedure Terminology (CPT) evaluation and management codes 99201 to 99205 (new patient) and 99221 to 99215 (established patient encounters). Individual providers were differentiated by using their Unique Provider Identification Number (UPIN). We defined a PCP as a general practitioner, family physician, internist, or geriatrician. Patients had to make at least 3 visits on different days to the same PCP within a year prior to the hospitalization to be categorized as having a PCP.20

Identification of Hospitalists Versus Other Generalist Physicians

As previously described, we defined hospitalists as general internal medicine physicians who derive at least 90% of their Medicare claims for Evaluation and Management services from care provided to hospitalized patients.21 Non‐hospitalist generalist physicians were those generalists who met the criteria for generalists but did not derive at least 90% of their Medicare claims from inpatient medicine.

Definition of Inpatient Continuity of Care

We measured inpatient continuity of care by number of generalist physicians (including hospitalists) who provided care during a hospitalization, through all inpatient claims made during that hospitalization. We considered patients to have had inpatient continuity of care if all billing by generalist physicians was done by one physician during the entire hospitalization.

Statistical Analyses

We calculated the percentage of admissions that received care from 1, 2, or 3 or more generalist physicians during the hospitalization, and stratified by selected patient and hospital characteristics. These proportions were also stratified by whether the patients were cared for by their outpatient PCP or not, and whether they were cared for by hospitalists or not. Based on who cared for the patient during the hospitalization, all admissions were classified as receiving care from: 1) non‐hospitalist generalist physicians, 2) a combination of generalist physicians and hospitalists, and 3) hospitalists only. The effect of patient and hospital characteristics on whether a patient experienced inpatient continuity was evaluated using a hierarchical generalized linear model (HGLM) with a logistic link, adjusting for clustering of admissions within hospitals and all covariates. We repeated our analyses using HGLM with an ordinal logit link to explore the factors associated with number of generalists seen in the hospital. All analyses were performed with SAS version 9.1 (SAS Inc, Cary, NC). The SAS GLIMMIX procedure was used to conduct multilevel analyses.

Results

Between 1996 and 2006, 528,453 patients hospitalized for COPD, pneumonia, and CHF received care by a generalist physician during their hospital stay. Of these, 64.3% were seen by one generalist physician, 26.9% by two generalist physicians, and 8.8% by three or more generalist physicians during hospitalization.

Figure 1 shows the percentage of all patients seen by 1, 2, and 3 or more generalist physicians between 1996 and 2006. The percentage of patients receiving care from one generalist physician declined from 70.7% in 1996 to 59.4% in 2006 (P < 0.001). During the same period, the percentage of patients receiving care from 3 or more generalist physicians increased from 6.5% to 10.7% (P < 0.001). Similar trends were seen for each of the 3 conditions. There was a decrease in overall length of stay during this period, from a mean of 5.7 to 4.9 days (P < 0.001). The increase in the number of generalist physicians providing care during the hospital stay did not correspond to an increase in total number of visits during the hospitalization. The average number of daily visits from a generalist physician was 0.94 (0.30) in 1996 and 0.96 (0.35) in 2006.

Figure 1
Percentage of patients seen by 1, 2, or 3 or more generalist physicians during a hospitalization for the years 1996–2006. P < 0.001 for Cochran‐Armitage trend test.

Table 1 presents the percentage of patients receiving care from 1, 2, and 3 or more generalist physicians during hospitalization stratified by patient and hospital characteristics. Older adults, females, non‐Hispanic whites, those with higher socioeconomic status, and those with more comorbidities were more likely to receive care by multiple generalist physicians. There was also large variation by geographic region, metropolitan area size, and hospital characteristics. All of these differences were significant at the P < 0.0001 level.

Percentage of Patients Receiving Care From 1, 2, and 3 or More Generalist Physicians During Hospitalization for COPD, Pneumonia, and CHF Stratifiedby Patient and Hospital Characteristics (N = 528,453)
  No. of Generalist Physicians Seen During Hospitalization
CharacteristicN123 (Percentage of Patients)
  • Abbreviations: CHF, congestive heart failure; COPD, chronic obstructive pulmonary disease; ICU, intensive care unit; PCP, primary care physician; SNF, skilled nursing facility.

  • Data missing (n = 1827). Note that differences in all categories were significant at the P < 0.0001 level.

Age at admission
6674152,48866.425.68.0
7584226,80263.827.38.9
85+149,16363.027.79.3
Gender    
Male216,60265.326.48.3
Female311,85163.627.39.1
Ethnicity    
White461,54363.727.49.0
Black46,96068.623.87.6
Other19,95067.924.57.6
Low socioeconomic status    
No366,39263.427.59.1
Yes162,06166.325.78.0
Emergency admission    
No188,35466.825.67.6
Yes340,09962.927.79.4
Weekend admission    
No392,15065.725.88.5
Yes136,30360.130.39.6
Diagnosis‐related groups    
CHF213,91465.026.38.7
Pneumonia195,43062.528.09.5
COPD119,10966.126.27.7
Had a PCP    
No201,01666.525.48.0
Yes327,43762.927.99.2
Seen hospitalist    
No431,78467.825.17.0
Yes96,66948.534.916.6
Charlson comorbidity score    
0127,38564.027.28.8
1131,40265.126.88.1
2105,83164.926.68.5
3163,83563.427.19.5
ICU use    
No431,46265.326.58.2
Yes96,99160.128.711.2
Length of stay (in days)    
Mean (SD) 4.7 (2.9)5.8 (3.1)8.1 (3.7)
Geographic region    
New England23,57255.730.813.5
Middle Atlantic78,18160.827.811.4
East North Central98,07265.726.38.0
West North Central44,78559.630.59.9
South Atlantic104,89463.827.09.2
East South Central51,45067.824.67.6
West South Central63,49369.224.86.0
Mountain20,31061.929.48.7
Pacific36,48466.726.37.0
Size of metropolitan area*    
1,000,000229,14563.726.59.8
250,000999,999114,44861.029.29.8
100,000249,99911,44861.330.48.3
<100,000171,58567.425.86.8
Medical school affiliation*    
Major77,60562.926.810.3
Minor107,14461.528.410.1
Non341,87465.526.58.0
Type of hospital*    
Nonprofit375,88862.727.89.5
For profit63,89867.525.57.0
Public86,83768.924.26.9
Hospital size* ...
<200 beds232,86967.225.77.1
200349 beds135,95462.627.99.5
350499 beds77,08061.128.310.6
500 beds80,72361.727.610.7
Discharge location    
Home361,89366.626.07.4
SNF94,72357.630.112.3
Rehab3,03045.734.220.1
Death22,13363.125.411.5
Other46,67461.828.110.1

Table 2 presents the results of a multivariable analysis of factors independently associated with experiencing continuity of care. In this analysis, continuity of care was defined as receiving inpatient care from one generalist physician (vs two or more). In the unadjusted models, the odds of experiencing continuity of care decreased by 5.5% per year from 1996 through 2006, and this decrease did not substantially change after adjusting for all other variables (4.8% yearly decrease). Younger patients, females, black patients, and those with low socioeconomic status were slightly more likely to experience continuity of care. As expected, patients admitted on weekends, emergency admissions, and those with intensive care unit (ICU) stays were less likely to experience continuity. There were marked geographic variations in continuity, with continuity approximately half as likely in New England as in the South. Continuity was greatest in smaller metropolitan areas versus rural and large metropolitan areas. Hospital size and teaching status produced only minor variation.

Multivariable Analysis of Odds of Experiencing Continuity of Care During Hospitalization Between 1996 and 2006
CharacteristicOdds Ratio (95% CI)
  • Abbreviations: CHF, congestive heart failure; CI, confidence interval; COPD, chronic obstructive pulmonary disease; ICU, Intensive care unit; PCP, primary care physician.

Admission year (increase by year)0.952 (0.9500.954)
Length of stay (increase by day)0.822 (0.8200.823)
Had a PCP 
No1.0
Yes0.762 (0.7520.773)
Seen by a hospitalist 
No1.0
Yes0.391 (0.3840.398)
Age 
66741.0
75840.959 (0.9440.973)
85+0.946 (0.9300.962)
Gender 
Male1.0
Female1.047 (1.0331.060)
Ethnicity 
White1.0
Black1.126 (1.0971.155)
Other1.062 (1.0231.103)
Low socioeconomic status 
No1.0
Yes1.036 (1.0201.051)
Emergency admission 
No1.0
Yes0.864 (0.8510.878)
Weekend admission 
No1.0
Yes0.778 (0.7680.789)
Diagnosis‐related group 
CHF1.0
Pneumonia0.964 (0.9500.978)
COPD1.002 (0.9851.019)
Charlson comorbidity score 
01.0
11.053 (1.0351.072)
21.062 (1.0421.083)
31.040 (1.0221.058)
ICU use 
No1.0
Yes0.918 (0.9020.935)
Geographic region 
Middle Atlantic1.0
New England0.714 (0.6210.822)
East North Central1.015 (0.9221.119)
West North Central0.791 (0.7110.879)
South Atlantic1.074 (0.9711.186)
East South Central1.250 (1.1131.403)
West South Central1.377 (1.2401.530)
Mountain0.839 (0.7400.951)
Pacific0.985 (0.8841.097)
Size of metropolitan area 
1,000,0001.0
250,000999,9990.743 (0.6910.798)
100,000249,9990.651 (0.5380.789)
<100,0001.062 (0.9911.138)
Medical school affiliation 
None1.0
Minor0.889 (0.8270.956)
Major1.048 (0.9521.154)
Type of hospital 
Nonprofit1.0
For profit1.194 (1.1061.289)
Public1.394 (1.3091.484)
Size of hospital 
<200 beds1.0
200349 beds0.918 (0.8550.986)
350499 beds0.962 (0.8721.061)
500 beds1.000 (0.8931.119)

In Table 2 we also show that patients with an established PCP and those who received care from a hospitalist in the hospital were substantially less likely to experience continuity of care. There are several possible interpretations for that finding. For example, it might be that patients admitted to a hospitalist service were likely to see multiple hospitalists. Alternatively, the decreased continuity associated with hospitalists could reflect the fact that some patients cared for predominantly by non‐hospitalists may have seen a hospitalist on call for a sudden change in health status. To further explore these possible explanatory pathways, we constructed three new cohorts: 1) patients receiving all their care from non‐hospitalists, 2) patients receiving all their care from hospitalists, and 3) patients seen by both. As shown in Table 3, in patients seen by non‐hospitalists only, the mean number of generalist physicians seen during hospitalization was slightly greater than in patients cared for only by hospitalists.

Number of Generalist Physicians Seen During Entire Hospitalization in Patients Who Received Their Care From Non‐Hospitalists Only, Hospitalists Only, or Both Hospitalists and Non‐Hospitalists
Received Care During Entire HospitalizationNo. of AdmissionsMean (SD) No. of Generalist Physicians Seen During Hospitalization
  • Abbreviations: SD, standard deviation.

  • Chi‐square P < 0.001.

Non‐hospitalist physician431,7841.41 (0.68)*
Hospitalist physician64,6621.34 (0.62)*
Both32,0072.55 (0.83)*

We also tested for interactions in Table 2 between admission year and other factors. There was a significant interaction between admission year and having an identifiable PCP in the year prior to admission (Table 2). The odds of experiencing continuity of care decreased more rapidly for patients who did not have a PCP (5.5% per year; 95% CI: 5.2%5.8%) than for those who had one (4.3% per year; 95% CI: 4.1%4.6%).

Discussion

We conducted this study to better understand the degree to which hospitalized patients experience discontinuity of care within a hospital stay and to determine which patients are most likely to experience discontinuity. In our study, we specifically chose admission conditions that would likely be followed primarily by generalist physicians. We found that, over the past decade, discontinuity of care for hospitalized patients has increased substantially, as indicated by the proportion of patients taken care of by more than one generalist physician during a single hospital stay. This occurred even though overall length of stay was decreasing in this same period.

It is perhaps not surprising that inpatient continuity of care has been decreasing in the past 10 years. Outpatient practices are becoming busier, and more doctors are practicing in large group practices, which could lead to several different physicians in the same practice rounding on a hospitalized patient. We have previously demonstrated that hospitalists are caring for an increasing number of patients over this same time period,21 so another possibility is that hospitalist services are being used more often because of this heavy outpatient workload. Our analyses allowed us to test the hypothesis that having hospitalists involved in patient care increases discontinuity.

At first glance, it appears that being cared for by hospitalists may result in worse continuity of care. However, closer scrutiny of the data reveals that the discontinuity ascribed to the hospitalists in the multivariable model appears to be an artifact of defining the hospitalist variable as having been seen by any hospitalist during the hospital stay. This would include patients who saw a hospitalist in addition to their PCP or another non‐hospitalist generalist. When we compared hospitalist‐only care to other generalist care, we could not detect a difference in discontinuity. We know that generalist visits per day to patients has not substantially increased over time, so this discontinuity trend is not explained by having visits by both a hospitalist and the PCP. Therefore, this combination of findings suggests that the increased discontinuity associated with having a hospitalist involved in patient care is likely the result of system issues rather than hospitalist care per se. In fact, patients seem to experience slightly better continuity when they see only hospitalists as opposed to only non‐hospitalists.

What types of systems issues might lead to this finding? Generalists in most settings could choose to involve a hospitalist at any point in the patient's hospital stay. This could occur because of a change in patient acuity requiring the involvement of hospitalists who are present in the hospital more. It is also possible that hospitalists' schedules are created to maximize inpatient continuity of care with individual hospitalists. Even though hospitalists clearly work shifts, the 7 on, 7 off model22 likely results in patients seeing the same physician each day until the switch day. This is in contrast to outpatient primary care doctors whose concentration may be on maintaining continuity within their practice.

As the field of hospital medicine was emerging, many internal medicine physicians from various specialties were concerned about the impact of hospitalists on patient care. In one study, 73% of internal medicine physicians who were not hospitalists thought that hospitalists would worsen continuity of care.23 Primary care and subspecialist internal medicine physicians also expressed the concern that hospitalists could hurt their own relationships with patients,6 presumably because of lost continuity between the inpatient and outpatient settings. However, this fear seems to diminish once hospitalist programs are implemented and primary care doctors have experience with them.23 Our study suggests that the decrease in continuity that has occurred since these studies were published is not likely due to the emergence of hospital medicine, but rather due to other factors that influence who cares for hospitalized patients.

This study had some limitations. Length of stay is an obvious mediator of number of generalist physicians seen. Therefore, the sickest patients are likely to have both a long length of stay and low continuity. We adjusted for this in the multivariable modeling. In addition, given that this study used a large database, certain details are not discernable. For example, we chose to operationalize discontinuity as visits from multiple generalists during a single hospital stay. That is not a perfect definition, but it does represent multiple physicians directing the care of a patient. Importantly, this does not appear to represent continuity with one physician with extra visits from another, as the total number of generalist visits per day did not change over time. It is also possible that patients in the non‐hospitalist group saw physicians only from a single practice, but those details are not included in the database. Finally, we cannot tell what type of hand‐offs were occurring for individual patients during each hospital stay. Despite these disadvantages, using a large database like this one allows for detection of fairly small differences that could still be clinically important.

In summary, hospitalized patients appear to experience less continuity now than 10 years ago. However, the hospitalist model does not appear to play a role in this discontinuity. It is worth exploring in more detail why patients would see both hospitalists and other generalists. This pattern is not surprising, but may have some repercussions in terms of increasing the number of hand‐offs experienced by patients. These could lead to problems with patient safety and quality of care. Future work should explore the reasons for this discontinuity and look at the relationship between inpatient discontinuity outcomes such as quality of care and the doctorpatient relationship.

Acknowledgements

The authors thank Sarah Toombs Smith, PhD, for help in preparation of the manuscript.

Continuity of care is considered by many physicians to be of critical importance in providing high‐quality patient care. Most of the research to date has focused on continuity in outpatient primary care. Research on outpatient continuity of care has been facilitated by the fact that a number of measurement tools for outpatient continuity exist.1 Outpatient continuity of care has been linked to better quality of life scores,2 lower costs,3 and less emergency room use.4 As hospital medicine has taken on more and more of the responsibility of inpatient care, primary care doctors have voiced concerns about the impact of hospitalists on overall continuity of care5 and the quality of the doctorpatient relationship.6

Recently, continuity of care in the hospital setting has also received attention. When the Accreditation Council for Graduate Medical Education (ACGME) first proposed restrictions to resident duty hours, the importance of continuity of inpatient care began to be debated in earnest in large part because of the increase in hand‐offs which accompanies discontinuity.7, 8 A recent study of hospitalist communication documented that as many as 13% of hand‐offs at the time of service changes are judged as incomplete by the receiving physician. These incomplete hand‐offs were more likely to be associated with uncertainty regarding the plan of care, as well as perceived near misses or adverse events.9 In addition, several case reports and studies suggest that systems with less continuity may have poorer outcomes.7, 1015

Continuity in the hospital setting is likely to be important for several reasons. First, the acuity of a patient's problem during a hospitalization is likely greater than during an outpatient visit. Thus the complexity of information to be transferred between physicians during a hospital stay is correspondingly greater. Second, the diagnostic uncertainty surrounding many admissions leads to complex thought processes that may be difficult to recreate when handing off patient care to another physician. Finally, knowledge of a patient's hospital course and the likely trajectory of care is facilitated by firsthand knowledge of where the patient has been. All this information can be difficult to distill into a brief sign‐out to another physician who assumes care of the patient.

In the current study, we sought to examine the trends over time in continuity of inpatient care. We chose patients likely to be cared for by general internists: those hospitalized for chronic obstructive pulmonary disease (COPD), pneumonia, and congestive heart failure (CHF). The general internists caring for patients in the hospital could be the patient's primary care physician (PCP), a physician covering for the patient's PCP, a physician assigned at admission by the hospital, or a hospitalist. Our goals were to describe the current level of continuity of care in the hospital setting, to examine whether continuity has changed over time, and to determine factors affecting continuity of care.

Methods

We used a 5% national sample of claims data from Medicare beneficiaries for the years 19962006.16 This included Medicare enrollment files, Medicare Provider Analysis and Review (MEDPAR) files, Medicare Carrier files, and Provider of Services (POS) files.17, 18

Establishment of the Study Cohort

Hospital admissions for COPD (Diagnosis Related Group [DRG] 088), pneumonia (DRG 089, 090), and CHF (DRG 127) from 1996 to 2006 for patients older than 66 years in MEDPAR were selected (n = 781,348). We excluded admissions for patients enrolled in health maintenance organizations (HMOs) or who did not have Medicare Parts A and B for the entire year prior to admission (n = 57,558). Admissions with a length of stay >18 days (n = 10,688) were considered outliers (exceeding the 99th percentile) and were excluded. Only admissions cared for by a general internist, family physician, general practitioner, or geriatrician were included (n = 528,453).

Measures

We categorized patients by age, gender, and ethnicity using Medicare enrollment files. We used the Medicaid indicator in the Medicare file as a proxy of low socioeconomic status. We used MEDPAR files to determine the origin of the admission (via the emergency department vs other), weekend versus weekday admission, and DRG. A comorbidity score was generated using the Elixhauser comorbidity scale using inpatient and outpatient billing data.19 In analyses, we listed the total number of comorbidities identified. The specialty of each physician was determined from the codes in the Medicare Carrier files. The 2004 POS files provided hospital‐level information such as zip code, metropolitan size, state, total number of beds, type of hospital, and medical school affiliation. We divided metropolitan size and total number of hospital beds into quartiles. We categorized hospitals as nonprofit, for profit, or public; medical school affiliation was categorized as non, minor, or major.

Determination of Primary Care Physician (PCP)

We identified outpatient visits using American Medical AssociationCommon Procedure Terminology (CPT) evaluation and management codes 99201 to 99205 (new patient) and 99221 to 99215 (established patient encounters). Individual providers were differentiated by using their Unique Provider Identification Number (UPIN). We defined a PCP as a general practitioner, family physician, internist, or geriatrician. Patients had to make at least 3 visits on different days to the same PCP within a year prior to the hospitalization to be categorized as having a PCP.20

Identification of Hospitalists Versus Other Generalist Physicians

As previously described, we defined hospitalists as general internal medicine physicians who derive at least 90% of their Medicare claims for Evaluation and Management services from care provided to hospitalized patients.21 Non‐hospitalist generalist physicians were those generalists who met the criteria for generalists but did not derive at least 90% of their Medicare claims from inpatient medicine.

Definition of Inpatient Continuity of Care

We measured inpatient continuity of care by number of generalist physicians (including hospitalists) who provided care during a hospitalization, through all inpatient claims made during that hospitalization. We considered patients to have had inpatient continuity of care if all billing by generalist physicians was done by one physician during the entire hospitalization.

Statistical Analyses

We calculated the percentage of admissions that received care from 1, 2, or 3 or more generalist physicians during the hospitalization, and stratified by selected patient and hospital characteristics. These proportions were also stratified by whether the patients were cared for by their outpatient PCP or not, and whether they were cared for by hospitalists or not. Based on who cared for the patient during the hospitalization, all admissions were classified as receiving care from: 1) non‐hospitalist generalist physicians, 2) a combination of generalist physicians and hospitalists, and 3) hospitalists only. The effect of patient and hospital characteristics on whether a patient experienced inpatient continuity was evaluated using a hierarchical generalized linear model (HGLM) with a logistic link, adjusting for clustering of admissions within hospitals and all covariates. We repeated our analyses using HGLM with an ordinal logit link to explore the factors associated with number of generalists seen in the hospital. All analyses were performed with SAS version 9.1 (SAS Inc, Cary, NC). The SAS GLIMMIX procedure was used to conduct multilevel analyses.

Results

Between 1996 and 2006, 528,453 patients hospitalized for COPD, pneumonia, and CHF received care by a generalist physician during their hospital stay. Of these, 64.3% were seen by one generalist physician, 26.9% by two generalist physicians, and 8.8% by three or more generalist physicians during hospitalization.

Figure 1 shows the percentage of all patients seen by 1, 2, and 3 or more generalist physicians between 1996 and 2006. The percentage of patients receiving care from one generalist physician declined from 70.7% in 1996 to 59.4% in 2006 (P < 0.001). During the same period, the percentage of patients receiving care from 3 or more generalist physicians increased from 6.5% to 10.7% (P < 0.001). Similar trends were seen for each of the 3 conditions. There was a decrease in overall length of stay during this period, from a mean of 5.7 to 4.9 days (P < 0.001). The increase in the number of generalist physicians providing care during the hospital stay did not correspond to an increase in total number of visits during the hospitalization. The average number of daily visits from a generalist physician was 0.94 (0.30) in 1996 and 0.96 (0.35) in 2006.

Figure 1
Percentage of patients seen by 1, 2, or 3 or more generalist physicians during a hospitalization for the years 1996–2006. P < 0.001 for Cochran‐Armitage trend test.

Table 1 presents the percentage of patients receiving care from 1, 2, and 3 or more generalist physicians during hospitalization stratified by patient and hospital characteristics. Older adults, females, non‐Hispanic whites, those with higher socioeconomic status, and those with more comorbidities were more likely to receive care by multiple generalist physicians. There was also large variation by geographic region, metropolitan area size, and hospital characteristics. All of these differences were significant at the P < 0.0001 level.

Percentage of Patients Receiving Care From 1, 2, and 3 or More Generalist Physicians During Hospitalization for COPD, Pneumonia, and CHF Stratified by Patient and Hospital Characteristics (N = 528,453)

Characteristic | N | 1 generalist (%) | 2 generalists (%) | ≥3 generalists (%)

Abbreviations: CHF, congestive heart failure; COPD, chronic obstructive pulmonary disease; ICU, intensive care unit; PCP, primary care physician; SNF, skilled nursing facility.
* Data missing (n = 1827). Differences in all categories were significant at the P < 0.0001 level.

Age at admission
  66-74 | 152,488 | 66.4 | 25.6 | 8.0
  75-84 | 226,802 | 63.8 | 27.3 | 8.9
  85+ | 149,163 | 63.0 | 27.7 | 9.3
Gender
  Male | 216,602 | 65.3 | 26.4 | 8.3
  Female | 311,851 | 63.6 | 27.3 | 9.1
Ethnicity
  White | 461,543 | 63.7 | 27.4 | 9.0
  Black | 46,960 | 68.6 | 23.8 | 7.6
  Other | 19,950 | 67.9 | 24.5 | 7.6
Low socioeconomic status
  No | 366,392 | 63.4 | 27.5 | 9.1
  Yes | 162,061 | 66.3 | 25.7 | 8.0
Emergency admission
  No | 188,354 | 66.8 | 25.6 | 7.6
  Yes | 340,099 | 62.9 | 27.7 | 9.4
Weekend admission
  No | 392,150 | 65.7 | 25.8 | 8.5
  Yes | 136,303 | 60.1 | 30.3 | 9.6
Diagnosis-related groups
  CHF | 213,914 | 65.0 | 26.3 | 8.7
  Pneumonia | 195,430 | 62.5 | 28.0 | 9.5
  COPD | 119,109 | 66.1 | 26.2 | 7.7
Had a PCP
  No | 201,016 | 66.5 | 25.4 | 8.0
  Yes | 327,437 | 62.9 | 27.9 | 9.2
Seen hospitalist
  No | 431,784 | 67.8 | 25.1 | 7.0
  Yes | 96,669 | 48.5 | 34.9 | 16.6
Charlson comorbidity score
  0 | 127,385 | 64.0 | 27.2 | 8.8
  1 | 131,402 | 65.1 | 26.8 | 8.1
  2 | 105,831 | 64.9 | 26.6 | 8.5
  ≥3 | 163,835 | 63.4 | 27.1 | 9.5
ICU use
  No | 431,462 | 65.3 | 26.5 | 8.2
  Yes | 96,991 | 60.1 | 28.7 | 11.2
Length of stay (in days)
  Mean (SD) |  | 4.7 (2.9) | 5.8 (3.1) | 8.1 (3.7)
Geographic region
  New England | 23,572 | 55.7 | 30.8 | 13.5
  Middle Atlantic | 78,181 | 60.8 | 27.8 | 11.4
  East North Central | 98,072 | 65.7 | 26.3 | 8.0
  West North Central | 44,785 | 59.6 | 30.5 | 9.9
  South Atlantic | 104,894 | 63.8 | 27.0 | 9.2
  East South Central | 51,450 | 67.8 | 24.6 | 7.6
  West South Central | 63,493 | 69.2 | 24.8 | 6.0
  Mountain | 20,310 | 61.9 | 29.4 | 8.7
  Pacific | 36,484 | 66.7 | 26.3 | 7.0
Size of metropolitan area*
  ≥1,000,000 | 229,145 | 63.7 | 26.5 | 9.8
  250,000-999,999 | 114,448 | 61.0 | 29.2 | 9.8
  100,000-249,999 | 11,448 | 61.3 | 30.4 | 8.3
  <100,000 | 171,585 | 67.4 | 25.8 | 6.8
Medical school affiliation*
  Major | 77,605 | 62.9 | 26.8 | 10.3
  Minor | 107,144 | 61.5 | 28.4 | 10.1
  None | 341,874 | 65.5 | 26.5 | 8.0
Type of hospital*
  Nonprofit | 375,888 | 62.7 | 27.8 | 9.5
  For profit | 63,898 | 67.5 | 25.5 | 7.0
  Public | 86,837 | 68.9 | 24.2 | 6.9
Hospital size*
  <200 beds | 232,869 | 67.2 | 25.7 | 7.1
  200-349 beds | 135,954 | 62.6 | 27.9 | 9.5
  350-499 beds | 77,080 | 61.1 | 28.3 | 10.6
  ≥500 beds | 80,723 | 61.7 | 27.6 | 10.7
Discharge location
  Home | 361,893 | 66.6 | 26.0 | 7.4
  SNF | 94,723 | 57.6 | 30.1 | 12.3
  Rehab | 3,030 | 45.7 | 34.2 | 20.1
  Death | 22,133 | 63.1 | 25.4 | 11.5
  Other | 46,674 | 61.8 | 28.1 | 10.1

Table 2 presents the results of a multivariable analysis of factors independently associated with experiencing continuity of care. In this analysis, continuity of care was defined as receiving inpatient care from one generalist physician (vs two or more). In the unadjusted models, the odds of experiencing continuity of care decreased by 5.5% per year from 1996 through 2006, and this decrease did not substantially change after adjusting for all other variables (4.8% yearly decrease). Younger patients, females, black patients, and those with low socioeconomic status were slightly more likely to experience continuity of care. As expected, patients admitted on weekends, emergency admissions, and those with intensive care unit (ICU) stays were less likely to experience continuity. There were marked geographic variations in continuity, with continuity approximately half as likely in New England as in the South. Continuity was greatest in smaller metropolitan areas versus rural and large metropolitan areas. Hospital size and teaching status produced only minor variation.
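
As a point of arithmetic linking the adjusted yearly decline quoted above to the admission-year odds ratio reported in Table 2:

```latex
\text{adjusted yearly decrease in odds of continuity}
  = (1 - \mathrm{OR}_{\text{admission year}}) \times 100\%
  = (1 - 0.952) \times 100\% \approx 4.8\% .
```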

Multivariable Analysis of Odds of Experiencing Continuity of Care During Hospitalization Between 1996 and 2006

Characteristic | Odds Ratio (95% CI)

Abbreviations: CHF, congestive heart failure; CI, confidence interval; COPD, chronic obstructive pulmonary disease; ICU, intensive care unit; PCP, primary care physician.

Admission year (increase by year) | 0.952 (0.950-0.954)
Length of stay (increase by day) | 0.822 (0.820-0.823)
Had a PCP
  No | 1.0
  Yes | 0.762 (0.752-0.773)
Seen by a hospitalist
  No | 1.0
  Yes | 0.391 (0.384-0.398)
Age
  66-74 | 1.0
  75-84 | 0.959 (0.944-0.973)
  85+ | 0.946 (0.930-0.962)
Gender
  Male | 1.0
  Female | 1.047 (1.033-1.060)
Ethnicity
  White | 1.0
  Black | 1.126 (1.097-1.155)
  Other | 1.062 (1.023-1.103)
Low socioeconomic status
  No | 1.0
  Yes | 1.036 (1.020-1.051)
Emergency admission
  No | 1.0
  Yes | 0.864 (0.851-0.878)
Weekend admission
  No | 1.0
  Yes | 0.778 (0.768-0.789)
Diagnosis-related group
  CHF | 1.0
  Pneumonia | 0.964 (0.950-0.978)
  COPD | 1.002 (0.985-1.019)
Charlson comorbidity score
  0 | 1.0
  1 | 1.053 (1.035-1.072)
  2 | 1.062 (1.042-1.083)
  ≥3 | 1.040 (1.022-1.058)
ICU use
  No | 1.0
  Yes | 0.918 (0.902-0.935)
Geographic region
  Middle Atlantic | 1.0
  New England | 0.714 (0.621-0.822)
  East North Central | 1.015 (0.922-1.119)
  West North Central | 0.791 (0.711-0.879)
  South Atlantic | 1.074 (0.971-1.186)
  East South Central | 1.250 (1.113-1.403)
  West South Central | 1.377 (1.240-1.530)
  Mountain | 0.839 (0.740-0.951)
  Pacific | 0.985 (0.884-1.097)
Size of metropolitan area
  ≥1,000,000 | 1.0
  250,000-999,999 | 0.743 (0.691-0.798)
  100,000-249,999 | 0.651 (0.538-0.789)
  <100,000 | 1.062 (0.991-1.138)
Medical school affiliation
  None | 1.0
  Minor | 0.889 (0.827-0.956)
  Major | 1.048 (0.952-1.154)
Type of hospital
  Nonprofit | 1.0
  For profit | 1.194 (1.106-1.289)
  Public | 1.394 (1.309-1.484)
Size of hospital
  <200 beds | 1.0
  200-349 beds | 0.918 (0.855-0.986)
  350-499 beds | 0.962 (0.872-1.061)
  ≥500 beds | 1.000 (0.893-1.119)

In Table 2 we also show that patients with an established PCP and those who received care from a hospitalist in the hospital were substantially less likely to experience continuity of care. There are several possible interpretations for that finding. For example, it might be that patients admitted to a hospitalist service were likely to see multiple hospitalists. Alternatively, the decreased continuity associated with hospitalists could reflect the fact that some patients cared for predominantly by non‐hospitalists may have seen a hospitalist on call for a sudden change in health status. To further explore these possible explanatory pathways, we constructed three new cohorts: 1) patients receiving all their care from non‐hospitalists, 2) patients receiving all their care from hospitalists, and 3) patients seen by both. As shown in Table 3, in patients seen by non‐hospitalists only, the mean number of generalist physicians seen during hospitalization was slightly greater than in patients cared for only by hospitalists.

Number of Generalist Physicians Seen During Entire Hospitalization in Patients Who Received Their Care From Non-Hospitalists Only, Hospitalists Only, or Both Hospitalists and Non-Hospitalists

Received Care During Entire Hospitalization From | No. of Admissions | Mean (SD) No. of Generalist Physicians Seen During Hospitalization

Abbreviations: SD, standard deviation.
* Chi-square P < 0.001.

Non-hospitalist physicians only | 431,784 | 1.41 (0.68)*
Hospitalist physicians only | 64,662 | 1.34 (0.62)*
Both | 32,007 | 2.55 (0.83)*

We also tested for interactions between admission year and the other factors in Table 2. There was a significant interaction between admission year and having an identifiable PCP in the year prior to admission (Table 2). The odds of experiencing continuity of care decreased more rapidly for patients who did not have a PCP (5.5% per year; 95% CI: 5.2%-5.8%) than for those who had one (4.3% per year; 95% CI: 4.1%-4.6%).
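
A minimal sketch of how such an interaction term might be added to the hierarchical model in PROC GLIMMIX is shown below. Variable names are illustrative, and the other covariates listed in Table 2 are omitted for brevity.

```sas
/* Admission year by PCP interaction in the hierarchical logistic model.     */
/* The ESTIMATE statements recover the yearly trend separately by PCP        */
/* status; coefficient order follows the CLASS level ordering of had_pcp.    */
proc glimmix data=admissions;
   class hosp_id had_pcp;
   model one_generalist(event='1') = admit_year had_pcp admit_year*had_pcp
         / dist=binary link=logit solution;
   random intercept / subject=hosp_id;
   estimate 'Yearly trend, first PCP level'  admit_year 1 admit_year*had_pcp 1 0;
   estimate 'Yearly trend, second PCP level' admit_year 1 admit_year*had_pcp 0 1;
run;
```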

Discussion

We conducted this study to better understand the degree to which hospitalized patients experience discontinuity of care within a hospital stay and to determine which patients are most likely to experience discontinuity. In our study, we specifically chose admission conditions that would likely be followed primarily by generalist physicians. We found that, over the past decade, discontinuity of care for hospitalized patients has increased substantially, as indicated by the proportion of patients taken care of by more than one generalist physician during a single hospital stay. This occurred even though overall length of stay was decreasing in this same period.

It is perhaps not surprising that inpatient continuity of care has been decreasing in the past 10 years. Outpatient practices are becoming busier, and more doctors are practicing in large group practices, which could lead to several different physicians in the same practice rounding on a hospitalized patient. We have previously demonstrated that hospitalists are caring for an increasing number of patients over this same time period,21 so another possibility is that hospitalist services are being used more often because of this heavy outpatient workload. Our analyses allowed us to test the hypothesis that having hospitalists involved in patient care increases discontinuity.

At first glance, it appears that being cared for by hospitalists may result in worse continuity of care. However, closer scrutiny of the data reveals that the discontinuity ascribed to hospitalists in the multivariable model appears to be an artifact of defining the hospitalist variable as having been seen by any hospitalist during the hospital stay. This definition includes patients who saw a hospitalist in addition to their PCP or another non-hospitalist generalist. When we compared hospitalist-only care to other generalist care, we could not detect a difference in discontinuity. We know that generalist visits per day have not substantially increased over time, so the discontinuity trend is not explained by patients receiving visits from both a hospitalist and the PCP. Therefore, this combination of findings suggests that the increased discontinuity associated with having a hospitalist involved in patient care is likely the result of system issues rather than hospitalist care per se. In fact, patients seem to experience slightly better continuity when they see only hospitalists as opposed to only non-hospitalists.

What types of system issues might lead to this finding? Generalists in most settings could choose to involve a hospitalist at any point in the patient's hospital stay. This could occur because of a change in patient acuity requiring the involvement of hospitalists, who are more consistently present in the hospital. It is also possible that hospitalists' schedules are designed to maximize inpatient continuity of care with individual hospitalists. Even though hospitalists clearly work shifts, the 7-on, 7-off model22 likely results in patients seeing the same physician each day until the switch day. This is in contrast to outpatient primary care doctors, whose focus may be on maintaining continuity within their practice.

As the field of hospital medicine was emerging, many internal medicine physicians from various specialties were concerned about the impact of hospitalists on patient care. In one study, 73% of internal medicine physicians who were not hospitalists thought that hospitalists would worsen continuity of care.23 Primary care and subspecialist internal medicine physicians also expressed the concern that hospitalists could hurt their own relationships with patients,6 presumably because of lost continuity between the inpatient and outpatient settings. However, this fear seems to diminish once hospitalist programs are implemented and primary care doctors have experience with them.23 Our study suggests that the decrease in continuity that has occurred since these studies were published is not likely due to the emergence of hospital medicine, but rather due to other factors that influence who cares for hospitalized patients.

This study had some limitations. Length of stay is an obvious mediator of the number of generalist physicians seen; therefore, the sickest patients are likely to have both a long length of stay and low continuity. We adjusted for this in the multivariable modeling. In addition, given that this study used a large database, certain details are not discernible. For example, we chose to operationalize discontinuity as visits from multiple generalists during a single hospital stay. That is not a perfect definition, but it does represent multiple physicians directing the care of a patient. Importantly, this does not appear to represent continuity with one physician plus extra visits from another, as the total number of generalist visits per day did not change over time. It is also possible that patients in the non-hospitalist group saw physicians only from a single practice, but those details are not included in the database. Finally, we cannot tell what types of hand-offs were occurring for individual patients during each hospital stay. Despite these limitations, using a large database like this one allows for detection of fairly small differences that could still be clinically important.

In summary, hospitalized patients appear to experience less continuity now than 10 years ago. However, the hospitalist model does not appear to play a role in this discontinuity. It is worth exploring in more detail why patients see both hospitalists and other generalists. This pattern is not surprising, but it may increase the number of hand-offs experienced by patients, which in turn could lead to problems with patient safety and quality of care. Future work should explore the reasons for this discontinuity and examine the relationship between inpatient discontinuity and outcomes such as quality of care and the doctor-patient relationship.

Acknowledgements

The authors thank Sarah Toombs Smith, PhD, for help in preparation of the manuscript.

References
  1. Saultz JW. Defining and measuring interpersonal continuity of care. Ann Fam Med. 2003;1(3):134-143.
  2. Hanninen J, Takala J, Keinanen-Kiukaanniemi S. Good continuity of care may improve quality of life in Type 2 diabetes. Diabetes Res Clin Pract. 2001;51(1):21-27.
  3. De Maeseneer JM, De Prins L, Gosset C, Heyerick J. Provider continuity in family medicine: Does it make a difference for total health care costs? Ann Fam Med. 2003;1(3):144-148.
  4. Gill JM, Mainous AG, Nsereko M. The effect of continuity of care on emergency department use. Arch Fam Med. 2000;9(4):333-338.
  5. Auerbach AD, Nelson EA, Lindenauer PK, Pantilat SZ, Katz PP, Wachter RM. Physician attitudes toward and prevalence of the hospitalist model of care: Results of a national survey. Am J Med. 2000;109(8):648-653.
  6. Auerbach AD, Davis RB, Phillips RS. Physician views on caring for hospitalized patients and the hospitalist model of inpatient care. J Gen Intern Med. 2001;16(2):116-119.
  7. Fletcher KE, Davis SQ, Underwood W, Mangrulkar RS, McMahon LF, Saint S. Systematic review: Effects of resident work hours on patient safety. Ann Intern Med. 2004;141(11):851-857.
  8. Fletcher KE, Saint S, Mangrulkar RS. Balancing continuity of care with residents' limited work hours: Defining the implications. Acad Med. 2005;80(1):39-43.
  9. Hinami K, Farnan JM, Meltzer DO, Arora VM. Understanding communication during hospitalist service changes: A mixed methods study. J Hosp Med. 2009;4:535-540.
  10. Beach C, Croskerry P, Shapiro M; Center for Safety in Emergency Care. Profiles in patient safety: Emergency care transitions. Acad Emerg Med. 2003;10(4):364-367.
  11. Gandhi TK. Fumbled handoffs: One dropped ball after another. Ann Intern Med. 2005;142(5):352-358.
  12. Agency for Healthcare Research and Quality. Fumbled handoff. 2004. Available at: http://www.webmm.ahrq.gov/printview.aspx?caseID=55. Accessed December 27, 2005.
  13. Shojania KG, Fletcher KE, Saint S. Graduate medical education and patient safety: A busy—and occasionally hazardous—intersection. Ann Intern Med. 2006;145(8):592-598.
  14. Petersen LA, Brennan TA, O'Neil AC, Cook EF, Lee TH. Does housestaff discontinuity of care increase the risk for preventable adverse events? Ann Intern Med. 1994;121(11):866-872.
  15. Laine C, Goldman L, Soukup JR, Hayes JG. The impact of a regulation restricting medical house staff working hours on the quality of patient care. JAMA. 1993;269(3):374-378.
  16. Centers for Medicare and Medicaid Services. Standard analytical files. Available at: http://www.cms.hhs.gov/IdentifiableDataFiles/02_StandardAnalyticalFiles.asp. Accessed March 1, 2009.
  17. Centers for Medicare and Medicaid Services. Nonidentifiable data files: Provider of services files. Available at: http://www.cms.hhs.gov/NonIdentifiableDataFiles/04_ProviderofSerrvicesFile.asp. Accessed March 1, 2009.
  18. Research Data Assistance Center. Medicare data file description. Available at: http://www.resdac.umn.edu/Medicare/file_descriptions.asp. Accessed March 1, 2009.
  19. Weinhandl ED, Snyder JJ, Israni AK, Kasiske BL. Effect of comorbidity adjustment on CMS criteria for kidney transplant center performance. Am J Transplant. 2009;9:506-516.
  20. Sharma G, Fletcher K, Zhang D, Kuo YF, Freeman JL, Goodwin JS. Continuity of outpatient and inpatient care by primary care physicians for hospitalized older adults. JAMA. 2009;301:1671-1680.
  21. Kuo YF, Sharma G, Freeman JL, Goodwin JS. Growth in the care of older patients by hospitalists in the United States. N Engl J Med. 2009;360:1102-1112.
  22. HCPro Inc. Medical Staff Leader blog. 2010. Available at: http://blogs.hcpro.com/medicalstaff/2010/01/free-form-example-seven-day-on-seven-day-off-hospitalist-schedule/. Accessed November 20, 2010.
  23. Auerbach AD, Aronson MD, Davis RB, Phillips RS. How physicians perceive hospitalist services after implementation: Anticipation vs reality. Arch Intern Med. 2003;163(19):2330-2336.
Display Headline
Trends in inpatient continuity of care for a cohort of Medicare patients 1996–2006
Issue
Journal of Hospital Medicine - 6(8)
Page Number
438-444
Article Source
Copyright © 2011 Society of Hospital Medicine
Correspondence Location
PC Division, Clement J. Zablocki VAMC, 5000 W. National Ave., Milwaukee, WI 53295

Physician Assistant‐Based General Medical Inpatient Care

Article Type
Changed
Thu, 05/25/2017 - 21:18
Display Headline
A comparison of outcomes of general medical inpatient care provided by a hospitalist‐physician assistant model vs a traditional resident‐based model

In 2003 the Accreditation Council for Graduate Medical Education (ACGME) prescribed residency reform in the form of work hour restrictions without prescribing alternatives to resident-based care.1 In response, many academic medical centers have developed innovative models for providing inpatient care, some of which incorporate physician assistants (PAs).2 With further restrictions in resident work hours possible,3 teaching hospitals may increase use of these alternate models to provide inpatient care. Widespread implementation of such new and untested models could impact the care of the approximately 20 million hospitalizations that occur every year in US teaching hospitals.4

Few reports have compared the care delivered by these alternate models with the care provided by traditional resident-based models.5-8 Roy et al.8 have provided the only recent comparison of a PA-based model of care with a resident-based model. They showed lower adjusted costs of inpatient care associated with PA-based care, but other outcomes were similar to those of resident-based teams.

The objective of this study is to provide a valid and usable comparison of the outcomes of a hospitalist-PA (H-PA) model of inpatient care with those of the traditional resident-based model. This will add to the quantity and quality of the limited research on PA-based inpatient care and inform the anticipated increase in the involvement of PAs in this arena.

Methods

Study Design and Setting

We conducted a retrospective cohort study at a 430‐bed urban academic medical center in the Midwestern United States.

Models of General Medical (GM) Inpatient Care at the Study Hospital During the Study Period

In November 2004, as a response to the ACGME‐mandated work hour regulations, we formed 2 Hospitalist‐PA teams (H‐PA) to supplement the 6 preexisting general medicine resident teams (RES).

The H-PA and RES teams differed in staffing, admitting times, and weekend/overnight cross-coverage structure (Table 1). There were no predesigned differences between the teams in the ward location of their patients, availability of laboratory/radiology services, specialty consultation, social services/case management resources, nursing resources, or documentation requirements for admission, daily care, and discharge.

Differences in Structure and Function Between Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams

 | H-PA Teams | RES Teams
Attending physician | Always a hospitalist | Hospitalist, non-hospitalist general internist, or rarely a specialist
Attending physician role | Supervisory for some patients (about half) and sole care provider for others | Supervisory for all patients
Team composition | One attending paired with 1 PA | Attending + senior resident + 2 interns + 2-3 medical students
Rotation schedule | |
  Attending | Every 2 weeks | Every 2 weeks
  Physician assistant | Off on weekends |
  House staff & medical students | | Every month
Weekend | No new admissions & hospitalist manages all patients | Accept new admissions
Admission times (weekdays) | 7 AM to 3 PM | Noon to 7 AM
Source of admissions | Emergency room, clinics, other hospitals | Emergency room, clinics, other hospitals
Number of admissions (weekdays) | 4-6 patients per day per team | Noon to 5 PM: 2 teams admit a maximum of 9 patients total; 5 PM to 7 AM: 3 teams admit a maximum of 5 patients each
Overnight coverage: roles and responsibilities | One in-house faculty member: cross-covering 2 H-PA teams, performing triage, admitting patients if necessary, assisting residents if necessary, general medical consultation | 3 on-call interns: cross-covering 2 teams each, admitting up to 5 patients each

Admission Schedule for H‐PA or RES Teams

The admitting schedule was designed to decrease the workload of the house staff and to do so specifically during the periods of peak educational activity (morning report, attending‐led teaching rounds, and noon report). A faculty admitting medical officer (AMO) assigned patients strictly based on the time an admission was requested. Importantly, the request for admission preceded the time of actual admission recorded when the patient reached the ward. The time difference between request for admission and actual admission depended on the source of admission and the delay associated with assigning a patient room. The AMO assigned 8 to 12 new patients to the H‐PA teams every weekday between 7 AM and 3 PM and to the RES teams between noon and 7 AM the next day. There was a designed period of overlap from noon to 3 PM during which both H‐PA and RES teams could admit patients. This period allowed for flexibility in assigning patients to either type of team depending on their workload. The AMO did not use patient complexity or teaching value to assign patients.

Exceptions to Admission Schedule

Patients admitted overnight, after the on-call RES teams had reached their admission limits, were assigned to H-PA teams the next morning. In addition, recently discharged patients who were readmitted while the discharging hospitalist (H-PA teams) or the discharging resident (RES teams) was still scheduled for inpatient duties were assigned back to the discharging team, irrespective of the admitting schedule.

The same medicine team cared for a patient from admission to discharge, but on transfer to the intensive care unit (ICU), an intensivist-led critical care team assumed care. On transfer out of the ICU, these patients were assigned back to the original team irrespective of the admitting schedule (the so-called bounce-back rule) to promote inpatient continuity of care. If the residents (RES teams) or the hospitalist (H-PA teams) had changed, however, the bounce-back rule was no longer in effect and these patients were assigned to a team according to the admission schedule.

Study Population and Study Period

We included all hospitalizations of adult patients to GM teams if both the date of admission and the date of discharge fell within the study period (January 1, 2005 to December 31, 2006). We excluded hospitalizations with admission during the weekend, when H-PA teams did not admit patients; hospitalizations to GM services with transfer to a non-GM service (excluding the ICU) and hospitalizations involving comanagement with specialty services, as the contribution of GM teams to these was variable; and hospitalizations of private patients.

Data Collection and Team Assignment

We collected patient data from our hospital's discharge abstract database. This database did not contain team information, so to assign teams we matched the discharging attending and the day of discharge to the type of team that the discharging attending was leading that day.
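
In practice, this matching amounts to a join of the discharge abstracts against a daily attending schedule. The sketch below uses hypothetical table and variable names (discharge_abstracts, attending_schedule, team_type) rather than the actual database schema.

```sas
/* Assign a team type to each discharge by joining the discharge abstracts   */
/* to a daily schedule of which team each attending was leading that day.    */
proc sql;
   create table discharges_with_team as
   select d.*, s.team_type
   from discharge_abstracts as d
        left join attending_schedule as s
          on d.attending_id   = s.attending_id
         and d.discharge_date = s.schedule_date;
quit;
```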

We collected patient age, gender, race, insurance status, zip‐code, primary care provider, source of admission, ward type, time and day of admission, and time and day of discharge for use as independent variables. The time of admission captured in the database was the time of actual admission and not the time the admission was requested.

We grouped the principal diagnosis International Statistical Classification of Diseases and Related Health Problems, 9th edition (ICD‐9) codes into clinically relevant categories using the Clinical Classification Software.9 We created comorbidity measures using Healthcare Cost and Utilization Project Comorbidity Software, version 3.4.10

Outcome Measures

We used length of stay (LOS), charges, readmissions within 7, 14, and 30 days and inpatient mortality as our outcome measures. We calculated LOS by subtracting the discharge day and time from the admission day and time. The LOS included time spent in the ICU. We summed all charges accrued during the entire hospitalization including any stay in the ICU but did not include professional fees. We considered any repeat hospitalization to our hospital within 7, 14, and 30 days following a discharge to be a readmission except that we excluded readmissions for a planned procedure or for inpatient rehabilitation.
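
A sketch of how these outcome variables might be derived in a SAS data step is shown below. It assumes SAS datetime variables and one record per hospitalization, with names (admit_dt, discharge_dt, patient_id) chosen for illustration; the stated exclusions for planned procedures and inpatient rehabilitation are not shown.

```sas
/* Length of stay in days and a 30-day readmission flag.                     */
data los;
   set discharges_with_team;
   los_days = (discharge_dt - admit_dt) / 86400;   /* datetime difference in seconds converted to days */
run;

proc sort data=los;
   by patient_id admit_dt;
run;

/* Look-ahead merge: pair each hospitalization with the same patient's next  */
/* admission to compute time to readmission.                                 */
data readmit;
   merge los
         los(firstobs=2 keep=patient_id admit_dt
             rename=(patient_id=next_id admit_dt=next_admit_dt));
   days_to_next = (next_admit_dt - discharge_dt) / 86400;
   readmit_30 = (patient_id = next_id and next_admit_dt > discharge_dt
                 and days_to_next <= 30);           /* 7- and 14-day flags are analogous */
run;
```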

Statistical Analysis

Descriptive Analysis

We performed unadjusted descriptive statistics at the level of an individual hospitalization using medians and interquartile ranges for continuous data and frequencies and percentages for categorical data. We used chi-square tests of association and Kruskal-Wallis analysis of variance to compare H-PA and RES teams.
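
A minimal SAS sketch of these unadjusted comparisons, with illustrative variable names, follows.

```sas
/* Unadjusted comparisons between team types: chi-square tests for           */
/* categorical variables and Kruskal-Wallis tests for continuous variables.  */
proc freq data=readmit;
   tables team_type*(female race insurance) / chisq;
run;

proc npar1way data=readmit wilcoxon;
   class team_type;
   var age los_days;
run;
```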

Missing Data

Because we lacked data on whether a primary outpatient care provider was available for 284 (2.9%) of our study hospitalizations, we dropped them from our multivariable analyses. We used an arbitrary discharge time of noon for the 11 hospitalizations which did not have a discharge time recorded.

Multivariable Analysis

We used multivariable mixed models to risk adjust for a wide variety of variables. We included age, gender, race, insurance, presence of primary care physician, and total number of comorbidities as fixed effects in all models because of the high face validity of these variables. We then added admission source, ward, time, day of week, discharge day of week, and comorbidity measures one by one as fixed effects, including them only if significant at P < 0.01. For assessing LOS, charges, and readmissions, we added a variable identifying each patient as a random effect to account for multiple admissions for the same patient. We then added variables identifying attending physician, principal diagnostic group, and ZIP code of residence as random effects to account for clustering of hospitalizations within these categories, including them only if significant at P < 0.01. For the model assessing mortality we included variables for attending physician, principal diagnostic group, and ZIP code of residence as random effects if significant at P < 0.01. We log transformed LOS and charges because they were extremely skewed in nature. Readmissions were analyzed after excluding patients who died or were discharged alive within 7, 14, or 30 days of the end of the study period.
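
The following is a rough sketch, with illustrative variable names, of how the LOS model described here might be specified in SAS. The binary outcomes (readmission, mortality) would use an analogous logistic specification (for example, in PROC GLIMMIX) rather than PROC MIXED.

```sas
/* Mixed model for log-transformed LOS: team type and patient covariates as  */
/* fixed effects, with random intercepts for patient (repeat admissions),    */
/* attending physician, principal diagnosis group, and ZIP code.             */
data model_ready;
   set readmit;
   log_los = log(los_days);
run;

proc mixed data=model_ready;
   class team_type age_group race insurance patient_id attending_id dx_group zip;
   model log_los = team_type age_group female race insurance has_pcp n_comorbid
         / solution;
   random intercept / subject=patient_id;
   random intercept / subject=attending_id;
   random intercept / subject=dx_group;
   random intercept / subject=zip;
run;
```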

Sensitivity Analyses

To assess the influence of LOS outliers, we changed LOS to 6 hours if it was less than 6 hours and to 45 days if it was more than 45 days, a process called winsorizing. We consider winsorizing superior to dropping outliers because it acknowledges that outliers contribute information but prevents them from being too influential. We chose the 6-hour cutoff because we believed that was the minimum time required to admit and then discharge a patient. We chose the upper limit of 45 days on reviewing the frequency distribution for outliers. Similarly, we winsorized charges at the 1st and 99th percentiles after reviewing the frequency distribution for outliers. We then log transformed the winsorized data before analysis.
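
A sketch of the winsorizing step in SAS, again using illustrative variable names, is shown below.

```sas
/* Winsorize LOS at 6 hours (0.25 days) and 45 days, and charges at the      */
/* 1st and 99th percentiles, then log transform the winsorized values.       */
proc univariate data=model_ready noprint;
   var charges;
   output out=chg_pctl pctlpts=1 99 pctlpre=chg_p;
run;

data winsorized;
   if _n_ = 1 then set chg_pctl;        /* carries chg_p1 and chg_p99 onto every row */
   set model_ready;
   los_w     = min(max(los_days, 0.25), 45);
   charges_w = min(max(charges, chg_p1), chg_p99);
   log_los_w = log(los_w);
   log_chg_w = log(charges_w);
run;
```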

Inpatient deaths reduce the LOS and charges associated with a hospitalization; thus, excess mortality may falsely suggest lower LOS or charges. To check whether this occurred in our study, we repeated the analyses after excluding inpatient deaths.

ICU stays are associated with higher LOS, charges, and mortality. In our model of care, some patients transferred to the ICU are not cared for by the original team on transfer out. Moreover, care in the ICU is not controlled by the team that discharges them. Since this might obscure differences in outcomes achieved by RES vs. H‐PA teams, we repeated these analyses after excluding hospitalizations with an ICU stay.

Since mortality can only occur during 1 hospitalization per patient, we repeated the mortality analysis using only each patient's first admission or last admission and using a randomly selected single admission for each patient.
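
One way to draw a single random admission per patient in SAS is with PROC SURVEYSELECT, sketched below with illustrative names; first and last admissions can be obtained with PROC SORT and FIRST./LAST. processing.

```sas
/* One randomly selected admission per patient for the mortality             */
/* sensitivity analysis (simple random sampling within each patient).        */
proc surveyselect data=model_ready out=one_per_patient
                  method=srs sampsize=1 seed=20110101;
   strata patient_id;
run;
```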

Subgroup Analysis

To limit the effect of different physician characteristics on H‐PA and RES teams we separately analyzed the hospitalizations under the care of hospitalists who served on both H‐PA and RES teams.

To limit the effect of different admission schedules of H‐PA and RES teams we analyzed the hospitalizations with admission times between 11.00 AM and 4.00 PM. Such hospitalizations were likely to be assigned during the noon to 3 PM period when they could be assigned to either an H‐PA or RES team.

Interactions

Finally we explored interactions between the type of team and the fixed effect variables included in each model.

Statistical Software

We performed the statistical analysis using SAS software version 9.0 for UNIX (SAS Institute, Inc., Cary, NC) and R software (The R Project for Statistical Computing).

This study protocol was approved by the hospital's institutional review board.

Results

Study Population

Of the 52,391 hospitalizations to our hospital during the study period, 13,058 were admitted to general medicine. We excluded 3102 weekend admissions and 209 who met other exclusion criteria. We could not determine the team assignment for 66. Of the remaining 9681 hospitalizations, we assigned 2171 to H‐PA teams and 7510 to RES teams (Figure 1).

Figure 1
Study population (H‐PA, hospitalist‐physician assistant team; RES, traditional resident team).

Descriptive Analysis

We compare patients assigned to H‐PA and RES teams in Table 2. They were similar in age, gender, race, having a primary care provider or not, and insurance status. Clinically, they had similar comorbidities and a similar distribution of common principal diagnoses. Consistent with their admitting schedule, H‐PA teams admitted and discharged more patients earlier in the day and admitted more patients earlier in the work week. Patients cared for by H‐PA teams were admitted from the Emergency Room (ER) less often and were more likely to reside on wards designated as nonmedicine by nursing specialty. Hospitalizations to H‐PA teams more often included an ICU stay.

Characteristics of Hospitalization to Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams

Characteristic | H-PA (n = 2171) | RES (n = 7510) | P Value

Abbreviations: CI, confidence interval; ER, emergency room; H-PA, hospitalist-physician assistant; ICU, intensive care unit; RES, traditional resident.

Age
  Mean | 56.80 | 57.04 |
  Median | 56 | 56 | 0.15
  Interquartile range | 43-72 | 43-73 |
Age group (years), n (%)
  <20 | 10 (0.5) | 57 (0.8) |
  20-29 | 186 (8.6) | 632 (8.7) |
  30-39 | 221 (10.2) | 766 (10.3) |
  40-49 | 387 (17.8) | 1341 (18.1) |
  50-59 | 434 (20.0) | 1492 (20.2) | 0.28
  60-69 | 325 (15.0) | 974 (12.8) |
  70-79 | 271 (12.5) | 1035 (13.6) |
  80-89 | 262 (12.0) | 951 (12.3) |
  ≥90 | 75 (3.5) | 262 (3.4) |
Female, n (%) | 1175 (54.1) | 4138 (55.1) | 0.42
Race, n (%)
  White | 1282 (59.1) | 4419 (58.9) |
  Black | 793 (36.5) | 2754 (36.7) | 0.98
  Other | 96 (4.4) | 337 (4.5) |
Primary care provider, n (%) | | | 0.16
  Yes | 1537 (73.2) | 5451 (74.7) |
  Missing (n = 284) | 71 (3.3) | 213 (2.8) |
Insurance status, n (%)
  Commercial/worker's comp | 440 (20.3) | 1442 (19.2) |
  Medicare | 1017 (46.8) | 3589 (47.8) | 0.52
  Medicaid/others | 714 (32.9) | 2479 (33.0) |
Time of admission, n (%)
  0000-0259 | 167 (7.7) | 1068 (14.2) |
  0300-0559 | 244 (11.2) | 485 (6.5) |
  0600-0859 | 456 (21.0) | 270 (3.6) |
  0900-1159 | 782 (36.0) | 1146 (15.3) | <0.001
  1200-1459 | 299 (13.8) | 1750 (23.3) |
  1500-1759 | 155 (7.1) | 1676 (22.3) |
  1800-2359 | 68 (3.1) | 1115 (14.9) |
Time of discharge, n (%)
  2100-0859 | 36 (1.7) | 174 (2.3) |
  0900-1159 | 275 (12.7) | 495 (6.6) |
  1200-1459 | 858 (39.6) | 2608 (34.8) | <0.001
  1500-1759 | 749 (34.6) | 3122 (41.6) |
  1800-2059 | 249 (11.5) | 1104 (14.7) |
  Missing | 4 | 7 |
Day of week of admission, n (%)
  Monday | 462 (21.3) | 1549 (20.6) |
  Tuesday | 499 (23.0) | 1470 (19.6) |
  Wednesday | 430 (19.8) | 1479 (19.7) | 0.001
  Thursday | 400 (18.4) | 1482 (19.7) |
  Friday | 380 (17.5) | 1530 (20.4) |
Day of week of discharge, n (%)
  Monday | 207 (9.5) | 829 (11.0) |
  Tuesday | 268 (12.3) | 973 (13.0) |
  Wednesday | 334 (15.4) | 1142 (15.2) |
  Thursday | 362 (16.7) | 1297 (17.3) | 0.16
  Friday | 485 (22.3) | 1523 (20.3) |
  Saturday | 330 (15.2) | 1165 (15.5) |
  Sunday | 185 (8.5) | 581 (7.7) |
Admit to non-medicine wards, n (%) | 1332 (61.4) | 2624 (34.9) | <0.001
Transfer to ICU (at least once), n (%) | 299 (13.8) | 504 (6.7) | <0.001
Admit from ER, n (%) | 1663 (76.6) | 6063 (80.7) | <0.001
10 most frequent diagnoses (%)
   | Pneumonia (4.9) | Pneumonia (5.5) |
   | Congestive heart failure; nonhypertensive (4.2) | Congestive heart failure; nonhypertensive (3.9) |
   | Sickle cell anemia (3.9) | Nonspecific chest pain (3.7) |
   | Chronic obstructive pulmonary disease and bronchiectasis (3.3) | Urinary tract infections (3.6) |
   | Diabetes mellitus with complications (3.2) | Skin and subcutaneous tissue infections (3.3) |
   | Urinary tract infections (3.2) | Sickle cell anemia (3.3) |
   | Asthma (3.0) | Pancreatic disorders (not diabetes) (2.8) |
   | Nonspecific chest pain (3.0) | Asthma (2.8) |
   | Pancreatic disorders (not diabetes) (2.9) | Chronic obstructive pulmonary disease and bronchiectasis (2.6) |
   | Septicemia (2.2) | Diabetes mellitus with complications (2.6) |
Average number of comorbidities, mean (95% CI) | 0.39 (0.37-0.42) | 0.38 (0.36-0.39) | 0.23

In unadjusted comparisons of outcomes (Table 3), hospitalizations on H-PA teams had longer lengths of stay and higher charges than hospitalizations on RES teams, possibly higher inpatient mortality rates, and similar unadjusted readmission rates at 7, 14, and 30 days.

Unadjusted Comparison of Outcomes of Hospitalization to Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams

 | H-PA (n = 2171) | RES (n = 7510) | % Difference* (CI) | P Value

Abbreviations: CI, 95% confidence intervals; IQR, interquartile range; LOS, length of stay.
* On comparing log-transformed LOS; RES is the reference group.

LOS | Median (IQR) | Median (IQR) | |
  Days | 3.17 (2.03-5.30) | 2.99 (1.80-5.08) | +8.9% (4.71-13.29%) | <0.001
Charges
  US Dollars | 9390 (6196-16,239) | 9044 (6106-14,805) | +5.56% (1.96-9.28%) | 0.002
Readmissions | n (%) | n (%) | Odds Ratio (CI) |
  Within 7 days | 147 (6.96) | 571 (7.78) | 0.88 (0.73-1.06) | 0.19
  Within 14 days | 236 (11.34) | 924 (12.76) | 0.87 (0.75-1.01) | 0.07
  Within 30 days | 383 (18.91) | 1436 (20.31) | 0.91 (0.80-1.03) | 0.14
Inpatient deaths | 39 (1.8) | 95 (1.3) | 1.36 (0.90-2.00) | 0.06

Multivariable Analysis

LOS

Hospitalizations to H-PA teams were associated with a 6.73% longer LOS (P = 0.005) (Table 4). This difference persisted when we used the winsorized data (6.45% increase, P = 0.006), excluded inpatient deaths (6.81% increase, P = 0.005), or excluded hospitalizations that involved an ICU stay (6.40% increase, P = 0.011) (Table 5).

Adjusted Comparison of Outcomes of Hospitalization to Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams (RES Is the Reference Group)

Abbreviations: CI, 95% confidence intervals; LOS, length of stay; OR, odds ratio.
* Subgroup restricted to physicians attending on both H-PA and RES teams; number of observations included ranges from 2992 to 3196.
† Subgroup restricted to hospitalizations admitted between 11:00 AM and 4:00 PM; number of observations included ranges from 3174 to 3384.

Outcome | Overall | Physicians on both team types* | Admissions 11:00 AM to 4:00 PM†
LOS, % difference (CI); P value | 6.73% (1.99% to 11.70%); P = 0.005 | 5.44% (-0.65% to 11.91%); P = 0.08 | 2.97% (-4.47% to 10.98%); P = 0.44
Charges, % difference (CI); P value | 2.75% (-1.30% to 6.97%); P = 0.19 | 1.55% (-3.76% to 7.16%); P = 0.57 | 6.45% (-0.62% to 14.03%); P = 0.07
Readmission within 7 days, adjusted OR (95% CI); P value | 0.88 (0.64-1.20); P = 0.42 | 0.74 (0.40-1.35); P = 0.32 | 0.90 (0.40-2.00); P = 0.78
Readmission within 14 days, adjusted OR (95% CI); P value | 0.90 (0.69-1.19); P = 0.46 | 0.71 (0.51-0.99); P = 0.05 | 0.87 (0.36-2.13); P = 0.77
Readmission within 30 days, adjusted OR (95% CI); P value | 0.89 (0.75-1.06); P = 0.20 | 0.75 (0.51-1.08); P = 0.12 | 0.92 (0.55-1.54); P = 0.75
Inpatient mortality, adjusted OR (95% CI); P value | 1.27 (0.82-1.97); P = 0.28 | 1.46 (0.67-3.17); P = 0.33 | 1.14 (0.47-2.74); P = 0.77
Sensitivity Analysis: Adjusted Comparison of Outcomes of Hospitalization to Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams (RES Is the Reference Group)

Abbreviations: CI, 95% confidence intervals; ICU, intensive care unit; LOS, length of stay; OR, odds ratio.

Outcome | Winsorized data | Excluding inpatient deaths | Excluding patients with ICU stays
LOS, % difference (CI); P value | 6.45% (4.04 to 8.91%); P = 0.006 | 6.81% (2.03 to 11.80%); P = 0.005 | 6.40% (1.46 to 11.58%); P = 0.011
Charges, % difference (CI); P value | 2.67% (-1.27 to 6.76%); P = 0.187 | 2.89% (-1.16 to 7.11%); P = 0.164 | 0.74% (-3.11 to 4.76%); P = 0.710

Charges

Hospitalizations to H‐PA and RES teams were associated with similar charges (Table 4). The results were similar when we used winsorized data, excluded inpatient deaths or excluded hospitalizations involving an ICU stay (Table 5).

Readmissions

The risk of readmission at 7, 14, and 30 days was similar between hospitalizations to H‐PA and RES teams (Table 4).

Mortality

The risk of inpatient death was similar between all hospitalizations to H‐PA and RES teams or only hospitalizations without an ICU stay (Table 4). The results also remained the same in analyses restricted to first admissions, last admissions, or 1 randomly selected admission per patient.

Sub‐Group Analysis

On restricting the multivariable analyses to the subset of hospitalists who staffed both types of teams (Table 4), the increase in LOS associated with H‐PA care was no longer significant (5.44% higher, P = 0.081). The charges, risk of readmission at 7 and 30 days, and risk of inpatient mortality remained similar. The risk of readmission at 14 days was slightly lower following hospitalizations to H‐PA teams (odds ratio 0.71, 95% confidence interval [CI] 0.51‐0.99).

The increase in LOS associated with H‐PA care was further attenuated in analyses of the subset of admissions between 11.00 AM and 4.00 PM (2.97% higher, P = 0.444). The difference in charges approached significance (6.45% higher, P = 0.07), but risk of readmission at 7, 14, and 30 days and risk of inpatient mortality were no different (Table 4).

Interactions

On adding interaction terms between the team assignment and the fixed effect variables in each model we detected that the effect of H‐PA care on LOS (P < 0.001) and charges (P < 0.001) varied by time of admission (Figure 2a and b). Hospitalizations to H‐PA teams from 6.00 PM to 6.00 AM had greater relative increases in LOS as compared to hospitalizations to RES teams during those times. Similarly, hospitalizations during the period 3.00 PM to 3.00 AM had relatively higher charges associated with H‐PA care compared to RES care.

Figure 2
(A) Relative difference in length of stay associated with care by H‐PA teams by times of admission (in percent change with RES as reference). (B) Relative difference in charges associated with care by H‐PA teams by time of admission (in percent with RES as reference). Abbreviations: H‐PA, hospitalist‐physician assistant team; RES traditional resident team. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

Discussion

We found that hospitalizations to our H‐PA teams had longer LOS but similar charges, readmission rates, and mortality as compared to traditional resident‐based teams. These findings were robust to multiple sensitivity and subgroup analyses but when we examined times when both types of teams could receive admissions, the difference in LOS was markedly attenuated and nonsignificant.

We note that most prior reports comparing PA-based models of inpatient care predate the ACGME work hour regulations. In a randomized controlled trial (1987-1988), Simmer et al.5 showed lower lengths of stay and charges but a possibly higher risk of readmission for PA-based teams as compared to resident-based teams. Van Rhee et al.7 conducted a nonrandomized retrospective cohort study (1994-1995) using administrative data, which showed lower resource utilization for PA-based inpatient care. Our results from 2005 to 2006 reflect the important changes in the organization and delivery of inpatient care since these previous investigations.

Roy et al.8 report the only previously published comparison of PA and resident based GM inpatient care after the ACGME mandated work hour regulations. They found PA‐based care was associated with lower costs, whereas we found similar charges for admissions to RES and H‐PA teams. They also found that LOS was similar for PA and resident‐based care, while we found a higher LOS for admissions to our H‐PA team. We note that although the design of Roy's study was similar to our own, patients cared for by PA‐based teams were geographically localized in their model. This may contribute to the differences in results noted between our studies.

Despite there being no designed differences in patients assigned to either type of team other than time of admission, we noted some differences between the H-PA and RES teams in the descriptive analysis. These differences in the proportions of hospitalizations admitted from the ER, residing on nonmedicine wards, or including an ICU stay are likely a result of our system of assigning admissions to H-PA teams early during the workday. For example, patients on H-PA teams were more often located on nonmedicine wards as a result of later discharges and bed availability on medicine wards. The difference that deserves special comment is the much higher proportion (13.8% vs. 6.7%) of hospitalizations with an ICU stay on the H-PA teams. Hospitalizations directly to the ICU were excluded from our study, which means that the hospitalizations with an ICU stay in our study were initially admitted to either H-PA or RES teams and then transferred to the ICU. Transfers out of the ICU usually occur early in the workday, when H-PA teams accepted patients per our admission schedule. These patients may have been preferentially assigned to H-PA teams if, on returning from the ICU, the original team's resident had changed (and the bounce-back rule was not in effect). Importantly, the conclusions of our research are not altered on controlling for this difference in the teams by excluding hospitalizations with an ICU stay.

Hospitalizations to H-PA teams were associated with higher resource utilization if they occurred later in the day or overnight (Figure 2a and b). During these times, a transition of care occurred shortly after admission: for a late-day admission, the H-PA teams would hand over care for overnight cross-coverage soon after the admission, and for patients admitted overnight as overflow, they would assume care from the nighttime covering physician who performed the admission. On the other hand, on RES teams, interns admitting patients overnight continued to care for their patients for part of the following day (30-hour call). Similar findings of higher resource utilization associated with transfer of care after admission in the daytime11 and nighttime12 have been previously reported. An alternative hypothesis for our findings is that the hospital may be busier, and thus less efficient, during times when H-PA teams had to admit later in the day or accept patients admitted overnight as overflow. Future research to determine the cause of this significant interaction between team assignment and time of admission on resource utilization is important, as the large increases in LOS (up to 30%) and charges (up to 50%) noted could have a potentially large impact if a higher proportion of hospitalizations were affected by this phenomenon.

Our H-PA teams were assigned patients as complex as those assigned to our RES teams, in contrast to previous reports.8, 13 This was accomplished while improving the residents' educational experience, and we have previously reported increases in our residents' board pass rates and in-service training exam scores with the introduction of our H-PA teams.14 We thus believe that selection of less complex patients for H-PA teams such as ours is unnecessary and may give them a second-tier status in academic settings.

Our report has limitations. It is a retrospective, nonrandomized investigation using a single institution's administrative database; as such, it cannot account for unmeasured confounders, severity of illness, errors in the database, or selection bias, and it has limited generalizability. We measured charges, not actual costs,15 but we feel charges are a true reflection of relative resource use when compared between similar patients within a single institution. We also did not account for readmissions to other hospitals,16 and our results do not reflect resource utilization for the healthcare system in total. For example, we could not tell whether the higher LOS on H-PA teams resulted in fewer readmissions for their patients across all hospitals in the region, which might reveal an overall resource savings. Additionally, we measured in-hospital mortality and could not capture deaths related to hospital care that may occur shortly after discharge.

The ACGME has proposed revised standards that may further restrict resident duty hours when they take effect in July 2011.3 This may lead to further decreases in resident-based inpatient care. Teaching hospitals will need to continue to develop alternate models for inpatient care that do not depend on house staff, and our findings provide evidence to inform the development of such models. Our study shows that one such model (PAs paired with hospitalists, accepting admissions early in the workday, with hospitalist coverage on nights and weekends) can care for GM inpatients as complex as those cared for by resident-based teams without increasing readmission rates, inpatient mortality, or charges, but at the cost of a slightly higher LOS.

References
  1. ACGME. Common Program Requirements for Resident Duty Hours. Available at: http://www.acgme.org/acWebsite/dutyHours/dh_ComProgrRequirmentsDutyHours0707.pdf. Accessed July 2010.
  2. Sehgal NL, Shah HM, Parekh VI, Roy CL, Williams MV. Non-housestaff medicine services in academic centers: models and challenges. J Hosp Med. 2008;3(3):247-255.
  3. ACGME. Duty Hours: Proposed Standards for Review and Comment. Available at: http://acgme-2010standards.org/pdf/Proposed_Standards.pdf. Accessed July 22, 2010.
  4. Agency for Health Care Policy and Research. HCUPnet: A tool for identifying, tracking, and analyzing national hospital statistics. Available at: http://hcup.ahrq.gov/HCUPnet.asp. Accessed July 2010.
  5. Simmer TL, Nerenz DR, Rutt WM, Newcomb CS, Benfer DW. A randomized, controlled trial of an attending staff service in general internal medicine. Med Care. 1991;29(7 suppl):JS31-JS40.
  6. Dhuper S, Choksi S. Replacing an academic internal medicine residency program with a physician assistant-hospitalist model: a comparative analysis study. Am J Med Qual. 2009;24(2):132-139.
  7. Rhee JV, Ritchie J, Eward AM. Resource use by physician assistant services versus teaching services. JAAPA. 2002;15(1):33-42.
  8. Roy CL, Liang CL, Lund M, et al. Implementation of a physician assistant/hospitalist service in an academic medical center: impact on efficiency and patient outcomes. J Hosp Med. 2008;3(5):361-368.
  9. AHRQ. Clinical Classifications Software (CCS) for ICD-9-CM. Available at: http://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp#overview. Accessed July 2010.
  10. AHRQ. HCUP Comorbidity Software, Version 3.4. Available at: http://www.hcup-us.ahrq.gov/toolssoftware/comorbidity/comorbidity.jsp. Accessed July 2010.
  11. Schuberth JL, Elasy TA, Butler J, et al. Effect of short call admission on length of stay and quality of care for acute decompensated heart failure. Circulation. 2008;117(20):2637-2644.
  12. Lofgren RP, Gottlieb D, Williams RA, Rich EC. Post-call transfer of resident responsibility: its effect on patient care. J Gen Intern Med. 1990;5(6):501-505.
  13. O'Connor AB, Lang VJ, Lurie SJ, Lambert DR, Rudmann A, Robbins B. The effect of nonteaching services on the distribution of inpatient cases for internal medicine residents. Acad Med. 2009;84(2):220-225.
  14. Singh S, Petkova JH, Gill A, et al. Allowing for better resident education and improving patient care: hospitalist-physician assistant teams fill in the gaps. J Hosp Med. 2007;2(S2):139.
  15. Finkler SA. The distinction between cost and charges. Ann Intern Med. 1982;96(1):102-109.
  16. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare Fee-for-Service Program. N Engl J Med. 2009;360(14):1418-1428.
Issue
Journal of Hospital Medicine - 6(3)
Page Number
122-130
Legacy Keywords
education, outcomes measurement, physician assistant, resident


Missing Data

Because we lacked data on whether a primary outpatient care provider was available for 284 (2.9%) of our study hospitalizations, we dropped them from our multivariable analyses. We used an arbitrary discharge time of noon for the 11 hospitalizations which did not have a discharge time recorded.

Multivariable Analysis

We used multivariable mixed models to risk adjust for a wide variety of variables. We included age, gender, race, insurance, presence of primary care physician, and total number of comorbidities as fixed effects in all models because of the high face validity of these variables. We then added admission source, ward, time, day of week, discharge day of week, and comorbidity measures one by one as fixed effects, including them only if significant at P < 0.01. For assessing LOS, charges, and readmissions, we added a variable identifying each patient as a random effect to account for multiple admissions for the same patient. We then added variables identifying attending physician, principal diagnostic group, and ZIP code of residence as random effects to account for clustering of hospitalizations within these categories, including them only if significant at P < 0.01. For the model assessing mortality we included variables for attending physician, principal diagnostic group, and ZIP code of residence as random effects if significant at P < 0.01. We log transformed LOS and charges because they were extremely skewed in nature. Readmissions were analyzed after excluding patients who died or were discharged alive within 7, 14, or 30 days of the end of the study period.

Sensitivity Analyses

To assess the influence of LOS outliers, we changed LOS to 6 hours if it was less than 6 hours, and 45 days if it was more than 45 daysa process called winsorizing. We consider winsorizing superior to dropping outliers because it acknowledges that outliers contribute information, but prevent them from being too influential. We chose the 6 hour cut off because we believed that was the minimum time required to admit and then discharge a patient. We chose the upper limit of 45 days on reviewing the frequency distribution for outliers. Similarly, we winsorized charges at the first and 99th percentile after reviewing the frequency distribution for outliers. We then log transformed the winsorized data before analysis.

Inpatient deaths reduce the LOS and charges associated with a hospitalization. Thus excess mortality may provide a false concession in terms of lower LOS or charges. To check if this occurred in our study we repeated the analyses after excluding inpatient deaths.

ICU stays are associated with higher LOS, charges, and mortality. In our model of care, some patients transferred to the ICU are not cared for by the original team on transfer out. Moreover, care in the ICU is not controlled by the team that discharges them. Since this might obscure differences in outcomes achieved by RES vs. H‐PA teams, we repeated these analyses after excluding hospitalizations with an ICU stay.

Since mortality can only occur during 1 hospitalization per patient, we repeated the mortality analysis using only each patient's first admission or last admission and using a randomly selected single admission for each patient.

Subgroup Analysis

To limit the effect of different physician characteristics on H‐PA and RES teams we separately analyzed the hospitalizations under the care of hospitalists who served on both H‐PA and RES teams.

To limit the effect of different admission schedules of H‐PA and RES teams we analyzed the hospitalizations with admission times between 11.00 AM and 4.00 PM. Such hospitalizations were likely to be assigned during the noon to 3 PM period when they could be assigned to either an H‐PA or RES team.

Interactions

Finally we explored interactions between the type of team and the fixed effect variables included in each model.

Statistical Software

We performed the statistical analysis using SAS software version 9.0 for UNIX (SAS Institute, Inc., Cary, NC) and R software (The R Project for Statistical Computing).

This study protocol was approved by the hospital's institutional review board.

Results

Study Population

Of the 52,391 hospitalizations to our hospital during the study period, 13,058 were admitted to general medicine. We excluded 3102 weekend admissions and 209 who met other exclusion criteria. We could not determine the team assignment for 66. Of the remaining 9681 hospitalizations, we assigned 2171 to H‐PA teams and 7510 to RES teams (Figure 1).

Figure 1
Study population (H‐PA, hospitalist‐physician assistant team; RES, traditional resident team).

Descriptive Analysis

We compare patients assigned to H‐PA and RES teams in Table 2. They were similar in age, gender, race, having a primary care provider or not, and insurance status. Clinically, they had similar comorbidities and a similar distribution of common principal diagnoses. Consistent with their admitting schedule, H‐PA teams admitted and discharged more patients earlier in the day and admitted more patients earlier in the work week. Patients cared for by H‐PA teams were admitted from the Emergency Room (ER) less often and were more likely to reside on wards designated as nonmedicine by nursing specialty. Hospitalizations to H‐PA teams more often included an ICU stay.

Characteristics of Hospitalization to Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams
 H‐PA (n = 2171)RES (n = 7510)P Value
  • Abbreviations: CI, confidence interval; ER, emergency room; H‐PA, hospitalist‐physician assistant; ICU, Intensive care unit; RES, traditional resident.

Age   
Mean56.8057.04 
Median56560.15
Interquartile range43‐7243‐73 
Age group (years), n (%)   
< 2010 (0.5)57 (0.8) 
20‐29186 (8.6)632 (8.7) 
30‐39221 (10.2)766 (10.3) 
40‐49387 (17.8)1341 (18.1) 
50‐59434 (20.0)1492 (20.2)0.28
60‐69325 (15.0)974 (12.8) 
70‐79271 (12.5)1035 (13.6) 
80‐89262 (12.0)951(12.3) 
90<75 (3.5)262 (3.4) 
Female, n (%)1175 (54.1)4138 (55.1)0.42
Race, n (%)   
White1282 (59.1)4419 (58.9) 
Black793 (36.5)2754 (36.7)0.98
Other96 (4.4)337 (4.5) 
Primary care provider, n (%)  0.16
Yes1537 (73.2)5451 (74.7) 
Missing: 28471 (3.3)213 (2.8) 
Insurance status, n (%)   
Commercial/worker's comp440 (20.3)1442 (19.2) 
Medicare1017 (46.8)3589 (47.8)0.52
Medicaid/others714 (32.9)2479 (33.0) 
Time of admission, n (%)   
0000‐0259167 (7.7)1068 (14.2) 
0300‐0559244 (11.2)485 (6.5) 
0600‐0859456 (21.0)270 (3.6) 
0900‐1159782 (36.0)1146 (15.3)<0.001
1200‐1459299 (13.8)1750 (23.3) 
1500‐1759155 (7.1)1676 (22.3) 
1800‐235968 (3.1)1115 (14.9) 
Time of discharge, n (%)   
2100‐085936 (1.7)174 (2.3) 
0900‐1159275 (12.7)495 (6.6) 
1200‐1459858 (39.6)2608 (34.8)<0.001
1500‐1759749 (34.6)3122 (41.6) 
1800‐2059249 (11.5)1104 (14.7) 
Missing47 
Day of week of admission, n (%)   
Monday462 (21.3)1549 (20.6) 
Tuesday499 (23.0)1470 (19.6) 
Wednesday430 (19.8)1479 (19.7)0.001
Thursday400 (18.4)1482 (19.7) 
Friday380 (17.5)1530 (20.4) 
Day of week of discharge, n (%)   
Monday207 (9.5)829 (11.0) 
Tuesday268 (12.3)973 (13.0) 
Wednesday334 (15.4)1142 (15.2) 
Thursday362 (16.7)1297 (17.3)0.16
Friday485 (22.3)1523 (20.3) 
Saturday330 (15.2)1165 (15.5) 
Sunday185 (8.5)581 (7.7) 
Admit to non‐medicine wards, n (%)1332 (61.4)2624 (34.9)<0.001
Transfer to ICU (at least once), n (%)299 (13.8)504 (6.7)<0.001
Admit from ER No (%)1663 (76.6)6063 (80.7)<0.001
10 most frequent diagnosis (%)Pneumonia (4.9)Pneumonia (5.5) 
 Congestive heart failure; nonhypertensive (4.2)Congestive heart failure; nonhypertensive (3.9) 
 Sickle cell anemia (3.9)Nonspecific chest pain (3.7) 
 Chronic obstructive pulmonary disease and Bronchiectasis (3.3)Urinary tract infections(3.6) 
 Diabetes mellitus with complications (3.2)Skin and subcutaneous tissue infections (3.3) 
 Urinary tract infections (3.2)Sickle cell anemia (3.3) 
 Asthma (3.0)Pancreatic disorders (not diabetes) (2.8) 
 Nonspecific chest pain (3.0)Asthma (2.8) 
 Pancreatic disorders (not diabetes) (2.9)Chronic obstructive pulmonary disease and Bronchiectasis (2.6) 
 Septicemia (2.2)Diabetes mellitus with complications (2.6) 
Average number of comorbidities mean (95% CI)0.39 (0.37‐0.42)0.38 (0.36‐0.39)0.23

In unadjusted comparisons of outcomes (Table 3), hospitalizations on H‐PA teams had higher lengths of stay and charges than hospitalizations on RES teams, possibly higher inpatient mortality rates but similar unadjusted readmission rates at 7, 14, and 30 days

Unadjusted Comparison of Outcomes of Hospitalization to Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams
 H‐PA (n = 2171)RES (n = 7150)% Difference* (CI)P Value
  • Abbreviations: CI, 95% confidence intervals; IQR, interquartile range; LOS, length of stay;

  • On comparing log transformed LOS;

  • RES is reference group.

LOSMedian (IQR)Median (IQR)  
Days3.17 (2.03‐5.30)2.99 (1.80‐5.08)+8.9% (4.71‐13.29%)<0.001
Charges    
US Dollars9390 (6196‐16,239)9044 (6106‐14,805)+5.56% (1.96‐9.28%)0.002
Readmissionsn (%)n (%)Odds Ratio (CI) 
Within 7 days147 (6.96)571 (7.78)0.88 (0.73‐1.06)0.19
Within14 days236 (11.34)924 (12.76)0.87 (0.75‐1.01)0.07
Within 30 days383 (18.91)1436 (20.31)0.91 (0.80‐1.03)0.14
Inpatient deaths39 (1.8)95 (1.3)1.36 (0.90‐2.00)0.06

Multivariable Analysis

LOS

Hospitalizations to H‐PA teams were associated with a 6.73% longer LOS (P = 0.005) (Table 4). This difference persisted when we used the winsorized data (6.45% increase, P = 0.006), excluded inpatient deaths (6.81% increase, P = 0.005), or excluded hospitalizations that involved an ICU stay (6.40%increase, P = 0.011) (Table 5).

Adjusted Comparison of Outcomes of Hospitalization to Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams (RES is the reference group)
 OverallSubgroup: Restricted to Physicians Attending on Both H‐PA and RES Teams*Subgroup: Restricted to Hospitalizations Between 11.00 AM and 4.00 PM
% Difference (CI)P Value% Difference (CI)P Value% Difference (CI)P Value
  • Abbreviations: CI, 95% confidence intervals; LOS, length of stay; OR, odds ratio;

  • Number of observations included in subgroup ranges from 2992 to 3196;

  • Number of observations included in subgroup ranges from 3174 to 3384.

LOS6.73% (1.99% to 11.70%)0.0055.44% (0.65% to 11.91%)0.082.97% (4.47% to 10.98%)0.44
Charges2.75% (1.30% to 6.97%)0.191.55% (3.76% to 7.16%)0.576.45% (0.62% to 14.03%)0.07
Risk of ReadmissionAdjusted OR (95%CI)P ValueAdjusted OR (95% CI)P ValueAdjusted OR (95% CI)P Value
Within 7 days0.88 (0.64‐1.20)0.420.74 (0.40‐1.35)0.320.90 (0.40‐2.00)0.78
Within14 days0.90 (0.69‐1.19)0.460.71 (0.51‐0.99)0.050.87 (0.36‐2.13)0.77
Within 30 days0.89 (0.75‐1.06)0.200.75 (0.51‐1.08)0.120.92 (0.55‐1.54)0.75
Inpatient mortality1.27 (0.82‐1.97)0.281.46 (0.67‐3.17)0.331.14 (0.47‐2.74)0.77
Sensitivity Analysis: Adjusted Comparison of Outcomes of Hospitalization to Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams (RES Is the Reference Group)
 Analysis With Winsorized DataAnalysis After Excluding Inpatient DeathsAnalysis After Excluding Patients With ICU Stays
% Difference (CI)P Value% Difference (CI)P Value% Difference (CI)P Value
  • Abbreviations: CI, 95% confidence intervals; ICU, intensive care unit; LOS, length of stay; OR, odds ratio.

LOS6.45% (4.04 to 8.91%)0.0066.81% (2.03 to 11.80%)0.0056.40% (1.46 to 11.58%)0.011
Charges2.67 (1.27 to 6.76%)0.1872.89% (1.16 to 7.11%)0.1640.74% (3.11 to 4.76%)0.710

Charges

Hospitalizations to H‐PA and RES teams were associated with similar charges (Table 4). The results were similar when we used winsorized data, excluded inpatient deaths or excluded hospitalizations involving an ICU stay (Table 5).

Readmissions

The risk of readmission at 7, 14, and 30 days was similar between hospitalizations to H‐PA and RES teams (Table 4).

Mortality

The risk of inpatient death was similar between all hospitalizations to H‐PA and RES teams or only hospitalizations without an ICU stay (Table 4). The results also remained the same in analyses restricted to first admissions, last admissions, or 1 randomly selected admission per patient.

Sub‐Group Analysis

On restricting the multivariable analyses to the subset of hospitalists who staffed both types of teams (Table 4), the increase in LOS associated with H‐PA care was no longer significant (5.44% higher, P = 0.081). The charges, risk of readmission at 7 and 30 days, and risk of inpatient mortality remained similar. The risk of readmission at 14 days was slightly lower following hospitalizations to H‐PA teams (odds ratio 0.71, 95% confidence interval [CI] 0.51‐0.99).

The increase in LOS associated with H‐PA care was further attenuated in analyses of the subset of admissions between 11.00 AM and 4.00 PM (2.97% higher, P = 0.444). The difference in charges approached significance (6.45% higher, P = 0.07), but risk of readmission at 7, 14, and 30 days and risk of inpatient mortality were no different (Table 4).

Interactions

On adding interaction terms between the team assignment and the fixed effect variables in each model we detected that the effect of H‐PA care on LOS (P < 0.001) and charges (P < 0.001) varied by time of admission (Figure 2a and b). Hospitalizations to H‐PA teams from 6.00 PM to 6.00 AM had greater relative increases in LOS as compared to hospitalizations to RES teams during those times. Similarly, hospitalizations during the period 3.00 PM to 3.00 AM had relatively higher charges associated with H‐PA care compared to RES care.

Figure 2
(A) Relative difference in length of stay associated with care by H‐PA teams by times of admission (in percent change with RES as reference). (B) Relative difference in charges associated with care by H‐PA teams by time of admission (in percent with RES as reference). Abbreviations: H‐PA, hospitalist‐physician assistant team; RES traditional resident team. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

Discussion

We found that hospitalizations to our H‐PA teams had longer LOS but similar charges, readmission rates, and mortality as compared to traditional resident‐based teams. These findings were robust to multiple sensitivity and subgroup analyses but when we examined times when both types of teams could receive admissions, the difference in LOS was markedly attenuated and nonsignificant.

We note that most prior reports comparing PA‐based models of inpatient care predate the ACGME work hour regulations. In a randomized control trial (1987‐1988) Simmer et al.5 showed lower lengths of stay and charges but possibly higher risk of readmission for PA based teams as compared to resident based teams. Van Rhee et al.7 conducted a nonrandomized retrospective cohort study (1994‐1995) using administrative data which showed lower resource utilization for PA‐based inpatient care. Our results from 2005 to 2006 reflect the important changes in the organization and delivery of inpatient care since these previous investigations.

Roy et al.8 report the only previously published comparison of PA and resident based GM inpatient care after the ACGME mandated work hour regulations. They found PA‐based care was associated with lower costs, whereas we found similar charges for admissions to RES and H‐PA teams. They also found that LOS was similar for PA and resident‐based care, while we found a higher LOS for admissions to our H‐PA team. We note that although the design of Roy's study was similar to our own, patients cared for by PA‐based teams were geographically localized in their model. This may contribute to the differences in results noted between our studies.

Despite no designed differences in patients assigned to either type of team other than time of admission we noted some differences between the H‐PA and RES teams in the descriptive analysis. These differences, such as a higher proportion of hospitalizations to H‐PA teams being admitted from the ER, residing on nonmedicine wards or having an ICU stay are likely a result of our system of assigning admissions to H‐PA teams early during the workday. For example patients on H‐PA teams were more often located on nonmedicine wards as a result of later discharges and bed availability on medicine wards. The difference that deserves special comment is the much higher proportion (13.8% vs. 6.7%) of hospitalizations with an ICU stay on the H‐PA teams. Hospitalizations directly to the ICU were excluded from our study which means that the hospitalizations with an ICU stay in our study were initially admitted to either H‐PA or RES teams and then transferred to the ICU. Transfers out of the ICU usually occur early in the workday when H‐PA teams accepted patients per our admission schedule. These patients may have been preferentially assigned to H‐PA teams, if on returning from the ICU the original team's resident had changed (and the bounce back rule was not in effect). Importantly, the conclusions of our research are not altered on controlling for this difference in the teams by excluding hospitalizations with an ICU stay.

Hospitalizations to H‐PA teams were associated with higher resource utilization if they occurred later in the day or overnight (Figure 2a and b). During these times a transition of care occurred shortly after admission. For a late day admission the H‐PA teams would transfer care for overnight cross cover soon after the admission and for patients admitted overnight as overflow they would assume care of a patient from the nighttime covering physician performing the admission. On the other hand, on RES teams, interns admitting patients overnight continued to care for their patients for part of the following day (30‐hour call). Similar findings of higher resource utilization associated with transfer of care after admission in the daytime11 and nighttime12 have been previously reported. An alternative hypothesis for our findings is that the hospital maybe busier and thus less efficient during times when H‐PA teams had to admit later in the day or accept patients admitted overnight as overflow. Future research to determine the cause of this significant interaction between team assignment and time of admission on resource utilization is important as the large increases in LOS (up to 30%) and charges (up to 50%) noted, could have a potentially large impact if a higher proportion of hospitalizations were affected by this phenomenon.

Our H‐PA teams were assigned equally complex patients as our RES teams, in contrast to previous reports.8, 13 This was accomplished while improving the resident's educational experience and we have previously reported increases in our resident's board pass rates and in‐service training exam scores with that introduction of our H‐PA teams.14 We thus believe that selection of less complex patients to H‐PA teams such as ours is unnecessary and may give them a second tier status in academic settings.

Our report has limitations. It is a retrospective, nonrandomized investigation using a single institution's administrative database and has the limitations of not being able to account for unmeasured confounders, severity of illness, errors in the database, selection bias and has limited generalizability. We measured charges not actual costs,15 but we feel charges are a true reflection of relative resource use when compared between similar patients within a single institution. We also did not account for the readmissions that occur to other hospitals16 and our results do not reflect resource utilization for the healthcare system in total. For example, we could not tell if higher LOS on H‐PA teams resulted in lower readmissions for their patients in all hospitals in the region, which may reveal an overall resource savings. Additionally, we measured in‐hospital mortality and could not capture deaths related to hospital care that may occur shortly after discharge.

ACGME has proposed revised standards that may further restrict resident duty hours when they take effect in July 2011.3 This may lead to further decreases in resident‐based inpatient care. Teaching hospitals will need to continue to develop alternate models for inpatient care that do not depend on house staff. Our findings provide important evidence to inform the development of such models. Our study shows that one such model: PAs paired with hospitalists, accepting admissions early in the workday, with hospitalist coverage over the weekend and nights can care for GM inpatients as complex as those cared for by resident‐based teams without increasing readmission rates, inpatient mortality, or charges but at the cost of slightly higher LOS.

In 2003, the Accreditation Council for Graduate Medical Education (ACGME) mandated resident work hour restrictions without prescribing alternatives to resident‐based care.1 In response, many academic medical centers have developed innovative models for providing inpatient care, some of which incorporate Physician Assistants (PAs).2 With further restrictions in resident work hours possible,3 teaching hospitals may increase their use of these alternate models to provide inpatient care. Widespread implementation of such new and untested models could affect the care of the approximately 20 million hospitalizations that occur every year in US teaching hospitals.4

Few reports have compared the care delivered by these alternate models with the care provided by traditional resident‐based models.5-8 Roy et al.8 provided the only recent comparison of a PA‐based model of care with a resident‐based model; they showed lower adjusted costs of inpatient care with PA‐based care, while other outcomes were similar to those of resident‐based teams.

The objective of this study is to provide a valid and usable comparison of the outcomes of a hospitalist‐PA (H‐PA) model of inpatient care with those of the traditional resident‐based model. This will add to the quantity and quality of the limited research on PA‐based inpatient care and inform the anticipated increase in the involvement of PAs in this arena.

Methods

Study Design and Setting

We conducted a retrospective cohort study at a 430‐bed urban academic medical center in the Midwestern United States.

Models of General Medical (GM) Inpatient Care at the Study Hospital During the Study Period

In November 2004, as a response to the ACGME‐mandated work hour regulations, we formed 2 Hospitalist‐PA teams (H‐PA) to supplement the 6 preexisting general medicine resident teams (RES).

The H‐PA and RES teams differed in staffing, admitting times and weekend/overnight cross coverage structure (Table 1). There were no predesigned differences between the teams in the ward location of their patients, availability of laboratory/radiology services, specialty consultation, social services/case management resources, nursing resources or documentation requirements for admission, daily care, and discharge.

Differences in Structure and Function Between Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams

Characteristic | H‐PA Teams | RES Teams
Attending physician | Always a hospitalist | Hospitalist, non‐hospitalist general internist, or rarely a specialist
Attending physician role | Supervisory for some patients (about half) and sole care provider for others | Supervisory for all patients
Team composition | One attending paired with 1 PA | Attending + senior resident + 2 interns + 2‐3 medical students
Rotation schedule: attending | Every 2 weeks | Every 2 weeks
Rotation schedule: physician assistant | Off on weekends |
Rotation schedule: house staff and medical students |  | Every month
Weekend | No new admissions; hospitalist manages all patients | Accept new admissions
Admission times (weekdays) | 7 AM to 3 PM | Noon to 7 AM
Source of admissions | Emergency room, clinics, other hospitals | Emergency room, clinics, other hospitals
Number of admissions (weekdays) | 4‐6 patients per day per team | Noon to 5 PM: 2 teams admit a maximum of 9 patients total; 5 PM to 7 AM: 3 teams admit a maximum of 5 patients each
Overnight coverage: roles and responsibilities | One in‐house faculty member cross‐covering 2 H‐PA teams, performing triage, admitting patients if necessary, assisting residents if necessary, and providing general medical consultation | 3 on-call interns, each cross‐covering 2 teams and admitting up to 5 patients each

Admission Schedule for H‐PA or RES Teams

The admitting schedule was designed to decrease the workload of the house staff and to do so specifically during the periods of peak educational activity (morning report, attending‐led teaching rounds, and noon report). A faculty admitting medical officer (AMO) assigned patients strictly based on the time an admission was requested. Importantly, the request for admission preceded the time of actual admission recorded when the patient reached the ward. The time difference between request for admission and actual admission depended on the source of admission and the delay associated with assigning a patient room. The AMO assigned 8 to 12 new patients to the H‐PA teams every weekday between 7 AM and 3 PM and to the RES teams between noon and 7 AM the next day. There was a designed period of overlap from noon to 3 PM during which both H‐PA and RES teams could admit patients. This period allowed for flexibility in assigning patients to either type of team depending on their workload. The AMO did not use patient complexity or teaching value to assign patients.

Exceptions to Admission Schedule

Patients admitted overnight after the on-call RES teams had reached their admission limits were assigned to H‐PA teams the next morning. In addition, recently discharged patients who were readmitted while the discharging hospitalist (H‐PA teams) or the discharging resident (RES teams) was still scheduled for inpatient duties were assigned back to the discharging team, irrespective of the admitting schedule.

The same medicine team cared for a patient from admission to discharge, but on transfer to the intensive care unit (ICU), an intensivist-led critical care team assumed care. On transfer out of the ICU, these patients were assigned back to the original team irrespective of the admitting schedule (the so-called bounce-back rule) to promote inpatient continuity of care. However, if the residents (RES teams) or the hospitalist (H‐PA teams) had changed, the bounce-back rule was no longer in effect, and these patients were assigned to a team according to the admission schedule.

Study Population and Study Period

We included all hospitalizations of adult patients to GM teams if both their date of admission and their date of discharge fell within the study period (January 1, 2005 to December 31, 2006). We excluded hospitalizations with admissions during the weekend, when H‐PA teams did not admit patients; hospitalizations to GM services with transfer to a non-GM service (excluding the ICU) and hospitalizations involving comanagement with specialty services, as the contribution of GM teams to these was variable; and hospitalizations of private patients.

Data Collection and Team Assignment

We collected patient data from our hospital's discharge abstract database. This database did not contain team information, so to assign teams we matched the discharging attending and the day of discharge to the type of team that the discharging attending was leading that day.
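
As an illustration only (the authors report using SAS and R, but their code is not published), the following R sketch shows how such a schedule-based team assignment could be implemented; the data frames and column names (`discharges`, `roster`, `discharge_attending`, `service_date`, `team_type`) are hypothetical.

```r
# Illustrative sketch: assign each hospitalization to a team by matching its
# discharging attending and discharge date against a roster recording which
# team type ("H-PA" or "RES") each attending was leading on each calendar day.
library(dplyr)

assign_teams <- function(discharges, roster) {
  discharges %>%
    left_join(roster,
              by = c("discharge_attending" = "attending",
                     "discharge_date"      = "service_date")) %>%
    rename(team = team_type)   # team is NA when no roster match exists
}

# Hospitalizations left with team == NA would be dropped, analogous to the 66
# hospitalizations excluded because team assignment could not be determined.
```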

We collected patient age, gender, race, insurance status, zip‐code, primary care provider, source of admission, ward type, time and day of admission, and time and day of discharge for use as independent variables. The time of admission captured in the database was the time of actual admission and not the time the admission was requested.

We grouped the principal diagnosis International Statistical Classification of Diseases and Related Health Problems, 9th edition (ICD‐9) codes into clinically relevant categories using the Clinical Classification Software.9 We created comorbidity measures using Healthcare Cost and Utilization Project Comorbidity Software, version 3.4.10

Outcome Measures

We used length of stay (LOS), charges, readmissions within 7, 14, and 30 days, and inpatient mortality as our outcome measures. We calculated LOS by subtracting the admission day and time from the discharge day and time. The LOS included time spent in the ICU. We summed all charges accrued during the entire hospitalization, including any stay in the ICU, but did not include professional fees. We considered any repeat hospitalization to our hospital within 7, 14, or 30 days following a discharge to be a readmission, except that we excluded readmissions for a planned procedure or for inpatient rehabilitation.
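
A minimal R sketch of how these outcome variables could be constructed from admission and discharge timestamps follows; it is illustrative only, and the field names (`admit_dt`, `discharge_dt`, `patient_id`) are assumed rather than taken from the study database.

```r
# Illustrative sketch: derive LOS (in days) and 7/14/30-day readmission flags.
library(dplyr)

hosp <- hosp %>%
  mutate(los_days = as.numeric(difftime(discharge_dt, admit_dt, units = "days"))) %>%
  arrange(patient_id, admit_dt) %>%
  group_by(patient_id) %>%
  mutate(days_to_next_admit =
           as.numeric(difftime(lead(admit_dt), discharge_dt, units = "days")),
         readmit_7  = !is.na(days_to_next_admit) & days_to_next_admit <= 7,
         readmit_14 = !is.na(days_to_next_admit) & days_to_next_admit <= 14,
         readmit_30 = !is.na(days_to_next_admit) & days_to_next_admit <= 30) %>%
  ungroup()
# Readmissions for planned procedures or inpatient rehabilitation would be
# flagged and excluded in an additional step, as described above.
```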

Statistical Analysis

Descriptive Analysis

We performed unadjusted descriptive statistics at the level of an individual hospitalization using medians and interquartile ranges for continuous data and frequencies and percentages for categorical data. We used chi-square tests of association and Kruskal-Wallis analysis of variance to compare H‐PA and RES teams.
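
For readers who want a concrete picture of these unadjusted comparisons, a brief R sketch is shown below; it is not the authors' code, and the variable names are illustrative.

```r
# Illustrative sketch: chi-square test for a categorical characteristic and
# Kruskal-Wallis test for a continuous outcome, plus medians and IQRs by team.
chisq.test(table(hosp$team, hosp$insurance))   # e.g., insurance status by team
kruskal.test(los_days ~ team, data = hosp)     # e.g., length of stay by team

tapply(hosp$los_days, hosp$team,
       quantile, probs = c(0.25, 0.50, 0.75), na.rm = TRUE)
```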

Missing Data

Because we lacked data on whether a primary outpatient care provider was available for 284 (2.9%) of our study hospitalizations, we dropped them from our multivariable analyses. We used an arbitrary discharge time of noon for the 11 hospitalizations which did not have a discharge time recorded.

Multivariable Analysis

We used multivariable mixed models to risk adjust for a wide variety of variables. We included age, gender, race, insurance, presence of a primary care physician, and total number of comorbidities as fixed effects in all models because of the high face validity of these variables. We then added admission source, ward, time, day of week, discharge day of week, and comorbidity measures one by one as fixed effects, retaining them only if significant at P < 0.01. For assessing LOS, charges, and readmissions, we added a variable identifying each patient as a random effect to account for multiple admissions for the same patient. We then added variables identifying attending physician, principal diagnostic group, and ZIP code of residence as random effects to account for clustering of hospitalizations within these categories, retaining them only if significant at P < 0.01. For the model assessing mortality, we included variables for attending physician, principal diagnostic group, and ZIP code of residence as random effects if significant at P < 0.01. We log transformed LOS and charges because their distributions were highly skewed. Readmissions were analyzed after excluding patients who died in the hospital and those discharged alive within 7, 14, or 30 days of the end of the study period, for whom complete follow-up was not available.
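
Purely as a sketch of the general approach (not the authors' actual specification), the following R/lme4 code fits a log-LOS model with random intercepts for patient, attending, and diagnostic group, and a logistic mixed model for 30-day readmission; the variable names are illustrative, and the stepwise selection of additional effects described above is not shown.

```r
# Illustrative sketch of the adjusted mixed models.
library(lme4)

fit_los <- lmer(log(los_days) ~ team + age + gender + race + insurance +
                  has_pcp + n_comorbid +
                  (1 | patient_id) + (1 | attending) + (1 | dx_group),
                data = hosp)

fit_readmit30 <- glmer(readmit_30 ~ team + age + gender + race + insurance +
                         has_pcp + n_comorbid + (1 | patient_id),
                       data = hosp, family = binomial)

summary(fit_los)
exp(fixef(fit_readmit30))   # adjusted odds ratios for the fixed effects
```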

Sensitivity Analyses

To assess the influence of LOS outliers, we changed LOS to 6 hours if it was less than 6 hours and to 45 days if it was more than 45 days, a process called winsorizing. We consider winsorizing superior to dropping outliers because it acknowledges that outliers contribute information but prevents them from being too influential. We chose the 6-hour cutoff because we believed that was the minimum time required to admit and then discharge a patient. We chose the upper limit of 45 days on reviewing the frequency distribution for outliers. Similarly, we winsorized charges at the 1st and 99th percentiles after reviewing the frequency distribution for outliers. We then log transformed the winsorized data before analysis.
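
A short R sketch of this winsorizing step, assuming LOS is stored in days and charges in dollars (illustrative variable names), is given below.

```r
# Illustrative sketch: cap LOS at 6 hours (0.25 days) and 45 days, cap charges
# at the 1st and 99th percentiles, then log transform the winsorized values.
hosp$los_w <- pmin(pmax(hosp$los_days, 0.25), 45)

q <- quantile(hosp$charges, probs = c(0.01, 0.99), na.rm = TRUE)
hosp$charges_w <- pmin(pmax(hosp$charges, q[1]), q[2])

hosp$log_los_w     <- log(hosp$los_w)
hosp$log_charges_w <- log(hosp$charges_w)
```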

Inpatient deaths reduce the LOS and charges associated with a hospitalization; thus, excess mortality could create a misleading appearance of lower LOS or charges. To check whether this occurred in our study, we repeated the analyses after excluding inpatient deaths.

ICU stays are associated with higher LOS, charges, and mortality. In our model of care, some patients transferred to the ICU are not cared for by the original team on transfer out. Moreover, care in the ICU is not controlled by the team that discharges them. Since this might obscure differences in outcomes achieved by RES vs. H‐PA teams, we repeated these analyses after excluding hospitalizations with an ICU stay.

Since mortality can only occur during 1 hospitalization per patient, we repeated the mortality analysis using only each patient's first admission or last admission and using a randomly selected single admission for each patient.
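
A brief illustrative R sketch of how one admission per patient could be selected for these sensitivity analyses is shown here (again with assumed variable names).

```r
# Illustrative sketch: keep each patient's first, last, or one random admission.
library(dplyr)
set.seed(42)   # arbitrary seed, for reproducibility of the random selection

first_adm  <- hosp %>% group_by(patient_id) %>% slice_min(admit_dt, n = 1) %>% ungroup()
last_adm   <- hosp %>% group_by(patient_id) %>% slice_max(admit_dt, n = 1) %>% ungroup()
random_adm <- hosp %>% group_by(patient_id) %>% slice_sample(n = 1) %>% ungroup()
```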

Subgroup Analysis

To limit the effect of different physician characteristics on H‐PA and RES teams we separately analyzed the hospitalizations under the care of hospitalists who served on both H‐PA and RES teams.

To limit the effect of different admission schedules of H‐PA and RES teams we analyzed the hospitalizations with admission times between 11.00 AM and 4.00 PM. Such hospitalizations were likely to be assigned during the noon to 3 PM period when they could be assigned to either an H‐PA or RES team.

Interactions

Finally we explored interactions between the type of team and the fixed effect variables included in each model.
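
As a sketch of how such an interaction could be examined (not the authors' actual specification), the R/lme4 code below adds a team-by-admission-time interaction to an adjusted LOS model and compares it with the main-effects model; variable names are illustrative.

```r
# Illustrative sketch: test whether the effect of team assignment on log LOS
# varies by time-of-admission block.
library(lme4)

main_formula <- log(los_days) ~ team + admit_time_block + age + gender + race +
  insurance + has_pcp + n_comorbid +
  (1 | patient_id) + (1 | attending) + (1 | dx_group)

fit_main     <- lmer(main_formula, data = hosp)
fit_interact <- update(fit_main, . ~ . + team:admit_time_block)

anova(fit_main, fit_interact)   # likelihood-ratio comparison of the two models
```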

Statistical Software

We performed the statistical analysis using SAS software version 9.0 for UNIX (SAS Institute, Inc., Cary, NC) and R software (The R Project for Statistical Computing).

This study protocol was approved by the hospital's institutional review board.

Results

Study Population

Of the 52,391 hospitalizations to our hospital during the study period, 13,058 were to general medicine. We excluded 3102 weekend admissions and 209 hospitalizations that met other exclusion criteria, and we could not determine the team assignment for 66. Of the remaining 9681 hospitalizations, we assigned 2171 to H‐PA teams and 7510 to RES teams (Figure 1).

Figure 1
Study population (H‐PA, hospitalist‐physician assistant team; RES, traditional resident team).

Descriptive Analysis

We compare patients assigned to H‐PA and RES teams in Table 2. They were similar in age, gender, race, having a primary care provider or not, and insurance status. Clinically, they had similar comorbidities and a similar distribution of common principal diagnoses. Consistent with their admitting schedule, H‐PA teams admitted and discharged more patients earlier in the day and admitted more patients earlier in the work week. Patients cared for by H‐PA teams were admitted from the Emergency Room (ER) less often and were more likely to reside on wards designated as nonmedicine by nursing specialty. Hospitalizations to H‐PA teams more often included an ICU stay.

Characteristics of Hospitalizations to Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams

Characteristic | H‐PA (n = 2171) | RES (n = 7510) | P Value
Age |  |  |
  Mean | 56.80 | 57.04 |
  Median | 56 | 56 | 0.15
  Interquartile range | 43‐72 | 43‐73 |
Age group (years), n (%) |  |  | 0.28
  <20 | 10 (0.5) | 57 (0.8) |
  20‐29 | 186 (8.6) | 632 (8.7) |
  30‐39 | 221 (10.2) | 766 (10.3) |
  40‐49 | 387 (17.8) | 1341 (18.1) |
  50‐59 | 434 (20.0) | 1492 (20.2) |
  60‐69 | 325 (15.0) | 974 (12.8) |
  70‐79 | 271 (12.5) | 1035 (13.6) |
  80‐89 | 262 (12.0) | 951 (12.3) |
  90 and older | 75 (3.5) | 262 (3.4) |
Female, n (%) | 1175 (54.1) | 4138 (55.1) | 0.42
Race, n (%) |  |  | 0.98
  White | 1282 (59.1) | 4419 (58.9) |
  Black | 793 (36.5) | 2754 (36.7) |
  Other | 96 (4.4) | 337 (4.5) |
Primary care provider, n (%) |  |  | 0.16
  Yes | 1537 (73.2) | 5451 (74.7) |
  Missing (n = 284) | 71 (3.3) | 213 (2.8) |
Insurance status, n (%) |  |  | 0.52
  Commercial/worker's comp | 440 (20.3) | 1442 (19.2) |
  Medicare | 1017 (46.8) | 3589 (47.8) |
  Medicaid/other | 714 (32.9) | 2479 (33.0) |
Time of admission, n (%) |  |  | <0.001
  0000‐0259 | 167 (7.7) | 1068 (14.2) |
  0300‐0559 | 244 (11.2) | 485 (6.5) |
  0600‐0859 | 456 (21.0) | 270 (3.6) |
  0900‐1159 | 782 (36.0) | 1146 (15.3) |
  1200‐1459 | 299 (13.8) | 1750 (23.3) |
  1500‐1759 | 155 (7.1) | 1676 (22.3) |
  1800‐2359 | 68 (3.1) | 1115 (14.9) |
Time of discharge, n (%) |  |  | <0.001
  2100‐0859 | 36 (1.7) | 174 (2.3) |
  0900‐1159 | 275 (12.7) | 495 (6.6) |
  1200‐1459 | 858 (39.6) | 2608 (34.8) |
  1500‐1759 | 749 (34.6) | 3122 (41.6) |
  1800‐2059 | 249 (11.5) | 1104 (14.7) |
  Missing | 4 | 7 |
Day of week of admission, n (%) |  |  | 0.001
  Monday | 462 (21.3) | 1549 (20.6) |
  Tuesday | 499 (23.0) | 1470 (19.6) |
  Wednesday | 430 (19.8) | 1479 (19.7) |
  Thursday | 400 (18.4) | 1482 (19.7) |
  Friday | 380 (17.5) | 1530 (20.4) |
Day of week of discharge, n (%) |  |  | 0.16
  Monday | 207 (9.5) | 829 (11.0) |
  Tuesday | 268 (12.3) | 973 (13.0) |
  Wednesday | 334 (15.4) | 1142 (15.2) |
  Thursday | 362 (16.7) | 1297 (17.3) |
  Friday | 485 (22.3) | 1523 (20.3) |
  Saturday | 330 (15.2) | 1165 (15.5) |
  Sunday | 185 (8.5) | 581 (7.7) |
Admitted to non‐medicine wards, n (%) | 1332 (61.4) | 2624 (34.9) | <0.001
Transfer to ICU (at least once), n (%) | 299 (13.8) | 504 (6.7) | <0.001
Admitted from ER, n (%) | 1663 (76.6) | 6063 (80.7) | <0.001
10 most frequent principal diagnoses, % |  |  |
  1 | Pneumonia (4.9) | Pneumonia (5.5)
  2 | Congestive heart failure, nonhypertensive (4.2) | Congestive heart failure, nonhypertensive (3.9)
  3 | Sickle cell anemia (3.9) | Nonspecific chest pain (3.7)
  4 | Chronic obstructive pulmonary disease and bronchiectasis (3.3) | Urinary tract infections (3.6)
  5 | Diabetes mellitus with complications (3.2) | Skin and subcutaneous tissue infections (3.3)
  6 | Urinary tract infections (3.2) | Sickle cell anemia (3.3)
  7 | Asthma (3.0) | Pancreatic disorders (not diabetes) (2.8)
  8 | Nonspecific chest pain (3.0) | Asthma (2.8)
  9 | Pancreatic disorders (not diabetes) (2.9) | Chronic obstructive pulmonary disease and bronchiectasis (2.6)
  10 | Septicemia (2.2) | Diabetes mellitus with complications (2.6)
Average number of comorbidities, mean (95% CI) | 0.39 (0.37‐0.42) | 0.38 (0.36‐0.39) | 0.23

Abbreviations: CI, confidence interval; ER, emergency room; H‐PA, hospitalist‐physician assistant; ICU, intensive care unit; RES, traditional resident.

In unadjusted comparisons of outcomes (Table 3), hospitalizations on H‐PA teams had longer lengths of stay and higher charges than hospitalizations on RES teams, possibly higher inpatient mortality, and similar unadjusted readmission rates at 7, 14, and 30 days.

Unadjusted Comparison of Outcomes of Hospitalization to Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams

Outcome | H‐PA (n = 2171) | RES (n = 7510) | % Difference* or Odds Ratio (CI) | P Value
LOS, days, median (IQR) | 3.17 (2.03‐5.30) | 2.99 (1.80‐5.08) | +8.9% (4.71‐13.29%) | <0.001
Charges, US dollars, median (IQR) | 9390 (6196‐16,239) | 9044 (6106‐14,805) | +5.56% (1.96‐9.28%) | 0.002
Readmission within 7 days, n (%) | 147 (6.96) | 571 (7.78) | 0.88 (0.73‐1.06) | 0.19
Readmission within 14 days, n (%) | 236 (11.34) | 924 (12.76) | 0.87 (0.75‐1.01) | 0.07
Readmission within 30 days, n (%) | 383 (18.91) | 1436 (20.31) | 0.91 (0.80‐1.03) | 0.14
Inpatient deaths, n (%) | 39 (1.8) | 95 (1.3) | 1.36 (0.90‐2.00) | 0.06

Abbreviations: CI, 95% confidence interval; IQR, interquartile range; LOS, length of stay. RES is the reference group. *On comparing log-transformed data.

Multivariable Analysis

LOS

Hospitalizations to H‐PA teams were associated with a 6.73% longer LOS (P = 0.005) (Table 4). This difference persisted when we used the winsorized data (6.45% increase, P = 0.006), excluded inpatient deaths (6.81% increase, P = 0.005), or excluded hospitalizations that involved an ICU stay (6.40% increase, P = 0.011) (Table 5).

Adjusted Comparison of Outcomes of Hospitalization to Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams (RES Is the Reference Group)

Outcome | Overall | Subgroup: Physicians Attending on Both H‐PA and RES Teams* | Subgroup: Hospitalizations Between 11.00 AM and 4.00 PM†
LOS, % difference (CI), P value | 6.73% (1.99% to 11.70%), P = 0.005 | 5.44% (-0.65% to 11.91%), P = 0.08 | 2.97% (-4.47% to 10.98%), P = 0.44
Charges, % difference (CI), P value | 2.75% (-1.30% to 6.97%), P = 0.19 | 1.55% (-3.76% to 7.16%), P = 0.57 | 6.45% (-0.62% to 14.03%), P = 0.07
Readmission within 7 days, adjusted OR (95% CI), P value | 0.88 (0.64‐1.20), P = 0.42 | 0.74 (0.40‐1.35), P = 0.32 | 0.90 (0.40‐2.00), P = 0.78
Readmission within 14 days, adjusted OR (95% CI), P value | 0.90 (0.69‐1.19), P = 0.46 | 0.71 (0.51‐0.99), P = 0.05 | 0.87 (0.36‐2.13), P = 0.77
Readmission within 30 days, adjusted OR (95% CI), P value | 0.89 (0.75‐1.06), P = 0.20 | 0.75 (0.51‐1.08), P = 0.12 | 0.92 (0.55‐1.54), P = 0.75
Inpatient mortality, adjusted OR (95% CI), P value | 1.27 (0.82‐1.97), P = 0.28 | 1.46 (0.67‐3.17), P = 0.33 | 1.14 (0.47‐2.74), P = 0.77

Abbreviations: CI, 95% confidence interval; LOS, length of stay; OR, odds ratio. *Number of observations included in subgroup ranges from 2992 to 3196. †Number of observations included in subgroup ranges from 3174 to 3384.
Sensitivity Analysis: Adjusted Comparison of Outcomes of Hospitalization to Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams (RES Is the Reference Group)

Outcome | Analysis With Winsorized Data | Analysis After Excluding Inpatient Deaths | Analysis After Excluding Patients With ICU Stays
LOS, % difference (CI), P value | 6.45% (4.04 to 8.91%), P = 0.006 | 6.81% (2.03 to 11.80%), P = 0.005 | 6.40% (1.46 to 11.58%), P = 0.011
Charges, % difference (CI), P value | 2.67% (-1.27 to 6.76%), P = 0.187 | 2.89% (-1.16 to 7.11%), P = 0.164 | 0.74% (-3.11 to 4.76%), P = 0.710

Abbreviations: CI, 95% confidence interval; ICU, intensive care unit; LOS, length of stay.

Charges

Hospitalizations to H‐PA and RES teams were associated with similar charges (Table 4). The results were similar when we used winsorized data, excluded inpatient deaths or excluded hospitalizations involving an ICU stay (Table 5).

Readmissions

The risk of readmission at 7, 14, and 30 days was similar between hospitalizations to H‐PA and RES teams (Table 4).

Mortality

The risk of inpatient death was similar between hospitalizations to H‐PA and RES teams, both overall and when restricted to hospitalizations without an ICU stay (Table 4). The results also remained the same in analyses restricted to first admissions, last admissions, or 1 randomly selected admission per patient.

Subgroup Analysis

On restricting the multivariable analyses to the subset of hospitalists who staffed both types of teams (Table 4), the increase in LOS associated with H‐PA care was no longer significant (5.44% higher, P = 0.081). The charges, risk of readmission at 7 and 30 days, and risk of inpatient mortality remained similar. The risk of readmission at 14 days was slightly lower following hospitalizations to H‐PA teams (odds ratio 0.71, 95% confidence interval [CI] 0.51‐0.99).

The increase in LOS associated with H‐PA care was further attenuated in analyses of the subset of admissions between 11.00 AM and 4.00 PM (2.97% higher, P = 0.444). The difference in charges approached significance (6.45% higher, P = 0.07), but risk of readmission at 7, 14, and 30 days and risk of inpatient mortality were no different (Table 4).

Interactions

On adding interaction terms between the team assignment and the fixed effect variables in each model we detected that the effect of H‐PA care on LOS (P < 0.001) and charges (P < 0.001) varied by time of admission (Figure 2a and b). Hospitalizations to H‐PA teams from 6.00 PM to 6.00 AM had greater relative increases in LOS as compared to hospitalizations to RES teams during those times. Similarly, hospitalizations during the period 3.00 PM to 3.00 AM had relatively higher charges associated with H‐PA care compared to RES care.

Figure 2
(A) Relative difference in length of stay associated with care by H‐PA teams by time of admission (in percent change with RES as reference). (B) Relative difference in charges associated with care by H‐PA teams by time of admission (in percent with RES as reference). Abbreviations: H‐PA, hospitalist‐physician assistant team; RES, traditional resident team. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

Discussion

We found that hospitalizations to our H‐PA teams had longer LOS but similar charges, readmission rates, and mortality as compared to traditional resident‐based teams. These findings were robust to multiple sensitivity and subgroup analyses but when we examined times when both types of teams could receive admissions, the difference in LOS was markedly attenuated and nonsignificant.

We note that most prior reports comparing PA‐based models of inpatient care predate the ACGME work hour regulations. In a randomized controlled trial (1987‐1988), Simmer et al.5 showed lower lengths of stay and charges but a possibly higher risk of readmission for PA‐based teams as compared with resident‐based teams. Van Rhee et al.7 conducted a nonrandomized retrospective cohort study (1994‐1995) using administrative data that showed lower resource utilization for PA‐based inpatient care. Our results from 2005 to 2006 reflect the important changes in the organization and delivery of inpatient care since these previous investigations.

Roy et al.8 report the only previously published comparison of PA- and resident-based GM inpatient care after the ACGME-mandated work hour regulations. They found that PA‐based care was associated with lower costs, whereas we found similar charges for admissions to RES and H‐PA teams. They also found that LOS was similar for PA- and resident-based care, while we found a higher LOS for admissions to our H‐PA teams. We note that although the design of Roy's study was similar to our own, patients cared for by PA‐based teams were geographically localized in their model. This may contribute to the differences in results between our studies.

Despite no designed differences, other than time of admission, in the patients assigned to either type of team, we noted some differences between the H‐PA and RES teams in the descriptive analysis. These differences, such as a higher proportion of hospitalizations to H‐PA teams being admitted from the ER, residing on nonmedicine wards, or including an ICU stay, are likely a result of our system of assigning admissions to H‐PA teams early during the workday. For example, patients on H‐PA teams were more often located on nonmedicine wards as a result of later discharges and bed availability on medicine wards. The difference that deserves special comment is the much higher proportion (13.8% vs. 6.7%) of hospitalizations with an ICU stay on the H‐PA teams. Hospitalizations directly to the ICU were excluded from our study, which means that the hospitalizations with an ICU stay in our study were initially admitted to either H‐PA or RES teams and then transferred to the ICU. Transfers out of the ICU usually occurred early in the workday, when H‐PA teams accepted patients per our admission schedule. These patients may have been preferentially assigned to H‐PA teams if, on returning from the ICU, the original team's resident had changed (and the bounce-back rule was not in effect). Importantly, the conclusions of our research are not altered on controlling for this difference between the teams by excluding hospitalizations with an ICU stay.

Hospitalizations to H‐PA teams were associated with higher resource utilization if they occurred later in the day or overnight (Figure 2a and b). During these times, a transition of care occurred shortly after admission: for a late-day admission, the H‐PA team would hand off care for overnight cross coverage soon after the admission, and for patients admitted overnight as overflow, the H‐PA team would assume care from the nighttime covering physician who performed the admission. On RES teams, by contrast, interns admitting patients overnight continued to care for their patients for part of the following day (30-hour call). Similar findings of higher resource utilization associated with a transfer of care shortly after admission in the daytime11 and nighttime12 have been reported previously. An alternative hypothesis is that the hospital may be busier, and thus less efficient, during the times when H‐PA teams had to admit later in the day or accept patients admitted overnight as overflow. Future research to determine the cause of this significant interaction between team assignment and time of admission is important, as the large increases in LOS (up to 30%) and charges (up to 50%) noted could have a substantial impact if a higher proportion of hospitalizations were affected by this phenomenon.

Our H‐PA teams were assigned patients as complex as those assigned to our RES teams, in contrast to previous reports.8,13 This was accomplished while improving the residents' educational experience: we have previously reported increases in our residents' board pass rates and in-service training exam scores with the introduction of our H‐PA teams.14 We thus believe that selecting less complex patients for H‐PA teams such as ours is unnecessary and may relegate them to second-tier status in academic settings.

Our report has limitations. It is a retrospective, nonrandomized investigation using a single institution's administrative database; it cannot account for unmeasured confounders, severity of illness, errors in the database, or selection bias, and it has limited generalizability. We measured charges, not actual costs,15 but we believe charges are a fair reflection of relative resource use when compared between similar patients within a single institution. We also did not account for readmissions to other hospitals,16 and our results do not reflect resource utilization for the healthcare system as a whole. For example, we could not tell whether the higher LOS on H‐PA teams resulted in fewer readmissions for their patients across all hospitals in the region, which might reveal an overall resource savings. Additionally, we measured in-hospital mortality and could not capture deaths related to hospital care that may occur shortly after discharge.

The ACGME has proposed revised standards that may further restrict resident duty hours when they take effect in July 2011.3 This may lead to further decreases in resident-based inpatient care, and teaching hospitals will need to continue to develop alternate models of inpatient care that do not depend on house staff. Our findings provide evidence to inform the development of such models. Our study shows that one such model, PAs paired with hospitalists, accepting admissions early in the workday, with hospitalist coverage on weekends and overnight, can care for GM inpatients as complex as those cared for by resident-based teams without increasing readmission rates, inpatient mortality, or charges, but at the cost of a slightly higher LOS.

References
  1. ACGME. Common Program Requirements for Resident Duty Hours. Available at: http://www.acgme.org/acWebsite/dutyHours/dh_ComProgrRequirmentsDutyHours0707.pdf. Accessed July 2010.
  2. Sehgal NL, Shah HM, Parekh VI, Roy CL, Williams MV. Non-housestaff medicine services in academic centers: models and challenges. J Hosp Med. 2008;3(3):247-255.
  3. ACGME. Duty Hours: Proposed Standards for Review and Comment. Available at: http://acgme‐2010standards.org/pdf/Proposed_Standards.pdf. Accessed July 22, 2010.
  4. Agency for Health Care Policy and Research. HCUPnet: a tool for identifying, tracking, and analyzing national hospital statistics. Available at: http://hcup.ahrq.gov/HCUPnet.asp. Accessed July 2010.
  5. Simmer TL, Nerenz DR, Rutt WM, Newcomb CS, Benfer DW. A randomized, controlled trial of an attending staff service in general internal medicine. Med Care. 1991;29(7 suppl):JS31-JS40.
  6. Dhuper S, Choksi S. Replacing an academic internal medicine residency program with a physician assistant-hospitalist model: a comparative analysis study. Am J Med Qual. 2009;24(2):132-139.
  7. Rhee JV, Ritchie J, Eward AM. Resource use by physician assistant services versus teaching services. JAAPA. 2002;15(1):33-42.
  8. Roy CL, Liang CL, Lund M, et al. Implementation of a physician assistant/hospitalist service in an academic medical center: impact on efficiency and patient outcomes. J Hosp Med. 2008;3(5):361-368.
  9. AHRQ. Clinical Classifications Software (CCS) for ICD-9-CM. Available at: http://www.hcup‐us.ahrq.gov/toolssoftware/ccs/ccs.jsp#overview. Accessed July 2010.
  10. AHRQ. HCUP Comorbidity Software, Version 3.4. Available at: http://www.hcup‐us.ahrq.gov/toolssoftware/comorbidity/comorbidity.jsp. Accessed July 2010.
  11. Schuberth JL, Elasy TA, Butler J, et al. Effect of short call admission on length of stay and quality of care for acute decompensated heart failure. Circulation. 2008;117(20):2637-2644.
  12. Lofgren RP, Gottlieb D, Williams RA, Rich EC. Post-call transfer of resident responsibility: its effect on patient care. J Gen Intern Med. 1990;5(6):501-505.
  13. O'Connor AB, Lang VJ, Lurie SJ, Lambert DR, Rudmann A, Robbins B. The effect of nonteaching services on the distribution of inpatient cases for internal medicine residents. Acad Med. 2009;84(2):220-225.
  14. Singh S, Petkova JH, Gill A, et al. Allowing for better resident education and improving patient care: hospitalist-physician assistant teams fill in the gaps. J Hosp Med. 2007;2(S2):139.
  15. Finkler SA. The distinction between cost and charges. Ann Intern Med. 1982;96(1):102-109.
  16. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare Fee-for-Service Program. N Engl J Med. 2009;360(14):1418-1428.
Issue
Journal of Hospital Medicine - 6(3)
Page Number
122-130
Display Headline
A comparison of outcomes of general medical inpatient care provided by a hospitalist‐physician assistant model vs a traditional resident‐based model
Legacy Keywords
education, outcomes measurement, physician assistant, resident
Article Source
Copyright © 2011 Society of Hospital Medicine