Marilyn M. Schapira, MD, MPH
Department of Internal Medicine, Medical College of Wisconsin, Milwaukee, Wisconsin; Clement J. Zablocki VAMC, Milwaukee, Wisconsin

When Reducing Low-Value Care in Hospital Medicine Saves Money, Who Benefits?


Physicians face growing pressure to reduce their use of “low value” care—services that provide either little to no benefit, little benefit relative to cost, or outsized potential harm compared to benefit. One emerging policy solution for deterring such services is to financially penalize physicians who prescribe them.1,2

Physicians’ willingness to support such policies may depend on who they believe benefits from reductions in low-value care. In previous studies of cancer screening, the more that primary care physicians felt that the money saved from cost-containment efforts went to insurance company profits rather than to patients, the less willing they were to use less expensive cancer screening approaches.3

Similarly, physicians may be more likely to support financial penalty policies if they perceive that the benefits from reducing low-value care accrue to patients (eg, lower out-of-pocket costs) rather than insurers or hospitals (eg, profits and salaries of their leaders). If present, such perceptions could inform incentive design. We explored the hypothesis that support of financial penalties for low-value care would be associated with where physicians thought the money goes.

METHODS

Study Sample

Using a panel of internists maintained by the American College of Physicians, we conducted a randomized, web-based survey of 484 physicians who were either internal medicine residents or internal medicine physicians practicing hospital medicine.

Survey Instrument

Respondents used a 5-point scale (“strongly disagree” to “strongly agree”) to indicate their agreement with a policy that financially penalizes physicians for prescribing services that provide few benefits to patients. Respondents were asked to simultaneously consider the following hospital medicine services, deemed to be low value based on medical evidence and consensus guidelines4: (1) placing, and leaving in, urinary catheters for urine output monitoring in noncritically ill patients, (2) ordering continuous telemetry monitoring for nonintensive care unit patients without a protocol governing continuation, and (3) prescribing stress ulcer prophylaxis for medical patients not at a high risk for gastrointestinal complications. Policy support was defined as “somewhat” or “strongly” agreeing with the policy. As part of another study of this physician cohort, this question varied in how the harm of low-value services was framed: either as harm to patients, to society, or to hospitals and insurers as institutions. Respondent characteristics were balanced across survey versions, and for the current analysis, we pooled responses across all versions.

All other questions in the survey, described in detail elsewhere,5 were identical for all respondents. For this analysis, we focused on a question that asked physicians to assume that reducing these services saves money without harming the quality of care and to rate on a 4-point scale (“none” to “a lot”) how much of the money saved would ultimately go to the following 6 nonmutually exclusive areas: (a) other healthcare services for patients, (b) reduced charges to patients’ employers or insurers, (c) reduced out-of-pocket costs for patients, (d) salaries and bonuses for physicians, (e) salaries and profits for insurance companies and their leaders, and (f) salaries and profits for hospitals and/or health systems and their leaders.

Based on the positive correlation identified between the first 4 items (a to d) and negative correlation with the other 2 items (e and f), we reverse-coded the latter 2 and summed all 6 into a single-outcome scale, effectively representing the degree to which the money saved from reducing low-value services accrues generally to patients or physicians instead of to hospitals, insurance companies, and their leaders. The Cronbach alpha for the scale was 0.74, indicating acceptable reliability. Based on scale responses, we dichotomized respondents at the median into those who believe that the money saved from reducing low-value services would accrue as benefits to patients or physicians and those who believe benefits accrue to insurance companies or hospitals and/or health systems and their leaders. The protocol was exempted by the University of Pennsylvania Institutional Review Board.
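
As an illustration of this scale construction, the Python sketch below uses hypothetical item responses and column names (the actual survey items are listed above as a through f): it reverse-codes items e and f on the 1-4 response scale, sums the six items into the belief scale, computes Cronbach's alpha, and splits respondents at the median. How ties at the median were assigned is an assumption here, not a detail reported in the study.

```python
import pandas as pd

# Hypothetical responses: one row per respondent, items (a)-(f) rated 1 ("none") to 4 ("a lot").
items = pd.DataFrame({
    "a_other_services":   [3, 2, 4, 1, 2],
    "b_reduced_charges":  [2, 2, 3, 1, 2],
    "c_lower_oop_costs":  [2, 1, 3, 1, 1],
    "d_physician_pay":    [1, 1, 2, 1, 1],
    "e_insurer_profits":  [4, 4, 2, 3, 4],
    "f_hospital_profits": [3, 4, 2, 4, 4],
})

# Reverse-code the two items (e and f) that correlated negatively with the others.
reversed_items = items.copy()
for col in ["e_insurer_profits", "f_hospital_profits"]:
    reversed_items[col] = 5 - reversed_items[col]

def cronbach_alpha(x: pd.DataFrame) -> float:
    """Cronbach's alpha from item variances and the variance of the summed score."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(ddof=1).sum() / x.sum(axis=1).var(ddof=1))

# Sum all six (reverse-coded) items into the single belief scale ...
scale = reversed_items.sum(axis=1)
alpha = cronbach_alpha(reversed_items)

# ... and dichotomize respondents at the median (tie handling is an assumption).
believes_patients_benefit = scale > scale.median()
print(alpha, scale.tolist(), believes_patients_benefit.tolist())
```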


Statistical Analysis

We used a χ2 test and multivariable logistic regression analysis to evaluate the association between policy support and physician beliefs about who benefits from reductions in low-value care. A χ2 test and a Kruskal-Wallis test were also used to evaluate the association between other respondent characteristics and beliefs about who benefits from reductions in low-value care. Analyses were performed by using Stata version 14.1 (StataCorp, College Station, TX). Tests of significance were 2-tailed at an alpha of .05.
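
A sketch of how analyses of this kind might be run is shown below. The data are simulated rather than drawn from the study, and the variable names and covariate set are illustrative assumptions; the actual adjusted models are reported in the Table.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency, kruskal
import statsmodels.api as sm

# Simulated respondent-level data; the real analysis used the survey responses.
rng = np.random.default_rng(0)
n = 187
df = pd.DataFrame({
    "supports_penalty": rng.integers(0, 2, n),           # 1 = somewhat/strongly agrees with the policy
    "believes_patients_benefit": rng.integers(0, 2, n),  # 1 = above-median belief-scale score
    "age": rng.normal(41, 8, n),
    "female": rng.integers(0, 2, n),
    "attending": rng.integers(0, 2, n),
})

# Chi-square test of the unadjusted association between policy support and beliefs.
table = pd.crosstab(df["supports_penalty"], df["believes_patients_benefit"])
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Multivariable logistic regression; covariates here are illustrative.
X = sm.add_constant(df[["believes_patients_benefit", "age", "female", "attending"]])
fit = sm.Logit(df["supports_penalty"], X).fit(disp=0)
odds_ratio = np.exp(fit.params["believes_patients_benefit"])

# Kruskal-Wallis test comparing a respondent characteristic (age) across belief groups.
groups = [g["age"].to_numpy() for _, g in df.groupby("believes_patients_benefit")]
h_stat, p_kw = kruskal(*groups)

print(p_chi2, odds_ratio, p_kw)
```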

RESULTS

Compared with nonrespondents, the 187 physicians who responded (39% response rate) were more likely to be female (30% vs 26%, P = 0.001), older (mean age 41 vs 36 years old, P < 0.001), and practicing clinicians rather than internal medicine residents (87% vs 69%, P < 0.001). Twenty-one percent reported that their personal compensation was tied to cost incentives.

Overall, respondents believed that more of any money saved from reducing low-value services would go to profits and leadership salaries for insurance companies and hospitals and/or health systems rather than to patients (panel A of Figure). Few respondents felt that the money saved would ultimately go toward physician compensation.

Physician beliefs about where the majority of any money saved goes were associated with policy support (panel B of Figure). Among those who did not support penalties, 52% believed that the majority of any money saved would go to salaries and profits for insurance companies and their leaders, and 39% believed it would go to salaries and profits for hospitals and/or health systems and their leaders, compared to 35% (P = 0.02) and 32% (P = 0.37), respectively, among physicians who supported penalties.

Sixty-six percent of physicians who supported penalties believed that benefits from reducing low-value care accrue to patients or physicians, compared to 39% among those not supporting penalties (P < 0.001). In multivariable analyses, policy support was associated with the belief that the money saved from reducing low-value services would accrue as benefits to patients or physicians rather than as salaries and profits for insurance companies or hospitals and/or health systems and their leaders (Table). There were no statistically significant associations between respondent age, gender, or professional status and beliefs about who benefits from reductions in low-value care.

DISCUSSION

Despite ongoing efforts to highlight how reducing low-value care benefits patients, physicians in our sample did not believe that much of the money saved would benefit patients.

This result may reflect that while some care patterns are considered low value because they provide little benefit at a high cost, others yield potential harm, regardless of cost. For example, limiting stress ulcer prophylaxis largely aims to avoid clinical harm (eg, adverse drug effects and nosocomial infections). Limiting telemetric monitoring largely aims to reduce costly care that provides only limited benefit. Therefore, the nature of potential benefit to patients is very different—improved clinical outcomes in the former and potential cost savings in the latter. Future studies could separately assess physician attitudes about these 2 different definitions of low-value services.

Our study also demonstrates that the more physicians believe that much of any money saved goes to the profits and salaries of insurance companies, hospitals and/or health systems, and their leaders rather than to patients, the less likely they are to support policies financially penalizing physicians for prescribing low-value services.

Our study does not address why physicians hold these beliefs, but a likely partial explanation is that financial flows in healthcare are complex and tangled. Indeed, who actually benefits is so hard to determine that these stated beliefs may derive more from views of power or justice than from any real understanding of funds flow. Whether or not ideological attitudes underlie these expressed beliefs, policymakers and healthcare institutions would be well advised to increase transparency about how cost savings are realized and whom they benefit.

Our analysis has limitations. Although it provides insight into where physicians believe relative amounts of money saved go with respect to 6 common options, the study did not include an exhaustive list of possibilities. The response rate also limits the representativeness of our results. Additionally, the study design prevents conclusions about causality; we cannot determine whether the belief that savings go to insurance companies and their executives is what reduces physicians’ enthusiasm for penalties, whether the causal association is in the opposite direction, or whether the 2 factors are linked in another way.

Nonetheless, our findings are consistent with a sense of healthcare justice in which physicians support penalties imposed on themselves only if the resulting benefits accrue to patients rather than to corporate or organizational interests. Effective physician penalties will likely need to address the belief that insurers and provider organizations stand to gain more than patients when low-value care services are reduced.


Disclosure 

Drs. Liao, Schapira, Mitra, and Weissman have no conflicts to disclose. Dr. Navathe serves as an advisor to Navvis and Company, Navigant Inc., Lynx Medical, Indegene Inc., and Sutherland Global Services and receives an honorarium from Elsevier Press, none of which has a relationship to this manuscript. Dr. Asch is a partner and partial owner of VAL Health, which has no relationship to this manuscript.


Funding

This work was supported by The Leonard Davis Institute of Health Economics at the University of Pennsylvania, which had no role in the study design, data collection, analysis, or interpretation of results.

References

1. Berwick DM. Avoiding overuse – the next quality frontier. Lancet. 2017;390(10090):102-104.
2. Centers for Medicare and Medicaid Services. CMS response to Public Comments on Non-Recommended PSA-Based Screening Measure. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/Downloads/eCQM-Development-and-Maintenance-for-Eligible-Professionals_CMS_PSA_Response_Public-Comment.pdf. Accessed September 18, 2017.
3. Asch DA, Jepson C, Hershey JC, Baron J, Ubel PA. When Money is Saved by Reducing Healthcare Costs, Where Do US Primary Care Physicians Think the Money Goes? Am J Manag Care. 2003;9(6):438-442.
4. Society of Hospital Medicine. Choosing Wisely. https://www.hospitalmedicine.org/choosingwisely. Accessed September 18, 2017.
5. Liao JM, Navathe AS, Schapira MM, Weissman A, Mitra N, Asch DA. Penalizing Physicians for Low Value Care in Hospital Medicine: A Randomized Survey. J Hosp Med. 2017. (In press).

Journal of Hospital Medicine 13(1):45-48. Published online first November 22, 2017.

© 2018 Society of Hospital Medicine

Correspondence: Joshua M. Liao, MD, MSc, UWMC Health Sciences, BB 1240, 1959 NE Pacific Street, Seattle, WA 98195; Telephone: 206-616-6934; Fax: 206-616-1895; E-mail: joshliao@uw.edu

Defining and measuring the effort needed for inpatient medicine work

In internal medicine residency training, the most commonly used metric for measuring workload of physicians is the number of patients being followed or the number being admitted. There are data to support the importance of these census numbers. One study conducted at an academic medical center demonstrated that for patients admitted to medical services, the number of patients admitted on a call night was positively associated with mortality, even after adjustment in multivariable models.1

The problem with a census is that it is only a rough indicator of the amount of work that a given intern or resident will have. In a focus group study that our group conducted with internal medicine residents, several contributors to patient care errors were identified; workload was a major one.2 In describing workload, residents cited not only census but also patient complexity as contributing factors.

A more comprehensive method than relying on census data has been used in anesthesia.3, 4 In 2 studies, anesthesiologists were asked to rate the effort or intensity associated with the tasks that they performed in the operating room.4, 5 In subsequent studies, this group used a trained observer to record the tasks anesthesiologists performed during a case.6, 7 Work density was calculated by multiplying the duration of each task by the previously developed task intensity score. In this way, work per unit of time can be calculated as can a cumulative workload score for a certain duration of time.
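
As a rough illustration of that calculation, the sketch below is not code from the cited anesthesia studies; the task names, durations, and intensity scores are hypothetical. It multiplies each observed task's duration by its intensity score, then reports both a cumulative workload score and the work density per minute of observation.

```python
from dataclasses import dataclass

@dataclass
class ObservedTask:
    name: str
    duration_min: float  # observed time spent on the task
    intensity: float     # previously derived effort/intensity score for that task type

def workload_summary(tasks: list[ObservedTask]) -> tuple[float, float]:
    """Return (cumulative workload, work density per minute) for one observation period."""
    cumulative = sum(t.duration_min * t.intensity for t in tasks)
    total_time = sum(t.duration_min for t in tasks)
    density = cumulative / total_time if total_time > 0 else 0.0
    return cumulative, density

# Hypothetical observation period for one admitting physician.
period = [
    ObservedTask("writing orders", 12.0, 3.2),
    ObservedTask("placing a central line", 25.0, 5.0),
    ObservedTask("using the internet", 5.0, 1.4),
]
print(workload_summary(period))
```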

These methods provide the background for the work that we conducted in this study. The purpose of this study was to assign a task effort score to the tasks performed during periods that include admitting patients to the hospital.

METHODS

Study Site

The study was conducted at a single 500-bed Midwest academic institution. Residents rotate through 3 hospitals (a private community hospital, a Veterans Affairs hospital, and an academic medical center) during a typical 3-year internal medicine residency program.

Study Design and Subjects

A cross-sectional survey was conducted. Subjects recruited for the survey included internal medicine interns and residents, internal medicine ward attending physicians, and hospitalists. Attending physicians had to have been on the wards in the past year. The survey was conducted in November, when all eligible house staff should have had at least 1 ward month. Nearly every hospitalist recruited had spent time on both teaching and nonteaching services.

Task List Compilation and Survey Development

An expert panel was convened consisting of 10 physicians representing 3 hospitals, including residents and faculty, some of whom were hospitalists. During the session, the participants developed a task list and discussed the work intensity associated with some of the tasks. The task list was reviewed by the study team and organized into categories. The final list included 99 tasks divided into 6 categories: (1) direct patient care, (2) indirect patient care, (3) search for/finding things, (4) educational/academic activities, (5) personal/downtime activities, and (6) other. Table 1 gives examples of items found in each category. We used the terminology that the study participants used to describe their work (eg, they used the term "eyeballing a patient" to describe the process of making an initial assessment of the patient's status). This list of 99 items was formatted into a survey to allow study participants to rate each task across 3 domains: physical effort, mental effort, and psychological effort, based on previous studies in anesthesia4 (see Supporting Information). The term "mental" refers to cognitive effort, whereas "psychological" refers to emotional effort. We used the same scales with the same anchors as described in the anesthesia literature,4 but substituted internal medicine-specific tasks. Each item was rated on a 7-point Likert-type scale (1 = almost no stress or effort; 7 = most effort). The survey also included demographic information regarding the respondent and instructions. The instructions directed respondents to rate each item based on their average experience in performing each task. They were further instructed not to rate tasks they had never performed.

Table 1. Categories of Inpatient Internal Medicine Tasks and Examples

Direct patient care: Conducting the physical examination, hand washing, putting on isolation gear
Indirect patient care: Writing H&P, writing orders, ordering additional labs or tests
Searching for/finding things: Finding a computer, finding materials for procedures, finding the patient
Personal/downtime activities: Eating dinner, sleep, socializing, calling family members
Educational/academic activities: Literature search, teaching medical students, preparing a talk
Other: Transporting patients, traveling from place to place, billing

Abbreviation: H&P, history and physical.

Survey Process

The potential survey participants were notified via e‐mail that they would be asked to complete the survey during a regularly scheduled meeting. The interns, residents, and faculty met during separate time slots. Data from residents and interns were obtained from teaching sessions they were required to attend (as long as their schedule permitted them to). Survey data for attending physicians were obtained from a general internal medicine meeting and a hospitalist meeting. Because of the type of meeting, subspecialists were less likely to have been included. The objectives of the study and its voluntary nature were presented to the groups, and the survey was given to all attendees at the meetings. Due to the anonymous nature of the survey, a waiver of written informed consent was granted. Time was reserved during the course of the meeting to complete the survey. Before distributing the survey, we counted the total number of people in the room so that a participation rate could be calculated. Respondents were instructed to place the survey in a designated envelope after completing it or to return a blank survey if they did not wish to complete it. There was no time limit for completion of the survey. At all of these sessions, this survey was one part of the meeting agenda.

Data Analysis

Surveys were entered into a Microsoft Excel (Redmond, WA) spreadsheet and then transferred into Stata version 8.0 (College Station, TX), which was used for analysis. Our analysis focused on (1) the description of the effort associated with individual tasks, (2) the description of the effort associated with task categories and comparisons across key categories, and (3) a comparison of effort across the task categories' physical, mental, and psychological domains.

Each task had 3 individual domain scores associated with it: physical, mental (ie, cognitive work), and psychological (ie, emotional work). A composite task effort score was calculated for each task by determining the mean of the 3 domain scores for that task.

An overall effort score was calculated for each of the 6 task categories by determining the mean of the composite task effort scores within each category. We used the composite effort score for each task to calculate the Cronbach's α value for each category except "other." We compared the overall category effort scores for direct versus indirect patient care using 2-tailed paired t tests with a significance level of P < 0.05. We further evaluated differences in overall category effort scores for direct patient care between physicians of different genders and between house staff and faculty, using 2-tailed unpaired t tests, with a significance level of P < 0.05.

Finally, we compared the physical, mental, and psychological domain scores for direct versus indirect patient care categories, using paired t tests.
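
For readers who want to reproduce this kind of scoring, a minimal sketch is shown below. It assumes a long-format table of hypothetical ratings (one row per respondent-task, with the three 1-7 domain scores); the column names and values are illustrative, not the study dataset. It computes the composite task effort score as the mean of the three domains, averages composites within category for each respondent, and compares direct versus indirect patient care with a paired t test.

```python
import pandas as pd
from scipy.stats import ttest_rel

# Hypothetical long-format ratings: one row per respondent-task, with 1-7 domain scores.
ratings = pd.DataFrame({
    "respondent":    [1, 1, 1, 2, 2, 2],
    "task":          ["physical exam", "writing H&P", "going to codes"] * 2,
    "category":      ["direct", "indirect", "direct"] * 2,
    "physical":      [3, 2, 5, 4, 3, 5],
    "mental":        [4, 4, 6, 4, 4, 5],
    "psychological": [3, 3, 6, 3, 4, 6],
})

# Composite task effort score = mean of the three domain scores for each rating.
ratings["composite"] = ratings[["physical", "mental", "psychological"]].mean(axis=1)

# Overall category effort per respondent: mean of composite scores within each category.
category_means = ratings.pivot_table(
    index="respondent", columns="category", values="composite", aggfunc="mean"
)

# Paired (within-respondent) t test: direct vs indirect patient care.
t_stat, p_value = ttest_rel(category_means["direct"], category_means["indirect"])
print(category_means, t_stat, p_value)
```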

Ethics

This study was approved by the Institutional Review Board at the Medical College of Wisconsin.

RESULTS

The study participation rate was 69% (59/85). The sample consisted of 31 (52%) women and 40 (68%) house staff (see Table 2). The mean age was 34 years. This participation rate represents approximately 1/3 of the internal medicine house staff and a smaller percentage of the faculty that would have been eligible.

Table 2. Demographics of Survey Respondents (n = 59)

Age, y, mean (SD): 34 (8.8)
Female gender, no. (%): 31 (52)
Physician description, no. (%)
  Intern: 7 (12)
  Resident: 33 (56)
  Hospitalist: 4 (7)
  Nonhospitalist faculty: 15 (25)

Abbreviation: SD, standard deviation.

Individual Task Effort

The mean composite effort score of all 99 tasks is provided in the Supporting Information Table. Overall, the most difficult task was going to codes (in the direct patient care category), with a mean composite rating of 5.37 (standard deviation [SD] 1.5); this was also the most difficult psychological task (5.78 [SD 1.65]). The most difficult mental task was transferring an unstable patient to the intensive care unit (5.47 [SD 1.53]). The most difficult physical task was placing a central line (5.02 [SD 1.63]). The easiest task was using the Internet (in the personal/downtime activities category), with a mean composite rating of 1.41 (SD 0.74); this was also the easiest mental (1.52 [SD 1.01]), psychological (1.3 [SD 0.68]), and physical (1.42 [SD 0.76]) task.

Analysis of Task Categories

The overall and domain characteristics of each task category are given in Table 3. Categories contained between 5 and 41 tasks. The Cronbach's α ranged from 0.83 for the personal/downtime activities category to 0.98 for the direct patient care category. The mean overall effort ranged from least difficult for the personal/downtime category (1.72 [SD 0.76]) to most difficult for the education category (3.61 [SD 1.06]).

Table 3. Overall Effort Stratified by Task Category

Category | No. of Items | Cronbach's α | Composite Effort* | Physical Effort* | Mental Effort* | Psychological Effort*
Direct patient care | 32 | 0.97 | 3.55 (0.91) | 3.22 (1.06) | 3.89 (0.99) | 3.52 (1.04)
Indirect patient care | 41 | 0.98 | 3.21 (0.92) | 2.71 (1.09) | 3.80 (1.02) | 3.20 (1.08)
Education | 8 | 0.92 | 3.61 (1.06) | 3.12 (1.26) | 4.27 (1.17) | 3.43 (1.30)
Finding things | 5 | 0.85 | 2.94 (0.91) | 3.59 (1.23) | 2.43 (1.05) | 2.79 (1.13)
Personal | 7 | 0.83 | 1.72 (0.76) | 1.86 (0.92) | 1.69 (0.85) | 1.63 (0.72)
Other | 6 | NC | NC | NC | NC | NC

*Effort scores are mean (SD), measured on a scale of 1-7, where 1 = least effort and 7 = most effort. Abbreviation: NC, not calculated.

Using paired t tests, we determined that the direct patient care category was more difficult than the indirect patient care category overall (3.58 versus 3.21, P < 0.001). Direct patient care was statistically significantly more challenging than indirect patient care on the physical (3.23 vs 2.71; P < 0.001), mental (3.90 vs 3.84; P < 0.05), and psychological domains (3.57 vs 3.20; P < 0.001) as well. There were no significant differences between men and women or between house staff and faculty on the difficulty of direct patient care. We found a trend toward increased difficulty of indirect patient care for house staff versus faculty (3.36 vs 2.92; P = 0.10), but no differences by gender.

DISCUSSION

In this study, we used a comprehensive list of tasks performed by internal medicine doctors while admitting patients and produced a numeric assessment of the effort associated with each. The list was generated by an expert panel and comprised 6 categories and 99 items. Residents and attending physicians then rated each task based on level of difficulty, specifically looking at the mental, psychological, and physical effort required by each.

Indirect patient care was the task category in our study that had the most tasks associated with it (41 out of 99). Direct patient care included 32 items, but 10 of these were procedures (eg, lumbar puncture), some of which are uncommonly performed. Several time-motion studies have been performed to document the work done by residents8-15 and hospitalists.16, 17 Although our study did not assess the time spent on each task, the distribution of tasks across categories is consistent with these time-motion studies, which show that the amount of time spent in direct patient care is a small fraction of the amount of time spent in the hospital,12 and that work such as interprofessional communication10 and documentation16 consume the majority of time.

This project allowed us to consider the effort required for inpatient internal medicine work on a more granular level than has been described previously. Although the difficulty of tasks associated with anesthesia and surgical work has been described,3, 4, 7, 18-20 our study is a unique contribution to the internal medicine literature. Understanding the difficulty of tasks performed by inpatient physicians is an important step toward better management of workload. With concerns about burnout in hospitalists21, 22 and residents,23-25 it seems wise to take the difficulty of the work they do into consideration in a more proactive manner. In addition, understanding workload may have patient safety applications. In one study of mistakes made by house staff, 51% of the survey respondents identified workload as a contributing factor.26

We assessed effort for inpatient work by generating a task list and then measuring 3 domains of each task: physical, mental, and psychological. As a result, we were able to further quantify the difficulty of work completed by physicians. Recent work from outside of medicine suggests that individuals have a finite capacity for mental workload, and when this is breached, decision‐making quality is impaired.27 This suggests that it is important to take work intensity into account when assigning work to individuals. For example, a detailed assessment of workload at the task level combined with the amount of time spent on each task would allow us to know how much effort is typically involved with admitting a new patient. This information would allow for more equal distribution of workload across admitting teams. In addition, these methods could be expanded to understand how much effort is involved in the discharge process. This could be taken into account at the beginning of a day when allocating work such as admissions and discharges between members of a team.

This methodology has the potential to be used in other ways to help quantify the effort required for the work that physicians do. Many departments are struggling to develop a system for giving credit to faculty for the time they spend on nonpatient care activities. Perhaps these methods could be used to develop effort scores associated with administrative tasks, and administrative relative value units could be calculated accordingly. Similar techniques have been used with educational relative value units.28

We know from the nursing literature that workload is related to both burnout and patient safety. Burnout is a process related to the emotional work of providing care to people.29 Our methods clearly incorporate the psychological stress of work into the workload assessment. Evaluating the amount of time spent on tasks with high psychological scores may be helpful in identifying work patterns that are more likely to produce burnout in physicians and nurses.

With respect to patient safety, higher patient‐to‐nurse ratios are associated with failure to rescue30 and nosocomial infections.31 Furthermore, researchers have demonstrated that systems issues can add substantially to nursing workload.32 Methods such as those described in our study take into account both patient‐related and systems‐related tasks, and therefore could result in more detailed workload assessments. With more detailed information about contributors to workload, better predictions about optimal staffing could be made, which would ultimately lead to fewer adverse patient events.

Our study has limitations. First, the initial task list was based on the compilation efforts from only 10 physicians. However, this group of physicians represented 3 hospitals and included both resident and attending physicians. Second, the survey data were gathered from a single institution. Although we included trainees and faculty, more participants would be needed to answer questions about how experience and setting/environmental factors affect these assessments. However, participants were instructed to reflect on their whole experience with each task, which presumably includes multiple institutions and training levels. Third, the sample size is fairly small, with more house staff than faculty (hospitalists and nonhospitalists) represented. Regardless, this study is the first attempt to define and quantify workload for internal medicine physicians using these methods. In future studies, we will expand the number of institutions and levels of experience to validate our current data. Finally, the difficulty of the tasks is clearly a subjective assessment. Although this methodology has face validity, further work needs to be done to validate these findings against other measurements of workload, such as census, or more general subjective workload assessments, such as the NASA task load index.33

In conclusion, we have described the tasks performed by inpatient physicians and the difficulty associated with them. Moreover, we have described a methodology that could be replicated at other centers for the purpose of validating our findings or quantifying workload of other types of tasks. We believe that this is the first step toward a more comprehensive understanding of the workload encountered by inpatient physicians. Because workload has implications for physician burnout and patient safety, it is essential that we fully understand the contributors to workload, including the innate difficulty of the tasks that comprise it.

Acknowledgements

The authors thank Alexis Visotcky, MS, and Sergey Tarima, PhD, for their assistance with statistics.

This work was presented in poster form at the Society of Hospital Medicine Annual Meeting in April 2010, the Society of General Internal Medicine Annual Meeting in May 2010, and the Society of General Internal Medicine regional meeting in September 2010.

Funding Source: The study team was supported by the following funds during this work: VA grants PPO 0925901 (Marilyn M. Schapira and Kathlyn E. Fletcher) and IIR 07201 (Marilyn M. Schapira, Siddhartha Singh, and Kathlyn E. Fletcher).

References
1. Ong M, Bostrom A, Vidyarthi A, McCulloch C, Auerbach A. House staff team workload and organization effects on patient outcomes in an academic general internal medicine inpatient service. Arch Intern Med. 2007;167:47-52.
2. Fletcher KE, Parekh V, Halasyamani L, et al. The work hour rules and contributors to patient care mistakes: a focus group study with internal medicine residents. J Hosp Med. 2008;3:228-237.
3. Weinger MB, Reddy SB, Slagle JM. Multiple measures of anesthesia workload during teaching and nonteaching cases. Anesth Analg. 2004;98:1419-1425.
4. Vredenburgh AG, Weinger MB, Williams KJ, Kalsher MJ, Macario A. Developing a technique to measure anesthesiologists' real-time workload. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2000;44:241-244.
5. Weinger MB, Vredenburgh AG, Schumann CM, et al. Quantitative description of the workload associated with airway management procedures. J Clin Anesth. 2000;12:273-282.
6. Weinger MB, Herndon OW, Zornow MH, Paulus MP, Gaba DM, Dallen LT. An objective methodology for task analysis and workload assessment in anesthesia providers. Anesthesiology. 1994;80:77-92.
7. Slagle JM, Weinger MB. Effects of intraoperative reading on vigilance and workload during anesthesia care in an academic medical center. Anesthesiology. 2009;110:275-283.
8. Brasel KJ, Pierre AL, Weigelt JA. Resident work hours: what they are really doing. Arch Surg. 2004;139:490-493; discussion 493-494.
9. Dresselhaus TR, Luck J, Wright BC, Spragg RG, Lee ML, Bozzette SA. Analyzing the time and value of housestaff inpatient work. J Gen Intern Med. 1998;13:534-540.
10. Westbrook JI, Ampt A, Kearney L, Rob MI. All in a day's work: an observational study to quantify how and with whom doctors on hospital wards spend their time. Med J Aust. 2008;188:506-509.
11. Lurie N, Rank B, Parenti C, Woolley T, Snoke W. How do house officers spend their nights? A time study of internal medicine house staff on call. N Engl J Med. 1989;320:1673-1677.
12. Tipping MD, Forth VE, Magill DB, Englert K, Williams MV. Systematic review of time studies evaluating physicians in the hospital setting. J Hosp Med. 2010;5:353-359.
13. Guarisco S, Oddone E, Simel D. Time analysis of a general medicine service: results from a random work sampling study. J Gen Intern Med. 1994;9:272-277.
14. Hayward RS, Rockwood K, Sheehan GJ, Bass EB. A phenomenology of scut. Ann Intern Med. 1991;115:372-376.
15. Nerenz D, Rosman H, Newcomb C, et al. The on-call experience of interns in internal medicine. Medical Education Task Force of Henry Ford Hospital. Arch Intern Med. 1990;150:2294-2297.
16. Tipping MD, Forth VE, O'Leary KJ, et al. Where did the day go? A time-motion study of hospitalists. J Hosp Med. 2010;5:323-328.
17. O'Leary KJ, Liebovitz DM, Baker DW. How hospitalists spend their time: insights on efficiency and safety. J Hosp Med. 2006;1:88-93.
18. Cao CG, Weinger MB, Slagle J, et al. Differences in day and night shift clinical performance in anesthesiology. Hum Factors. 2008;50:276-290.
19. Slagle J, Weinger MB, Dinh MT, Brumer VV, Williams K. Assessment of the intrarater and interrater reliability of an established clinical task analysis methodology. Anesthesiology. 2002;96:1129-1139.
20. Weinger MB, Herndon OW, Gaba DM. The effect of electronic record keeping and transesophageal echocardiography on task distribution, workload, and vigilance during cardiac anesthesia. Anesthesiology. 1997;87:144-155.
21. Shaw G. Fight burnout while fostering experience: investing in hospitalist programs now can fight burnout later. ACP Hospitalist. July 2008.
22. Jerrard J. Hospitalist burnout: recognize it in yourself and others, and avoid or eliminate it. The Hospitalist. March 2006.
23. Gopal R, Glasheen JJ, Miyoshi TJ, Prochazka AV. Burnout and internal medicine resident work-hour restrictions. Arch Intern Med. 2005;165:2595-2600.
24. Goitein L, Shanafelt TD, Wipf JE, Slatore CG, Back AL. The effects of work-hour limitations on resident well-being, patient care, and education in an internal medicine residency program. Arch Intern Med. 2005;165:2601-2606.
25. Shanafelt TD, Bradley KA, Wipf JE, Back AL. Burnout and self-reported patient care in an internal medicine residency program. Ann Intern Med. 2002;136:358-367.
26. Wu AW, Folkman S, McPhee SJ, Lo B. Do house officers learn from their mistakes? Qual Saf Health Care. 2003;12:221-226; discussion 227-228.
27. Danziger S, Levav J, Avnaim-Pesso L. Extraneous factors in judicial decisions. Proc Natl Acad Sci U S A. 2011;108:6889-6892.
28. Yeh M, Cahill D. Quantifying physician teaching productivity using clinical relative value units. J Gen Intern Med. 1999;14:617-621.
29. Maslach C, Jackson SE. Maslach Burnout Inventory Manual. 3rd ed. Palo Alto, CA: Consulting Psychology Press; 1986.
30. Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288:1987-1993.
31. Archibald LK, Manning ML, Bell LM, Banerjee S, Jarvis WR. Patient density, nurse-to-patient ratio and nosocomial infection risk in a pediatric cardiac intensive care unit. Pediatr Infect Dis J. 1997;16:1045-1048.
32. Tucker AL, Spear SJ. Operational failures and interruptions in hospital nursing. Health Serv Res. 2006;41:643-662.
33. Hart SG, Staveland LE. Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. In: Hancock PA, Meshkati N, eds. Human Mental Workload. Amsterdam, Netherlands: North Holland Press; 1988:239-250.
Article PDF
Issue
Journal of Hospital Medicine - 7(5)
Publications
Page Number
426-430
Sections
Files
Files
Article PDF
Article PDF

In internal medicine residency training, the most commonly used metric for measuring workload of physicians is the number of patients being followed or the number being admitted. There are data to support the importance of these census numbers. One study conducted at an academic medical center demonstrated that for patients admitted to medical services, the number of patients admitted on a call night was positively associated with mortality, even after adjustment in multivariable models.1

The problem with a census is that it is only a rough indicator of the amount of work that a given intern or resident will have. In a focus group study that our group conducted with internal medicine residents, several contributors to patient care errors were identified. Workload was identified as a major factor contributing to patient care mistakes.2 In describing workload, residents noted not only census but the complexity of the patient as contributing factors to workload.

A more comprehensive method than relying on census data has been used in anesthesia.3, 4 In 2 studies, anesthesiologists were asked to rate the effort or intensity associated with the tasks that they performed in the operating room.4, 5 In subsequent studies, this group used a trained observer to record the tasks anesthesiologists performed during a case.6, 7 Work density was calculated by multiplying the duration of each task by the previously developed task intensity score. In this way, work per unit of time can be calculated as can a cumulative workload score for a certain duration of time.
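To make this calculation concrete, the sketch below computes a cumulative workload score and a work density (workload per unit of time) from an observed task log. The task names, durations, and intensity ratings are hypothetical and are not drawn from the cited anesthesia studies.

```python
# Illustrative sketch of the anesthesia-style "work density" calculation:
# each observed task contributes (duration x previously rated intensity).
# Task names, durations, and intensity ratings below are hypothetical.

# Intensity scores assigned in a prior rating exercise (1-7 scale).
task_intensity = {
    "review chart": 3.0,
    "place IV": 4.5,
    "respond to alarm": 5.5,
}

# Observed task log for one period: (task name, duration in minutes).
observed_tasks = [
    ("review chart", 10),
    ("place IV", 6),
    ("respond to alarm", 2),
    ("review chart", 4),
]

# Cumulative workload = sum of duration x intensity over all observed tasks.
total_minutes = sum(duration for _, duration in observed_tasks)
cumulative_workload = sum(
    duration * task_intensity[name] for name, duration in observed_tasks
)

# Work density = workload per unit of observed time.
work_density = cumulative_workload / total_minutes

print(f"Cumulative workload: {cumulative_workload:.1f}")
print(f"Work density (per minute): {work_density:.2f}")
```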

These methods provide the background for the work that we conducted in this study. The purpose of this study was to assign a task effort score to the tasks performed during periods that include admitting patients to the hospital.

METHODS

Study Site

The study was conducted at a single 500-bed Midwestern academic institution. Residents rotate through 3 hospitals (a private community hospital, a Veterans Affairs hospital, and an academic medical center) during a typical 3-year internal medicine residency program.

Study Design and Subjects

A cross-sectional survey was conducted. Subjects recruited for the survey included internal medicine interns and residents, internal medicine ward attending physicians, and hospitalists. Attending physicians had to have been on the wards within the past year. The survey was conducted in November, by which time all eligible house staff should have completed at least 1 ward month. Nearly every hospitalist recruited had spent time on both teaching and nonteaching services.

Task List Compilation and Survey Development

An expert panel was convened consisting of 10 physicians representing 3 hospitals, including residents and faculty, some of whom were hospitalists. During the session, the participants developed a task list and discussed the work intensity associated with some of the tasks. The task list was reviewed by the study team and organized into categories. The final list included 99 tasks divided into 6 categories: (1) direct patient care, (2) indirect patient care, (3) searching for/finding things, (4) educational/academic activities, (5) personal/downtime activities, and (6) other. Table 1 gives examples of items found in each category. We used the terminology that the study participants used to describe their work (eg, they used the term "eyeballing a patient" to describe the process of making an initial assessment of the patient's status). This list of 99 items was formatted into a survey to allow study participants to rate each task across 3 domains: physical effort, mental effort, and psychological effort, based on previous studies in anesthesia4 (see Supporting Information). The term mental refers to cognitive effort, whereas psychological refers to emotional effort. We used the same scales with the same anchors as described in the anesthesia literature,4 but substituted internal medicine-specific tasks. Each item was rated on a 7-point Likert-type scale (1 = almost no stress or effort; 7 = most effort). The survey also collected demographic information about the respondent and included instructions. The instructions directed respondents to rate each item based on their average experience in performing each task. They were further instructed not to rate tasks they had never performed.

Table 1. Categories of Inpatient Internal Medicine Tasks and Examples
Category of Tasks | Examples
  • Abbreviation: H&P, history and physical.
Direct patient care | Conducting the physical examination, hand washing, putting on isolation gear
Indirect patient care | Writing the H&P, writing orders, ordering additional labs or tests
Searching for/finding things | Finding a computer, finding materials for procedures, finding the patient
Personal/downtime activities | Eating dinner, sleep, socializing, calling family members
Educational/academic activities | Literature search, teaching medical students, preparing a talk
Other | Transporting patients, traveling from place to place, billing

Survey Process

The potential survey participants were notified via e-mail that they would be asked to complete the survey during a regularly scheduled meeting. The interns, residents, and faculty met during separate time slots. Data from residents and interns were obtained at teaching sessions they were required to attend (schedule permitting). Survey data for attending physicians were obtained at a general internal medicine meeting and a hospitalist meeting. Because of the type of meeting, subspecialists were less likely to have been included. The objectives of the study and its voluntary nature were presented to the groups, and the survey was given to all attendees at the meetings. Because the survey was anonymous, a waiver of written informed consent was granted. Time was reserved during the course of the meeting to complete the survey. Before distributing the survey, we counted the total number of people in the room so that a participation rate could be calculated. Respondents were instructed to place the survey in a designated envelope after completing it or to return a blank survey if they did not wish to complete it. There was no time limit for completion of the survey. At all of these sessions, the survey was one part of the meeting agenda.

Data Analysis

Surveys were entered into a Microsoft Excel (Redmond, WA) spreadsheet and then transferred into Stata version 8.0 (College Station, TX), which was used for analysis. Our analysis focused on (1) the description of the effort associated with individual tasks, (2) the description of the effort associated with task categories and comparisons across key categories, and (3) a comparison of effort across the task categories' physical, mental, and psychological domains.

Each task had 3 individual domain scores associated with it: physical, mental (ie, cognitive work), and psychological (ie, emotional work). A composite task effort score was calculated for each task by determining the mean of the 3 domain scores for that task.
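As a minimal illustration of this step, the composite score for a single task can be computed as follows; the domain ratings shown are hypothetical.

```python
# Minimal sketch: composite task effort score = mean of the three domain scores.
# The ratings below are hypothetical examples on the 1-7 scale.

def composite_effort(physical: float, mental: float, psychological: float) -> float:
    """Return the mean of the three domain scores for one task."""
    return (physical + mental + psychological) / 3

# Example: a task rated 5 (physical), 4 (mental), and 3 (psychological).
print(composite_effort(5, 4, 3))  # 4.0
```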

An overall effort score was calculated for each of the 6 task categories by determining the mean of the composite task effort scores within each category. We used the composite effort score for each task to calculate a Cronbach's α value for each category except "other." We compared the overall category effort scores for direct versus indirect patient care using 2-tailed paired t tests with a significance level of P < 0.05. We further evaluated differences in overall category effort scores for direct patient care between physicians of different genders and between house staff and faculty, using 2-tailed unpaired t tests, with a significance level of P < 0.05.
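A minimal sketch of this analysis step is shown below, assuming the composite task effort scores are arranged as a respondents-by-tasks matrix. The Cronbach's α formula and the paired t test follow the description above, but the data and category groupings are simulated for illustration only.

```python
# Sketch of the category-level analysis, assuming a respondents x tasks matrix
# of composite effort scores. Data and category membership are hypothetical.
import numpy as np
from scipy import stats

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents (rows) x items (columns) matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
n_respondents = 59

# Hypothetical composite scores for tasks in two categories (1-7 scale).
direct = np.clip(rng.normal(3.6, 1.0, size=(n_respondents, 5)), 1, 7)
indirect = np.clip(rng.normal(3.2, 1.0, size=(n_respondents, 6)), 1, 7)

# Overall category effort score per respondent = mean of that category's tasks.
direct_scores = direct.mean(axis=1)
indirect_scores = indirect.mean(axis=1)

print("Cronbach's alpha (direct):", round(cronbach_alpha(direct), 2))

# Two-tailed paired t test comparing direct vs indirect category scores.
t_stat, p_value = stats.ttest_rel(direct_scores, indirect_scores)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```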

Finally, we compared the physical, mental, and psychological domain scores for direct versus indirect patient care categories, using paired t tests.

Ethics

This study was approved by the Institutional Review Board at the Medical College of Wisconsin.

RESULTS

The study participation rate was 69% (59/85). The sample consisted of 31 (52%) women and 40 (68%) house staff (see Table 2). The mean age was 34 years. The respondents represent approximately one-third of the eligible internal medicine house staff and a smaller percentage of the eligible faculty.

Table 2. Demographics of Survey Respondents (n = 59)
Demographic | Value
  • Abbreviation: SD, standard deviation.
Age, y, mean (SD) | 34 (8.8)
Female gender, no. (%) | 31 (52)
Physician description, no. (%) |
  Intern | 7 (12)
  Resident | 33 (56)
  Hospitalist | 4 (7)
  Nonhospitalist faculty | 15 (25)

Individual Task Effort

The mean composite effort score of all 99 tasks is provided in the Supporting Information Table. Overall, the most difficult task was going to codes (in the direct patient care category), with a mean composite rating of 5.37 (standard deviation [SD] 1.5); this was also the most difficult psychological task (5.78 [SD 1.65]). The most difficult mental task was transferring an unstable patient to the intensive care unit (5.47 [SD 1.53]). The most difficult physical task was placing a central line (5.02 [SD 1.63]). The easiest task was using the Internet (in the personal/downtime activities category), with a mean composite rating of 1.41 (SD 0.74); this was also the easiest mental (1.52 [SD 1.01]), psychological (1.3 [SD 0.68]), and physical (1.42 [SD 0.76]) task.

Analysis of Task Categories

The overall and domain characteristics of each task category are given in Table 3. Categories contained between 5 and 41 tasks. Cronbach's α ranged from 0.83 for the personal/downtime activities category to 0.98 for the direct patient care category. The mean overall effort ranged from least difficult for the personal/downtime category (1.72 [SD 0.76]) to most difficult for the education category (3.61 [SD 1.06]).

Table 3. Overall Effort Stratified by Task Category
Category | No. of Items | Cronbach's α | Composite Effort* | Physical Effort* | Mental Effort* | Psychological Effort*
  • Abbreviation: NC, not calculated.
  • *Effort score, mean (SD), measured on a scale of 1-7, where 1 = least effort and 7 = most effort.
Direct patient care | 32 | 0.97 | 3.55 (0.91) | 3.22 (1.06) | 3.89 (0.99) | 3.52 (1.04)
Indirect patient care | 41 | 0.98 | 3.21 (0.92) | 2.71 (1.09) | 3.80 (1.02) | 3.20 (1.08)
Education | 8 | 0.92 | 3.61 (1.06) | 3.12 (1.26) | 4.27 (1.17) | 3.43 (1.30)
Finding things | 5 | 0.85 | 2.94 (0.91) | 3.59 (1.23) | 2.43 (1.05) | 2.79 (1.13)
Personal | 7 | 0.83 | 1.72 (0.76) | 1.86 (0.92) | 1.69 (0.85) | 1.63 (0.72)
Other | 6 | NC | NC | NC | NC | NC

Using paired t tests, we determined that the direct patient care category was more difficult than the indirect patient care category overall (3.58 vs 3.21; P < 0.001). Direct patient care was also statistically significantly more challenging than indirect patient care on the physical (3.23 vs 2.71; P < 0.001), mental (3.90 vs 3.84; P < 0.05), and psychological (3.57 vs 3.20; P < 0.001) domains. There were no significant differences between men and women or between house staff and faculty in the difficulty of direct patient care. We found a trend toward increased difficulty of indirect patient care for house staff versus faculty (3.36 vs 2.92; P = 0.10), but no differences by gender.

DISCUSSION

In this study, we used a comprehensive list of tasks performed by internal medicine doctors while admitting patients and produced a numeric assessment of the effort associated with each. The list was generated by an expert panel and comprised 6 categories and 99 items. Residents and attending physicians then rated each task based on level of difficulty, specifically looking at the mental, psychological, and physical effort required by each.

Indirect patient care was the task category in our study that had the most tasks associated with it (41 of 99). Direct patient care included 32 items, but 10 of these were procedures (eg, lumbar puncture), some of which are uncommonly performed. Several time-motion studies have been performed to document the work done by residents8-15 and hospitalists.16,17 Although our study did not assess the time spent on each task, the distribution of tasks across categories is consistent with these time-motion studies, which show that the amount of time spent in direct patient care is a small fraction of the time spent in the hospital,12 and that work such as interprofessional communication10 and documentation16 consumes the majority of time.

This project allowed us to consider the effort required for inpatient internal medicine work at a more granular level than has been described previously. Although the difficulty of tasks associated with anesthesia and surgical work has been described,3,4,7,18-20 our study is a unique contribution to the internal medicine literature. Understanding the difficulty of tasks performed by inpatient physicians is an important step toward better management of workload. With concerns about burnout in hospitalists21,22 and residents,23-25 it seems wise to take the difficulty of the work they do into consideration in a more proactive manner. In addition, understanding workload may have patient safety applications. In one study of mistakes made by house staff, 51% of the survey respondents identified workload as a contributing factor.26

We assessed effort for inpatient work by generating a task list and then measuring 3 domains of each task: physical, mental, and psychological. As a result, we were able to further quantify the difficulty of work completed by physicians. Recent work from outside of medicine suggests that individuals have a finite capacity for mental workload, and when this capacity is exceeded, decision-making quality is impaired.27 This suggests that it is important to take work intensity into account when assigning work to individuals. For example, a detailed assessment of workload at the task level, combined with the amount of time spent on each task, would allow us to know how much effort is typically involved in admitting a new patient. This information would allow for more equal distribution of workload across admitting teams. In addition, these methods could be expanded to understand how much effort is involved in the discharge process. This could be taken into account at the beginning of a day when allocating work such as admissions and discharges among the members of a team.
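As a purely illustrative sketch of how task-level effort estimates might support such allocation decisions (this is not a method evaluated in the present study), a simple greedy allocator could assign each new piece of work to the team with the lowest running effort total; the work items and effort weights below are hypothetical.

```python
# Illustrative sketch (not part of this study): greedy allocation of work
# items to the team whose cumulative effort total is currently lowest.
# Effort weights per work item are hypothetical.

EFFORT_WEIGHTS = {"admission": 5.0, "discharge": 3.0, "cross-cover": 1.5}

def allocate(work_items: list[str], teams: list[str]) -> dict[str, list[str]]:
    """Assign each work item to the team with the lowest running effort."""
    totals = {team: 0.0 for team in teams}
    assignments = {team: [] for team in teams}
    for item in work_items:
        team = min(totals, key=totals.get)  # least-loaded team so far
        assignments[team].append(item)
        totals[team] += EFFORT_WEIGHTS[item]
    return assignments

day_work = ["admission", "admission", "discharge", "admission", "discharge"]
print(allocate(day_work, ["Team A", "Team B"]))
```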

This methodology has the potential to be used in other ways to help quantify the effort required for the work that physicians do. Many departments are struggling to develop a system for giving credit to faculty for the time they spend on nonpatient care activities. Perhaps these methods could be used to develop effort scores associated with administrative tasks, and administrative relative value units could be calculated accordingly. Similar techniques have been used with educational relative value units.28

We know from the nursing literature that workload is related to both burnout and patient safety. Burnout is a process related to the emotional work of providing care to people.29 Our methods clearly incorporate the psychological stress of work into the workload assessment. Evaluating the amount of time spent on tasks with high psychological scores may be helpful in identifying work patterns that are more likely to produce burnout in physicians and nurses.

With respect to patient safety, higher patient‐to‐nurse ratios are associated with failure to rescue30 and nosocomial infections.31 Furthermore, researchers have demonstrated that systems issues can add substantially to nursing workload.32 Methods such as those described in our study take into account both patient‐related and systems‐related tasks, and therefore could result in more detailed workload assessments. With more detailed information about contributors to workload, better predictions about optimal staffing could be made, which would ultimately lead to fewer adverse patient events.

Our study has limitations. First, the initial task list was based on the compilation efforts of only 10 physicians. However, this group of physicians represented 3 hospitals and included both resident and attending physicians. Second, the survey data were gathered from a single institution. Although we included trainees and faculty, more participants would be needed to answer questions about how experience and setting/environmental factors affect these assessments. However, participants were instructed to reflect on their whole experience with each task, which presumably includes multiple institutions and training levels. Third, the sample size is fairly small, with more house staff than faculty (hospitalists and nonhospitalists) represented. Regardless, this study is the first attempt to define and quantify workload for internal medicine physicians using these methods. In future studies, we will expand the number of institutions and levels of experience to validate our current data. Finally, the difficulty of the tasks is clearly a subjective assessment. Although this methodology has face validity, further work needs to be done to validate these findings against other measurements of workload, such as census, or against more general subjective workload assessments, such as the NASA Task Load Index.33

In conclusion, we have described the tasks performed by inpatient physicians and the difficulty associated with them. Moreover, we have described a methodology that could be replicated at other centers for the purpose of validating our findings or quantifying workload of other types of tasks. We believe that this is the first step toward a more comprehensive understanding of the workload encountered by inpatient physicians. Because workload has implications for physician burnout and patient safety, it is essential that we fully understand the contributors to workload, including the innate difficulty of the tasks that comprise it.

Acknowledgements

The authors thank Alexis Visotcky, MS, and Sergey Tarima, PhD, for their assistance with statistics.

This work was presented in poster form at the Society of Hospital Medicine Annual Meeting in April 2010, the Society of General Internal Medicine Annual Meeting in May 2010, and the Society of General Internal Medicine regional meeting in September 2010.

Funding Source: The study team was supported by the following funds during this work: VA grants PPO 0925901 (Marilyn M. Schapira and Kathlyn E. Fletcher) and IIR 07201 (Marilyn M. Schapira, Siddhartha Singh, and Kathlyn E. Fletcher).

References
  1. Ong M, Bostrom A, Vidyarthi A, McCulloch C, Auerbach A. House staff team workload and organization effects on patient outcomes in an academic general internal medicine inpatient service. Arch Intern Med. 2007;167:47-52.
  2. Fletcher KE, Parekh V, Halasyamani L, et al. The work hour rules and contributors to patient care mistakes: a focus group study with internal medicine residents. J Hosp Med. 2008;3:228-237.
  3. Weinger MB, Reddy SB, Slagle JM. Multiple measures of anesthesia workload during teaching and nonteaching cases. Anesth Analg. 2004;98:1419-1425.
  4. Vredenburgh AG, Weinger MB, Williams KJ, Kalsher MJ, Macario A. Developing a technique to measure anesthesiologists' real-time workload. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2000;44:241-244.
  5. Weinger MB, Vredenburgh AG, Schumann CM, et al. Quantitative description of the workload associated with airway management procedures. J Clin Anesth. 2000;12:273-282.
  6. Weinger MB, Herndon OW, Zornow MH, Paulus MP, Gaba DM, Dallen LT. An objective methodology for task analysis and workload assessment in anesthesia providers. Anesthesiology. 1994;80:77-92.
  7. Slagle JM, Weinger MB. Effects of intraoperative reading on vigilance and workload during anesthesia care in an academic medical center. Anesthesiology. 2009;110:275-283.
  8. Brasel KJ, Pierre AL, Weigelt JA. Resident work hours: what they are really doing. Arch Surg. 2004;139:490-493; discussion, 493-494.
  9. Dresselhaus TR, Luck J, Wright BC, Spragg RG, Lee ML, Bozzette SA. Analyzing the time and value of housestaff inpatient work. J Gen Intern Med. 1998;13:534-540.
  10. Westbrook JI, Ampt A, Kearney L, Rob MI. All in a day's work: an observational study to quantify how and with whom doctors on hospital wards spend their time. Med J Aust. 2008;188:506-509.
  11. Lurie N, Rank B, Parenti C, Woolley T, Snoke W. How do house officers spend their nights? A time study of internal medicine house staff on call. N Engl J Med. 1989;320:1673-1677.
  12. Tipping MD, Forth VE, Magill DB, Englert K, Williams MV. Systematic review of time studies evaluating physicians in the hospital setting. J Hosp Med. 2010;5:353-359.
  13. Guarisco S, Oddone E, Simel D. Time analysis of a general medicine service: results from a random work sampling study. J Gen Intern Med. 1994;9:272-277.
  14. Hayward RS, Rockwood K, Sheehan GJ, Bass EB. A phenomenology of scut. Ann Intern Med. 1991;115:372-376.
  15. Nerenz D, Rosman H, Newcomb C, et al. The on-call experience of interns in internal medicine. Medical Education Task Force of Henry Ford Hospital. Arch Intern Med. 1990;150:2294-2297.
  16. Tipping MD, Forth VE, O'Leary KJ, et al. Where did the day go? A time-motion study of hospitalists. J Hosp Med. 2010;5:323-328.
  17. O'Leary KJ, Liebovitz DM, Baker DW. How hospitalists spend their time: insights on efficiency and safety. J Hosp Med. 2006;1:88-93.
  18. Cao CG, Weinger MB, Slagle J, et al. Differences in day and night shift clinical performance in anesthesiology. Hum Factors. 2008;50:276-290.
  19. Slagle J, Weinger MB, Dinh MT, Brumer VV, Williams K. Assessment of the intrarater and interrater reliability of an established clinical task analysis methodology. Anesthesiology. 2002;96:1129-1139.
  20. Weinger MB, Herndon OW, Gaba DM. The effect of electronic record keeping and transesophageal echocardiography on task distribution, workload, and vigilance during cardiac anesthesia. Anesthesiology. 1997;87:144-155.
  21. Shaw G. Fight burnout while fostering experience: investing in hospitalist programs now can fight burnout later. ACP Hospitalist. July 2008.
  22. Jerrard J. Hospitalist burnout: recognize it in yourself and others, and avoid or eliminate it. The Hospitalist. March 2006.
  23. Gopal R, Glasheen JJ, Miyoshi TJ, Prochazka AV. Burnout and internal medicine resident work-hour restrictions. Arch Intern Med. 2005;165:2595-2600.
  24. Goitein L, Shanafelt TD, Wipf JE, Slatore CG, Back AL. The effects of work-hour limitations on resident well-being, patient care, and education in an internal medicine residency program. Arch Intern Med. 2005;165:2601-2606.
  25. Shanafelt TD, Bradley KA, Wipf JE, Back AL. Burnout and self-reported patient care in an internal medicine residency program. Ann Intern Med. 2002;136:358-367.
  26. Wu AW, Folkman S, McPhee SJ, Lo B. Do house officers learn from their mistakes? Qual Saf Health Care. 2003;12:221-226; discussion, 227-228.
  27. Danziger S, Levav J, Avnaim-Pesso L. Extraneous factors in judicial decisions. Proc Natl Acad Sci U S A. 2011;108:6889-6892.
  28. Yeh M, Cahill D. Quantifying physician teaching productivity using clinical relative value units. J Gen Intern Med. 1999;14:617-621.
  29. Maslach C, Jackson SE. Maslach Burnout Inventory Manual. 3rd ed. Palo Alto, CA: Consulting Psychology Press; 1986.
  30. Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288:1987-1993.
  31. Archibald LK, Manning ML, Bell LM, Banerjee S, Jarvis WR. Patient density, nurse-to-patient ratio and nosocomial infection risk in a pediatric cardiac intensive care unit. Pediatr Infect Dis J. 1997;16:1045-1048.
  32. Tucker AL, Spear SJ. Operational failures and interruptions in hospital nursing. Health Serv Res. 2006;41:643-662.
  33. Hart SG, Staveland LE. Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. In: Hancock PA, Meshkati N, eds. Human Mental Workload. Amsterdam, Netherlands: North Holland Press; 1988:239-250.
Issue
Journal of Hospital Medicine - 7(5)
Page Number
426-430
Display Headline
Defining and measuring the effort needed for inpatient medicine work
Article Source

Copyright © 2012 Society of Hospital Medicine

Correspondence Location
5000 W. National Ave., PC Division, Milwaukee, WI 53295

Physician Assistant‐Based General Medical Inpatient Care

Article Type
Changed
Thu, 05/25/2017 - 21:18
Display Headline
A comparison of outcomes of general medical inpatient care provided by a hospitalist‐physician assistant model vs a traditional resident‐based model

In 2003, the Accreditation Council for Graduate Medical Education (ACGME) mandated residency reform in the form of work hour restrictions without prescribing alternatives to resident-based care.1 In response, many academic medical centers have developed innovative models for providing inpatient care, some of which incorporate physician assistants (PAs).2 With further restrictions in resident work hours possible,3 teaching hospitals may increase their use of these alternate models to provide inpatient care. Widespread implementation of such new and untested models could affect the care provided during the approximately 20 million hospitalizations that occur every year in US teaching hospitals.4

Few reports have compared the care delivered by these alternate models with the care provided by traditional resident-based models.5-8 Roy et al.8 have provided the only recent comparison of a PA-based model of care with a resident-based model. They showed lower adjusted costs of inpatient care associated with PA-based care, but other outcomes were similar to those of resident-based teams.

The objective of this study is to provide a valid and usable comparison of the outcomes of a hospitalist-PA (H-PA) model of inpatient care with those of the traditional resident-based model. This will add to the quantity and quality of the limited research on PA-based inpatient care and will inform the anticipated increase in the involvement of PAs in this arena.

Methods

Study Design and Setting

We conducted a retrospective cohort study at a 430‐bed urban academic medical center in the Midwestern United States.

Models of General Medical (GM) Inpatient Care at the Study Hospital During the Study Period

In November 2004, as a response to the ACGME‐mandated work hour regulations, we formed 2 Hospitalist‐PA teams (H‐PA) to supplement the 6 preexisting general medicine resident teams (RES).

The H‐PA and RES teams differed in staffing, admitting times and weekend/overnight cross coverage structure (Table 1). There were no predesigned differences between the teams in the ward location of their patients, availability of laboratory/radiology services, specialty consultation, social services/case management resources, nursing resources or documentation requirements for admission, daily care, and discharge.

Table 1. Differences in Structure and Function Between Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams
Characteristic | H-PA Teams | RES Teams
Attending physician | Always a hospitalist | Hospitalist, non-hospitalist general internist, or rarely a specialist
Attending physician role | Supervisory for some patients (about half) and sole care provider for others | Supervisory for all patients
Team composition | One attending paired with 1 PA | Attending + senior resident + 2 interns + 2-3 medical students
Rotation schedule | |
  Attending | Every 2 weeks | Every 2 weeks
  Physician assistant | Off on weekends |
  House staff and medical students | | Every month
Weekend | No new admissions; hospitalist manages all patients | Accept new admissions
Admission times (weekdays) | 7 AM to 3 PM | Noon to 7 AM
Source of admissions | Emergency room, clinics, other hospitals | Emergency room, clinics, other hospitals
Number of admissions (weekdays) | 4-6 patients per day per team | Noon to 5 PM: 2 teams admit a maximum of 9 patients total; 5 PM to 7 AM: 3 teams admit a maximum of 5 patients each
Overnight coverage, roles and responsibilities | One in-house faculty: cross-covering 2 H-PA teams, performing triage, admitting patients if necessary, assisting residents if necessary, general medical consultation | 3 on-call interns: cross-covering 2 teams each, admitting up to 5 patients each

Admission Schedule for H‐PA or RES Teams

The admitting schedule was designed to decrease the workload of the house staff and to do so specifically during the periods of peak educational activity (morning report, attending‐led teaching rounds, and noon report). A faculty admitting medical officer (AMO) assigned patients strictly based on the time an admission was requested. Importantly, the request for admission preceded the time of actual admission recorded when the patient reached the ward. The time difference between request for admission and actual admission depended on the source of admission and the delay associated with assigning a patient room. The AMO assigned 8 to 12 new patients to the H‐PA teams every weekday between 7 AM and 3 PM and to the RES teams between noon and 7 AM the next day. There was a designed period of overlap from noon to 3 PM during which both H‐PA and RES teams could admit patients. This period allowed for flexibility in assigning patients to either type of team depending on their workload. The AMO did not use patient complexity or teaching value to assign patients.
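The weekday time-based assignment rule described above can be summarized in a simplified sketch; the exact handling of boundary times is an assumption, and the function below is illustrative rather than a reproduction of the AMO's actual workflow.

```python
# Simplified sketch of the weekday time-based assignment rule described above.
# Exact boundary handling (e.g., requests at exactly 3 PM) is an assumption.
from datetime import time

def eligible_teams(request_time: time) -> list[str]:
    """Return which team types could receive an admission requested at this time."""
    teams = []
    if time(7, 0) <= request_time < time(15, 0):   # 7 AM - 3 PM: H-PA window
        teams.append("H-PA")
    if request_time >= time(12, 0) or request_time < time(7, 0):  # noon - 7 AM: RES window
        teams.append("RES")
    return teams  # noon - 3 PM overlap: AMO chooses based on workload

print(eligible_teams(time(9, 30)))   # ['H-PA']
print(eligible_teams(time(13, 0)))   # ['H-PA', 'RES']
print(eligible_teams(time(22, 0)))   # ['RES']
```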

Exceptions to Admission Schedule

Patients admitted overnight after the on-call RES teams had reached their admission limits were assigned to H-PA teams the next morning. In addition, recently discharged patients who were readmitted while the discharging hospitalist (H-PA teams) or the discharging resident (RES teams) was still scheduled for inpatient duties were assigned back to the discharging team, irrespective of the admitting schedule.

The same medicine team cared for a patient from admission to discharge, but on transfer to the intensive care unit (ICU), an intensivist-led critical care team assumed care. On transfer out of the ICU, these patients were assigned back to the original team irrespective of the admitting schedule (the so-called "bounce back" rule) to promote inpatient continuity of care. However, if the residents (RES teams) or the hospitalist (H-PA teams) had changed, the bounce back rule was no longer in effect and these patients were assigned to a team according to the admission schedule.

Study Population and Study Period

We included all hospitalizations of adult patients to GM teams if both their date of admission and their date of discharge fell within the study period (January 1, 2005 to December 31, 2006). We excluded hospitalizations with admissions during the weekend, when H-PA teams did not admit patients; hospitalizations to GM services with transfer to a non-GM service (excluding the ICU) and hospitalizations involving comanagement with specialty services, because the contribution of GM teams to these was variable; and hospitalizations of private patients.

Data Collection and Team Assignment

We collected patient data from our hospital's discharge abstract database. Because this database did not contain team information, we assigned teams by matching the discharging attending and the day of discharge to the type of team that attending was leading that day.
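A simplified sketch of this linkage step is shown below, assuming the data sit in pandas DataFrames; all column names and example values are hypothetical.

```python
# Sketch of the team-assignment linkage, assuming a pandas DataFrame of
# hospitalizations and a staffing roster; all column names are hypothetical.
import pandas as pd

hospitalizations = pd.DataFrame({
    "discharge_attending": ["Dr. A", "Dr. B"],
    "discharge_date": pd.to_datetime(["2005-03-01", "2005-03-01"]),
})

# Roster: which team type each attending was leading on each calendar day.
roster = pd.DataFrame({
    "attending": ["Dr. A", "Dr. B"],
    "date": pd.to_datetime(["2005-03-01", "2005-03-01"]),
    "team_type": ["H-PA", "RES"],
})

# Match the discharging attending and the day of discharge to the roster
# to infer the team that cared for the patient.
assigned = hospitalizations.merge(
    roster,
    left_on=["discharge_attending", "discharge_date"],
    right_on=["attending", "date"],
    how="left",
)
print(assigned[["discharge_attending", "discharge_date", "team_type"]])
```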

We collected patient age, gender, race, insurance status, zip‐code, primary care provider, source of admission, ward type, time and day of admission, and time and day of discharge for use as independent variables. The time of admission captured in the database was the time of actual admission and not the time the admission was requested.

We grouped the principal diagnosis International Statistical Classification of Diseases and Related Health Problems, 9th edition (ICD‐9) codes into clinically relevant categories using the Clinical Classification Software.9 We created comorbidity measures using Healthcare Cost and Utilization Project Comorbidity Software, version 3.4.10
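
The AHRQ tools are distributed as SAS programs; for readers who work in R, the grouping step amounts to a lookup against the published CCS crosswalk. The sketch below continues the example above, with a hypothetical crosswalk file and hypothetical column names, and assumes the comorbidity software has already added one 0/1 indicator column per comorbidity.

```r
# Sketch (hypothetical crosswalk file): map each principal ICD-9 code to its
# single-level CCS diagnostic category with a simple join, then count the 0/1
# comorbidity indicator columns assumed to be present.
ccs_map  <- read.csv("ccs_single_level_icd9.csv")   # icd9_code, ccs_category
assigned <- left_join(assigned, ccs_map, by = c("principal_dx_icd9" = "icd9_code"))

comorbidity_cols <- grep("^cm_", names(assigned), value = TRUE)
assigned$n_comorbidities <- rowSums(assigned[, comorbidity_cols])
```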

Outcome Measures

We used length of stay (LOS), charges, readmissions within 7, 14, and 30 days, and inpatient mortality as our outcome measures. We calculated LOS by subtracting the admission day and time from the discharge day and time; the LOS included time spent in the ICU. We summed all charges accrued during the entire hospitalization, including any ICU stay, but did not include professional fees. We considered any repeat hospitalization to our hospital within 7, 14, or 30 days of a discharge to be a readmission, except that we excluded readmissions for a planned procedure or for inpatient rehabilitation.
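
Continuing the sketch above (hypothetical column names), LOS and the readmission flags could be derived from the admission and discharge timestamps as follows; exclusions for planned procedures and inpatient rehabilitation are assumed to be flagged separately.

```r
# Sketch: LOS in days, and readmission within 7, 14, and 30 days of discharge.
library(dplyr)

assigned <- assigned %>%
  mutate(admit_datetime     = as.POSIXct(admit_datetime),
         discharge_datetime = as.POSIXct(discharge_datetime),
         los_days = as.numeric(difftime(discharge_datetime, admit_datetime, units = "days")))

assigned <- assigned %>%
  arrange(patient_id, admit_datetime) %>%
  group_by(patient_id) %>%
  mutate(days_to_next_admission = as.numeric(difftime(lead(admit_datetime),
                                                      discharge_datetime, units = "days")),
         readmit_7  = !is.na(days_to_next_admission) & days_to_next_admission <= 7,
         readmit_14 = !is.na(days_to_next_admission) & days_to_next_admission <= 14,
         readmit_30 = !is.na(days_to_next_admission) & days_to_next_admission <= 30) %>%
  ungroup()
```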

Statistical Analysis

Descriptive Analysis

We performed unadjusted descriptive statistics at the level of the individual hospitalization, using medians and interquartile ranges for continuous data and frequencies and percentages for categorical data. We used chi-square tests of association and Kruskal-Wallis analysis of variance to compare H-PA and RES teams.
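
As an example of these unadjusted comparisons in R (variable names hypothetical):

```r
# Categorical characteristics: chi-square test of association between team type
# and the characteristic of interest.
chisq.test(table(assigned$team_type, assigned$insurance_group))

# Skewed continuous outcomes: medians with interquartile ranges by team,
# compared with a Kruskal-Wallis test.
tapply(assigned$los_days, assigned$team_type, quantile, probs = c(0.25, 0.50, 0.75))
kruskal.test(los_days ~ team_type, data = assigned)
```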

Missing Data

Because we lacked data on whether a primary outpatient care provider was available for 284 (2.9%) of our study hospitalizations, we dropped them from our multivariable analyses. We used an arbitrary discharge time of noon for the 11 hospitalizations which did not have a discharge time recorded.

Multivariable Analysis

We used multivariable mixed models to risk adjust for a wide variety of variables. We included age, gender, race, insurance, presence of primary care physician, and total number of comorbidities as fixed effects in all models because of the high face validity of these variables. We then added admission source, ward, time, day of week, discharge day of week, and comorbidity measures one by one as fixed effects, including them only if significant at P < 0.01. For assessing LOS, charges, and readmissions, we added a variable identifying each patient as a random effect to account for multiple admissions for the same patient. We then added variables identifying attending physician, principal diagnostic group, and ZIP code of residence as random effects to account for clustering of hospitalizations within these categories, including them only if significant at P < 0.01. For the model assessing mortality we included variables for attending physician, principal diagnostic group, and ZIP code of residence as random effects if significant at P < 0.01. We log transformed LOS and charges because they were extremely skewed in nature. Readmissions were analyzed after excluding patients who died or were discharged alive within 7, 14, or 30 days of the end of the study period.
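
The models were fit in SAS and R; the sketch below uses the lme4 package only to illustrate the general form of such models (a log-transformed outcome with fixed effects plus random intercepts for patient, attending, and diagnostic group), not the authors' exact, stepwise-built specification. All variable names are hypothetical and continue the sketches above.

```r
# Illustrative form of the mixed models (covariate set abbreviated; in the study,
# additional covariates and random effects were retained only if significant at P < 0.01).
library(lme4)

m_los <- lmer(log(los_days) ~ team_type + age + gender + race + insurance_group +
                has_pcp + n_comorbidities +
                (1 | patient_id) + (1 | attending_id) + (1 | ccs_category),
              data = assigned)

m_readmit30 <- glmer(readmit_30 ~ team_type + age + gender + race + insurance_group +
                       has_pcp + n_comorbidities + (1 | patient_id),
                     data = assigned, family = binomial)

summary(m_los)
```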

Sensitivity Analyses

To assess the influence of LOS outliers, we changed LOS to 6 hours if it was less than 6 hours and to 45 days if it was more than 45 days, a process called winsorizing. We consider winsorizing superior to dropping outliers because it acknowledges that outliers contribute information while preventing them from being too influential. We chose the 6-hour cutoff because we believed that was the minimum time required to admit and then discharge a patient; we chose the upper limit of 45 days after reviewing the frequency distribution for outliers. Similarly, we winsorized charges at the 1st and 99th percentiles after reviewing the frequency distribution for outliers. We then log transformed the winsorized data before analysis.
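
A minimal sketch of the winsorizing step, continuing the example above (column names hypothetical):

```r
# Winsorize LOS at 6 hours (0.25 days) and 45 days, and charges at the 1st and
# 99th percentiles, then log transform for analysis.
assigned$los_w     <- pmin(pmax(assigned$los_days, 6 / 24), 45)

chg_bounds         <- quantile(assigned$charges, c(0.01, 0.99), na.rm = TRUE)
assigned$charges_w <- pmin(pmax(assigned$charges, chg_bounds[1]), chg_bounds[2])

assigned$log_los_w     <- log(assigned$los_w)
assigned$log_charges_w <- log(assigned$charges_w)
```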

Inpatient deaths reduce the LOS and charges associated with a hospitalization, so excess mortality could create a false impression of lower LOS or charges. To check whether this occurred in our study, we repeated the analyses after excluding inpatient deaths.

ICU stays are associated with higher LOS, charges, and mortality. In our model of care, some patients transferred to the ICU are not cared for by the original team on transfer out. Moreover, care in the ICU is not controlled by the team that discharges them. Since this might obscure differences in outcomes achieved by RES vs. H‐PA teams, we repeated these analyses after excluding hospitalizations with an ICU stay.

Since mortality can only occur during 1 hospitalization per patient, we repeated the mortality analysis using only each patient's first admission or last admission and using a randomly selected single admission for each patient.
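
One way to draw these single-admission subsets, continuing the sketch above:

```r
# Sketch: one hospitalization per patient for the mortality sensitivity analyses.
library(dplyr)
set.seed(1)   # reproducible random selection

first_admissions  <- assigned %>% group_by(patient_id) %>%
  slice_min(admit_datetime, n = 1, with_ties = FALSE) %>% ungroup()
last_admissions   <- assigned %>% group_by(patient_id) %>%
  slice_max(admit_datetime, n = 1, with_ties = FALSE) %>% ungroup()
random_admissions <- assigned %>% group_by(patient_id) %>%
  slice_sample(n = 1) %>% ungroup()
```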

Subgroup Analysis

To limit the effect of differing physician characteristics on the H-PA and RES teams, we separately analyzed the hospitalizations cared for by hospitalists who served on both H-PA and RES teams.

To limit the effect of the different admission schedules of the H-PA and RES teams, we separately analyzed hospitalizations with admission times between 11 AM and 4 PM. Such hospitalizations were likely to have been assigned during the noon to 3 PM overlap period, when they could be assigned to either an H-PA or a RES team.

Interactions

Finally we explored interactions between the type of team and the fixed effect variables included in each model.
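
As an illustration of this step, an interaction between team type and a categorical admission-time variable could be tested with a likelihood-ratio comparison; admit_period and the other names below are hypothetical and continue the sketches above.

```r
# Sketch: does the effect of team type on log(LOS) vary with time of admission?
library(lme4)

m_base <- lmer(log(los_days) ~ team_type + admit_period + age + n_comorbidities +
                 (1 | patient_id), data = assigned)
m_int  <- update(m_base, . ~ . + team_type:admit_period)

anova(m_base, m_int)   # likelihood-ratio test for the interaction terms
```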

Statistical Software

We performed the statistical analysis using SAS software version 9.0 for UNIX (SAS Institute, Inc., Cary, NC) and R software (The R Project for Statistical Computing).

This study protocol was approved by the hospital's institutional review board.

Results

Study Population

Of the 52,391 hospitalizations to our hospital during the study period, 13,058 were to general medicine. We excluded 3102 weekend admissions and 209 hospitalizations that met other exclusion criteria, and we could not determine the team assignment for 66. Of the remaining 9681 hospitalizations, 2171 were assigned to H-PA teams and 7510 to RES teams (Figure 1).

Figure 1
Study population (H‐PA, hospitalist‐physician assistant team; RES, traditional resident team).

Descriptive Analysis

We compare patients assigned to H-PA and RES teams in Table 2. They were similar in age, gender, race, presence of a primary care provider, and insurance status. Clinically, they had similar comorbidities and a similar distribution of common principal diagnoses. Consistent with their admitting schedule, H-PA teams admitted and discharged more patients earlier in the day and admitted more patients earlier in the work week. Patients cared for by H-PA teams were admitted from the emergency room (ER) less often and were more likely to reside on wards designated as nonmedicine by nursing specialty. Hospitalizations to H-PA teams more often included an ICU stay.

Table 2. Characteristics of Hospitalizations to Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams
Characteristic | H-PA (n = 2171) | RES (n = 7510) | P Value
Age
  Mean | 56.80 | 57.04
  Median | 56 | 56 | 0.15
  Interquartile range | 43-72 | 43-73
Age group (years), n (%) | | | 0.28
  < 20 | 10 (0.5) | 57 (0.8)
  20-29 | 186 (8.6) | 632 (8.7)
  30-39 | 221 (10.2) | 766 (10.3)
  40-49 | 387 (17.8) | 1341 (18.1)
  50-59 | 434 (20.0) | 1492 (20.2)
  60-69 | 325 (15.0) | 974 (12.8)
  70-79 | 271 (12.5) | 1035 (13.6)
  80-89 | 262 (12.0) | 951 (12.3)
  >= 90 | 75 (3.5) | 262 (3.4)
Female, n (%) | 1175 (54.1) | 4138 (55.1) | 0.42
Race, n (%) | | | 0.98
  White | 1282 (59.1) | 4419 (58.9)
  Black | 793 (36.5) | 2754 (36.7)
  Other | 96 (4.4) | 337 (4.5)
Primary care provider, n (%) | | | 0.16
  Yes | 1537 (73.2) | 5451 (74.7)
  Missing (n = 284) | 71 (3.3) | 213 (2.8)
Insurance status, n (%) | | | 0.52
  Commercial/worker's comp | 440 (20.3) | 1442 (19.2)
  Medicare | 1017 (46.8) | 3589 (47.8)
  Medicaid/others | 714 (32.9) | 2479 (33.0)
Time of admission, n (%) | | | <0.001
  0000-0259 | 167 (7.7) | 1068 (14.2)
  0300-0559 | 244 (11.2) | 485 (6.5)
  0600-0859 | 456 (21.0) | 270 (3.6)
  0900-1159 | 782 (36.0) | 1146 (15.3)
  1200-1459 | 299 (13.8) | 1750 (23.3)
  1500-1759 | 155 (7.1) | 1676 (22.3)
  1800-2359 | 68 (3.1) | 1115 (14.9)
Time of discharge, n (%) | | | <0.001
  2100-0859 | 36 (1.7) | 174 (2.3)
  0900-1159 | 275 (12.7) | 495 (6.6)
  1200-1459 | 858 (39.6) | 2608 (34.8)
  1500-1759 | 749 (34.6) | 3122 (41.6)
  1800-2059 | 249 (11.5) | 1104 (14.7)
  Missing | 4 | 7
Day of week of admission, n (%) | | | 0.001
  Monday | 462 (21.3) | 1549 (20.6)
  Tuesday | 499 (23.0) | 1470 (19.6)
  Wednesday | 430 (19.8) | 1479 (19.7)
  Thursday | 400 (18.4) | 1482 (19.7)
  Friday | 380 (17.5) | 1530 (20.4)
Day of week of discharge, n (%) | | | 0.16
  Monday | 207 (9.5) | 829 (11.0)
  Tuesday | 268 (12.3) | 973 (13.0)
  Wednesday | 334 (15.4) | 1142 (15.2)
  Thursday | 362 (16.7) | 1297 (17.3)
  Friday | 485 (22.3) | 1523 (20.3)
  Saturday | 330 (15.2) | 1165 (15.5)
  Sunday | 185 (8.5) | 581 (7.7)
Admit to non-medicine wards, n (%) | 1332 (61.4) | 2624 (34.9) | <0.001
Transfer to ICU (at least once), n (%) | 299 (13.8) | 504 (6.7) | <0.001
Admit from ER, n (%) | 1663 (76.6) | 6063 (80.7) | <0.001
10 most frequent diagnoses, % | Pneumonia (4.9) | Pneumonia (5.5)
  | Congestive heart failure, nonhypertensive (4.2) | Congestive heart failure, nonhypertensive (3.9)
  | Sickle cell anemia (3.9) | Nonspecific chest pain (3.7)
  | Chronic obstructive pulmonary disease and bronchiectasis (3.3) | Urinary tract infections (3.6)
  | Diabetes mellitus with complications (3.2) | Skin and subcutaneous tissue infections (3.3)
  | Urinary tract infections (3.2) | Sickle cell anemia (3.3)
  | Asthma (3.0) | Pancreatic disorders (not diabetes) (2.8)
  | Nonspecific chest pain (3.0) | Asthma (2.8)
  | Pancreatic disorders (not diabetes) (2.9) | Chronic obstructive pulmonary disease and bronchiectasis (2.6)
  | Septicemia (2.2) | Diabetes mellitus with complications (2.6)
Average number of comorbidities, mean (95% CI) | 0.39 (0.37-0.42) | 0.38 (0.36-0.39) | 0.23
Abbreviations: CI, confidence interval; ER, emergency room; H-PA, hospitalist-physician assistant; ICU, intensive care unit; RES, traditional resident.

In unadjusted comparisons of outcomes (Table 3), hospitalizations to H-PA teams had longer LOS and higher charges than hospitalizations to RES teams, a possibly higher inpatient mortality rate, and similar unadjusted readmission rates at 7, 14, and 30 days.

Table 3. Unadjusted Comparison of Outcomes of Hospitalizations to Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams
Outcome | H-PA (n = 2171) | RES (n = 7510) | % Difference* (CI) | P Value
LOS, days, median (IQR) | 3.17 (2.03-5.30) | 2.99 (1.80-5.08) | +8.9% (4.71% to 13.29%) | <0.001
Charges, US dollars, median (IQR) | 9390 (6196-16,239) | 9044 (6106-14,805) | +5.56% (1.96% to 9.28%) | 0.002
Readmissions, n (%) | | | Odds ratio (CI) |
  Within 7 days | 147 (6.96) | 571 (7.78) | 0.88 (0.73-1.06) | 0.19
  Within 14 days | 236 (11.34) | 924 (12.76) | 0.87 (0.75-1.01) | 0.07
  Within 30 days | 383 (18.91) | 1436 (20.31) | 0.91 (0.80-1.03) | 0.14
Inpatient deaths, n (%) | 39 (1.8) | 95 (1.3) | 1.36 (0.90-2.00) | 0.06
Abbreviations: CI, 95% confidence interval; IQR, interquartile range; LOS, length of stay.
* Comparison of log-transformed data; RES is the reference group.

Multivariable Analysis

LOS

Hospitalizations to H-PA teams were associated with a 6.73% longer LOS (P = 0.005) (Table 4). This difference persisted when we used the winsorized data (6.45% increase, P = 0.006), excluded inpatient deaths (6.81% increase, P = 0.005), or excluded hospitalizations that involved an ICU stay (6.40% increase, P = 0.011) (Table 5).

Table 4. Adjusted Comparison of Outcomes of Hospitalizations to Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams (RES Is the Reference Group)
Outcome | Overall | Subgroup: Physicians Attending on Both H-PA and RES Teams* | Subgroup: Hospitalizations Admitted Between 11 AM and 4 PM**
LOS, % difference (CI) | 6.73% (1.99% to 11.70%), P = 0.005 | 5.44% (-0.65% to 11.91%), P = 0.08 | 2.97% (-4.47% to 10.98%), P = 0.44
Charges, % difference (CI) | 2.75% (-1.30% to 6.97%), P = 0.19 | 1.55% (-3.76% to 7.16%), P = 0.57 | 6.45% (-0.62% to 14.03%), P = 0.07
Readmission within 7 days, adjusted OR (95% CI) | 0.88 (0.64-1.20), P = 0.42 | 0.74 (0.40-1.35), P = 0.32 | 0.90 (0.40-2.00), P = 0.78
Readmission within 14 days, adjusted OR (95% CI) | 0.90 (0.69-1.19), P = 0.46 | 0.71 (0.51-0.99), P = 0.05 | 0.87 (0.36-2.13), P = 0.77
Readmission within 30 days, adjusted OR (95% CI) | 0.89 (0.75-1.06), P = 0.20 | 0.75 (0.51-1.08), P = 0.12 | 0.92 (0.55-1.54), P = 0.75
Inpatient mortality, adjusted OR (95% CI) | 1.27 (0.82-1.97), P = 0.28 | 1.46 (0.67-3.17), P = 0.33 | 1.14 (0.47-2.74), P = 0.77
Abbreviations: CI, 95% confidence interval; LOS, length of stay; OR, odds ratio.
* Number of observations included in subgroup ranges from 2992 to 3196.
** Number of observations included in subgroup ranges from 3174 to 3384.
Table 5. Sensitivity Analysis: Adjusted Comparison of Outcomes of Hospitalizations to Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams (RES Is the Reference Group)
Outcome | Analysis With Winsorized Data | Analysis After Excluding Inpatient Deaths | Analysis After Excluding Patients With ICU Stays
LOS, % difference (CI) | 6.45% (4.04% to 8.91%), P = 0.006 | 6.81% (2.03% to 11.80%), P = 0.005 | 6.40% (1.46% to 11.58%), P = 0.011
Charges, % difference (CI) | 2.67% (-1.27% to 6.76%), P = 0.187 | 2.89% (-1.16% to 7.11%), P = 0.164 | 0.74% (-3.11% to 4.76%), P = 0.710
Abbreviations: CI, 95% confidence interval; ICU, intensive care unit; LOS, length of stay.

Charges

Hospitalizations to H‐PA and RES teams were associated with similar charges (Table 4). The results were similar when we used winsorized data, excluded inpatient deaths or excluded hospitalizations involving an ICU stay (Table 5).

Readmissions

The risk of readmission at 7, 14, and 30 days was similar between hospitalizations to H‐PA and RES teams (Table 4).

Mortality

The risk of inpatient death was similar for hospitalizations to H-PA and RES teams, both overall and when restricted to hospitalizations without an ICU stay (Table 4). The results also remained the same in analyses restricted to first admissions, last admissions, or one randomly selected admission per patient.

Sub‐Group Analysis

On restricting the multivariable analyses to the subset of hospitalists who staffed both types of teams (Table 4), the increase in LOS associated with H‐PA care was no longer significant (5.44% higher, P = 0.081). The charges, risk of readmission at 7 and 30 days, and risk of inpatient mortality remained similar. The risk of readmission at 14 days was slightly lower following hospitalizations to H‐PA teams (odds ratio 0.71, 95% confidence interval [CI] 0.51‐0.99).

The increase in LOS associated with H-PA care was further attenuated in analyses of the subset of admissions between 11 AM and 4 PM (2.97% higher, P = 0.444). The difference in charges approached significance (6.45% higher, P = 0.07), but the risk of readmission at 7, 14, and 30 days and the risk of inpatient mortality did not differ (Table 4).

Interactions

On adding interaction terms between team assignment and the fixed effect variables in each model, we found that the effect of H-PA care on LOS (P < 0.001) and charges (P < 0.001) varied by time of admission (Figure 2A and B). Hospitalizations to H-PA teams between 6 PM and 6 AM had greater relative increases in LOS than hospitalizations to RES teams during those times. Similarly, hospitalizations between 3 PM and 3 AM had relatively higher charges associated with H-PA care compared to RES care.

Figure 2
(A) Relative difference in length of stay associated with care by H-PA teams, by time of admission (percent change, with RES as reference). (B) Relative difference in charges associated with care by H-PA teams, by time of admission (percent change, with RES as reference). Abbreviations: H-PA, hospitalist-physician assistant team; RES, traditional resident team.

Discussion

We found that hospitalizations to our H-PA teams had longer LOS but similar charges, readmission rates, and mortality compared with traditional resident-based teams. These findings were robust to multiple sensitivity and subgroup analyses, but when we examined the times when both types of teams could receive admissions, the difference in LOS was markedly attenuated and nonsignificant.

We note that most prior reports comparing PA-based models of inpatient care predate the ACGME work hour regulations. In a randomized controlled trial (1987-1988), Simmer et al.5 showed lower lengths of stay and charges but a possibly higher risk of readmission for PA-based teams compared with resident-based teams. Van Rhee et al.7 conducted a nonrandomized retrospective cohort study (1994-1995) using administrative data that showed lower resource utilization for PA-based inpatient care. Our results from 2005 to 2006 reflect the important changes in the organization and delivery of inpatient care since these previous investigations.

Roy et al.8 report the only previously published comparison of PA-based and resident-based GM inpatient care after the ACGME-mandated work hour regulations. They found that PA-based care was associated with lower costs, whereas we found similar charges for admissions to RES and H-PA teams. They also found that LOS was similar for PA-based and resident-based care, while we found a higher LOS for admissions to our H-PA teams. Although the design of Roy's study was similar to our own, patients cared for by PA-based teams were geographically localized in their model, which may contribute to the differences in results between our studies.

Despite there being no designed differences, other than time of admission, in the patients assigned to either type of team, we noted some differences between the H-PA and RES teams in the descriptive analysis. These differences, such as a higher proportion of hospitalizations to H-PA teams being admitted from the ER, residing on nonmedicine wards, or having an ICU stay, are likely a result of our system of assigning admissions to H-PA teams early in the workday. For example, patients on H-PA teams were more often located on nonmedicine wards as a result of later discharges and bed availability on medicine wards. The difference that deserves special comment is the much higher proportion of hospitalizations with an ICU stay on the H-PA teams (13.8% vs. 6.7%). Hospitalizations directly to the ICU were excluded from our study, which means that the hospitalizations with an ICU stay in our study were initially admitted to either H-PA or RES teams and then transferred to the ICU. Transfers out of the ICU usually occurred early in the workday, when H-PA teams accepted patients per our admission schedule. These patients may have been preferentially assigned to H-PA teams if, on returning from the ICU, the original team's resident had changed (and the bounce-back rule was not in effect). Importantly, the conclusions of our research are not altered by controlling for this difference between the teams, that is, by excluding hospitalizations with an ICU stay.

Hospitalizations to H-PA teams were associated with higher resource utilization when they occurred later in the day or overnight (Figure 2A and B). During these times, a transition of care occurred shortly after admission: for a late-day admission, the H-PA team would hand off care for overnight cross-coverage soon after the admission, and for patients admitted overnight as overflow, the H-PA team would assume care from the nighttime covering physician who performed the admission. On RES teams, in contrast, interns admitting patients overnight continued to care for their patients for part of the following day (30-hour call). Similar findings of higher resource utilization associated with transfer of care after admission in the daytime11 and nighttime12 have been reported previously. An alternative hypothesis for our findings is that the hospital may be busier, and thus less efficient, during the times when H-PA teams had to admit later in the day or accept patients admitted overnight as overflow. Future research to determine the cause of this significant interaction between team assignment and time of admission is important, because the large increases in LOS (up to 30%) and charges (up to 50%) that we noted could have a substantial impact if a higher proportion of hospitalizations were affected by this phenomenon.

Our H-PA teams were assigned patients as complex as those assigned to our RES teams, in contrast to previous reports.8,13 This was accomplished while improving the residents' educational experience: we have previously reported increases in our residents' board pass rates and in-service training examination scores with the introduction of our H-PA teams.14 We therefore believe that selecting less complex patients for H-PA teams such as ours is unnecessary and may give such teams second-tier status in academic settings.

Our report has limitations. It is a retrospective, nonrandomized investigation using a single institution's administrative database; it therefore cannot account for unmeasured confounders, severity of illness, errors in the database, or selection bias, and it has limited generalizability. We measured charges rather than actual costs,15 but we believe charges reasonably reflect relative resource use when compared between similar patients within a single institution. We also did not account for readmissions to other hospitals,16 so our results do not reflect resource utilization for the healthcare system as a whole; for example, we could not tell whether the higher LOS on H-PA teams resulted in fewer readmissions for their patients across all hospitals in the region, which might reveal an overall resource savings. Additionally, we measured in-hospital mortality and could not capture deaths related to hospital care that occur shortly after discharge.

The ACGME has proposed revised standards that may further restrict resident duty hours when they take effect in July 2011.3 This may lead to further decreases in resident-based inpatient care, and teaching hospitals will need to continue to develop alternate models of inpatient care that do not depend on house staff. Our findings provide evidence to inform the development of such models. Our study shows that one such model (PAs paired with hospitalists, accepting admissions early in the workday, with hospitalist coverage on nights and weekends) can care for GM inpatients as complex as those cared for by resident-based teams without increasing readmission rates, inpatient mortality, or charges, at the cost of a slightly longer LOS.

References
  1. ACGME. Common Program Requirements for Resident Duty Hours. Available at: http://www.acgme.org/acWebsite/dutyHours/dh_ComProgrRequirmentsDutyHours0707.pdf. Accessed July 2010.
  2. Sehgal NL, Shah HM, Parekh VI, Roy CL, Williams MV. Non-housestaff medicine services in academic centers: models and challenges. J Hosp Med. 2008;3(3):247-255.
  3. ACGME. Duty Hours: Proposed Standards for Review and Comment. Available at: http://acgme-2010standards.org/pdf/Proposed_Standards.pdf. Accessed July 22, 2010.
  4. Agency for Health Care Policy and Research. HCUPnet: a tool for identifying, tracking, and analyzing national hospital statistics. Available at: http://hcup.ahrq.gov/HCUPnet.asp. Accessed July 2010.
  5. Simmer TL, Nerenz DR, Rutt WM, Newcomb CS, Benfer DW. A randomized, controlled trial of an attending staff service in general internal medicine. Med Care. 1991;29(7 suppl):JS31-JS40.
  6. Dhuper S, Choksi S. Replacing an academic internal medicine residency program with a physician assistant-hospitalist model: a comparative analysis study. Am J Med Qual. 2009;24(2):132-139.
  7. Rhee JV, Ritchie J, Eward AM. Resource use by physician assistant services versus teaching services. JAAPA. 2002;15(1):33-42.
  8. Roy CL, Liang CL, Lund M, et al. Implementation of a physician assistant/hospitalist service in an academic medical center: impact on efficiency and patient outcomes. J Hosp Med. 2008;3(5):361-368.
  9. AHRQ. Clinical Classifications Software (CCS) for ICD-9-CM. Available at: http://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp#overview. Accessed July 2010.
  10. AHRQ. HCUP Comorbidity Software, version 3.4. Available at: http://www.hcup-us.ahrq.gov/toolssoftware/comorbidity/comorbidity.jsp. Accessed July 2010.
  11. Schuberth JL, Elasy TA, Butler J, et al. Effect of short call admission on length of stay and quality of care for acute decompensated heart failure. Circulation. 2008;117(20):2637-2644.
  12. Lofgren RP, Gottlieb D, Williams RA, Rich EC. Post-call transfer of resident responsibility: its effect on patient care. J Gen Intern Med. 1990;5(6):501-505.
  13. O'Connor AB, Lang VJ, Lurie SJ, Lambert DR, Rudmann A, Robbins B. The effect of nonteaching services on the distribution of inpatient cases for internal medicine residents. Acad Med. 2009;84(2):220-225.
  14. Singh S, Petkova JH, Gill A, et al. Allowing for better resident education and improving patient care: hospitalist-physician assistant teams fill in the gaps. J Hosp Med. 2007;2(S2):139.
  15. Finkler SA. The distinction between cost and charges. Ann Intern Med. 1982;96(1):102-109.
  16. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare Fee-for-Service Program. N Engl J Med. 2009;360(14):1418-1428.
Article PDF
Issue
Journal of Hospital Medicine - 6(3)
Publications
Page Number
122-130
Legacy Keywords
education, outcomes measurement, physician assistant, resident
Sections
Article PDF
Article PDF

In 2003 the Accreditation Council for Graduate Medical Education (ACGME) prescribed residency reform in the form of work hour restrictions without prescribing alternatives to resident based care.1 As a response, many academic medical centers have developed innovative models for providing inpatient care, some of which incorporate Physician Assistants (PAs).2 With further restrictions in resident work hours possible,3 teaching hospitals may increase use of these alternate models to provide inpatient care. Widespread implementation of such new and untested models could impact the care of the approximately 20 million hospitalizations that occur every year in US teaching hospitals.4

Few reports have compared the care delivered by these alternate models with the care provided by traditional resident‐based models of care.58 Roy et al.8 have provided the only recent comparison of a PA‐based model of care with a resident‐based model. They showed lower adjusted costs of inpatient care associated with PA based care but other outcomes were similar to resident‐based teams.

The objective of this study is to provide a valid and usable comparison of the outcomes of a hospitalist‐PA (H‐PA) model of inpatient care with the traditional resident‐based model. This will add to the quantity and quality of the limited research on PA‐based inpatient care, and informs the anticipated increase in the involvement of PAs in this arena.

Methods

Study Design and Setting

We conducted a retrospective cohort study at a 430‐bed urban academic medical center in the Midwestern United States.

Models of General Medical (GM) Inpatient Care at the Study Hospital During the Study Period

In November 2004, as a response to the ACGME‐mandated work hour regulations, we formed 2 Hospitalist‐PA teams (H‐PA) to supplement the 6 preexisting general medicine resident teams (RES).

The H‐PA and RES teams differed in staffing, admitting times and weekend/overnight cross coverage structure (Table 1). There were no predesigned differences between the teams in the ward location of their patients, availability of laboratory/radiology services, specialty consultation, social services/case management resources, nursing resources or documentation requirements for admission, daily care, and discharge.

Differences in Structure and Function Between Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams
 H‐PA TeamsRES Teams
Attending physicianAlways a hospitalistHospitalist, non‐hospitalist general internist or rarely a specialist
Attending physician roleSupervisory for some patients (about half) and sole care provider for others.Supervisory for all patients
Team compositionOne attending paired with 1 PAAttending + senior resident + (2) interns + (2‐3) medical students
Rotation schedule  
AttendingEvery 2 weeksEvery 2 weeks
Physician assistantOff on weekends 
House staff & medical students Every month
WeekendNo new admissions & hospitalist manages all patientsAccept new admissions
Admission times (weekdays)7 AM to 3 PMNoon to 7 AM
Source of admissionsEmergency room, clinics, other hospitalsEmergency room, clinics, other hospitals
Number of admissions (weekdays)4‐6 patients per day per teamNoon to 5 PM: 2 teams admit a maximum of 9 patients total
  5 PM to 7 AM: 3 teams admit a maximum 5 patients each.
Overnight coverageroles and responsibilitiesOne in‐house faculty3 on call interns
 Cross‐covering 2 H‐PA teamsCross‐covering 2 teams each
 Performing triageAdmitting up to 5 patients each
 Admitting patients if necessary 
 Assisting residents if necessary 
 General medical consultation 

Admission Schedule for H‐PA or RES Teams

The admitting schedule was designed to decrease the workload of the house staff and to do so specifically during the periods of peak educational activity (morning report, attending‐led teaching rounds, and noon report). A faculty admitting medical officer (AMO) assigned patients strictly based on the time an admission was requested. Importantly, the request for admission preceded the time of actual admission recorded when the patient reached the ward. The time difference between request for admission and actual admission depended on the source of admission and the delay associated with assigning a patient room. The AMO assigned 8 to 12 new patients to the H‐PA teams every weekday between 7 AM and 3 PM and to the RES teams between noon and 7 AM the next day. There was a designed period of overlap from noon to 3 PM during which both H‐PA and RES teams could admit patients. This period allowed for flexibility in assigning patients to either type of team depending on their workload. The AMO did not use patient complexity or teaching value to assign patients.

Exceptions to Admission Schedule

Patients admitted overnight after the on call RES had reached their admission limits were assigned to H‐PA teams the next morning. In addition, recently discharged patients who were readmitted while the discharging hospitalist (H‐PA teams) or the discharging resident (RES teams) was still scheduled for inpatient duties, were assigned back to the discharging team irrespective of the admitting schedule.

The same medicine team cared for a patient from admission to discharge but on transfer to the intensive care unit (ICU), an intensivist led critical care team assumed care. On transfer out of the ICU these patients were assigned back to the original team irrespective of admitting schedulethe so called bounce back rule to promote inpatient continuity of care. But if the residents (RES teams) or the hospitalist (H‐PA teams) had changedthe bounce back rule was no longer in effect and these patients were assigned to a team according to the admission schedule.

Study Population and Study Period

We included all hospitalizations of adult patients to GM teams if both their date of admission and their date of discharge fell within the study period (January 1, 2005 to December 31, 2006). We excluded hospitalizations with admissions during the weekendwhen H‐PA teams did not admit patients; hospitalizations to GM services with transfer to nonGM service (excluding ICU) and hospitalizations involving comanagement with specialty servicesas the contribution of GM teams for these was variable; and hospitalizations of private patients.

Data Collection and Team Assignment

We collected patient data from our hospital's discharge abstract database. This database did not contain team information so to assign teams we matched the discharging attending and the day of discharge to the type of team that the discharging attending was leading that day.

We collected patient age, gender, race, insurance status, zip‐code, primary care provider, source of admission, ward type, time and day of admission, and time and day of discharge for use as independent variables. The time of admission captured in the database was the time of actual admission and not the time the admission was requested.

We grouped the principal diagnosis International Statistical Classification of Diseases and Related Health Problems, 9th edition (ICD‐9) codes into clinically relevant categories using the Clinical Classification Software.9 We created comorbidity measures using Healthcare Cost and Utilization Project Comorbidity Software, version 3.4.10

Outcome Measures

We used length of stay (LOS), charges, readmissions within 7, 14, and 30 days and inpatient mortality as our outcome measures. We calculated LOS by subtracting the discharge day and time from the admission day and time. The LOS included time spent in the ICU. We summed all charges accrued during the entire hospitalization including any stay in the ICU but did not include professional fees. We considered any repeat hospitalization to our hospital within 7, 14, and 30 days following a discharge to be a readmission except that we excluded readmissions for a planned procedure or for inpatient rehabilitation.

Statistical Analysis

Descriptive Analysis

We performed unadjusted descriptive statistics at the level of an individual hospitalization using medians and interquartile ranges for continuous data and frequencies and percentages for categorical data. We used chi‐square tests of association and KruskalWallis analysis of variance to compare H‐PA and RES teams.

Missing Data

Because we lacked data on whether a primary outpatient care provider was available for 284 (2.9%) of our study hospitalizations, we dropped them from our multivariable analyses. We used an arbitrary discharge time of noon for the 11 hospitalizations which did not have a discharge time recorded.

Multivariable Analysis

We used multivariable mixed models to risk adjust for a wide variety of variables. We included age, gender, race, insurance, presence of primary care physician, and total number of comorbidities as fixed effects in all models because of the high face validity of these variables. We then added admission source, ward, time, day of week, discharge day of week, and comorbidity measures one by one as fixed effects, including them only if significant at P < 0.01. For assessing LOS, charges, and readmissions, we added a variable identifying each patient as a random effect to account for multiple admissions for the same patient. We then added variables identifying attending physician, principal diagnostic group, and ZIP code of residence as random effects to account for clustering of hospitalizations within these categories, including them only if significant at P < 0.01. For the model assessing mortality we included variables for attending physician, principal diagnostic group, and ZIP code of residence as random effects if significant at P < 0.01. We log transformed LOS and charges because they were extremely skewed in nature. Readmissions were analyzed after excluding patients who died or were discharged alive within 7, 14, or 30 days of the end of the study period.

Sensitivity Analyses

To assess the influence of LOS outliers, we changed LOS to 6 hours if it was less than 6 hours, and 45 days if it was more than 45 daysa process called winsorizing. We consider winsorizing superior to dropping outliers because it acknowledges that outliers contribute information, but prevent them from being too influential. We chose the 6 hour cut off because we believed that was the minimum time required to admit and then discharge a patient. We chose the upper limit of 45 days on reviewing the frequency distribution for outliers. Similarly, we winsorized charges at the first and 99th percentile after reviewing the frequency distribution for outliers. We then log transformed the winsorized data before analysis.

Inpatient deaths reduce the LOS and charges associated with a hospitalization. Thus excess mortality may provide a false concession in terms of lower LOS or charges. To check if this occurred in our study we repeated the analyses after excluding inpatient deaths.

ICU stays are associated with higher LOS, charges, and mortality. In our model of care, some patients transferred to the ICU are not cared for by the original team on transfer out. Moreover, care in the ICU is not controlled by the team that discharges them. Since this might obscure differences in outcomes achieved by RES vs. H‐PA teams, we repeated these analyses after excluding hospitalizations with an ICU stay.

Since mortality can only occur during 1 hospitalization per patient, we repeated the mortality analysis using only each patient's first admission or last admission and using a randomly selected single admission for each patient.

Subgroup Analysis

To limit the effect of different physician characteristics on H‐PA and RES teams we separately analyzed the hospitalizations under the care of hospitalists who served on both H‐PA and RES teams.

To limit the effect of different admission schedules of H‐PA and RES teams we analyzed the hospitalizations with admission times between 11.00 AM and 4.00 PM. Such hospitalizations were likely to be assigned during the noon to 3 PM period when they could be assigned to either an H‐PA or RES team.

Interactions

Finally we explored interactions between the type of team and the fixed effect variables included in each model.

Statistical Software

We performed the statistical analysis using SAS software version 9.0 for UNIX (SAS Institute, Inc., Cary, NC) and R software (The R Project for Statistical Computing).

This study protocol was approved by the hospital's institutional review board.

Results

Study Population

Of the 52,391 hospitalizations to our hospital during the study period, 13,058 were admitted to general medicine. We excluded 3102 weekend admissions and 209 who met other exclusion criteria. We could not determine the team assignment for 66. Of the remaining 9681 hospitalizations, we assigned 2171 to H‐PA teams and 7510 to RES teams (Figure 1).

Figure 1
Study population (H‐PA, hospitalist‐physician assistant team; RES, traditional resident team).

Descriptive Analysis

We compare patients assigned to H‐PA and RES teams in Table 2. They were similar in age, gender, race, having a primary care provider or not, and insurance status. Clinically, they had similar comorbidities and a similar distribution of common principal diagnoses. Consistent with their admitting schedule, H‐PA teams admitted and discharged more patients earlier in the day and admitted more patients earlier in the work week. Patients cared for by H‐PA teams were admitted from the Emergency Room (ER) less often and were more likely to reside on wards designated as nonmedicine by nursing specialty. Hospitalizations to H‐PA teams more often included an ICU stay.

Characteristics of Hospitalization to Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams
 H‐PA (n = 2171)RES (n = 7510)P Value
  • Abbreviations: CI, confidence interval; ER, emergency room; H‐PA, hospitalist‐physician assistant; ICU, Intensive care unit; RES, traditional resident.

Age   
Mean56.8057.04 
Median56560.15
Interquartile range43‐7243‐73 
Age group (years), n (%)   
< 2010 (0.5)57 (0.8) 
20‐29186 (8.6)632 (8.7) 
30‐39221 (10.2)766 (10.3) 
40‐49387 (17.8)1341 (18.1) 
50‐59434 (20.0)1492 (20.2)0.28
60‐69325 (15.0)974 (12.8) 
70‐79271 (12.5)1035 (13.6) 
80‐89262 (12.0)951(12.3) 
90<75 (3.5)262 (3.4) 
Female, n (%)1175 (54.1)4138 (55.1)0.42
Race, n (%)   
White1282 (59.1)4419 (58.9) 
Black793 (36.5)2754 (36.7)0.98
Other96 (4.4)337 (4.5) 
Primary care provider, n (%)  0.16
Yes1537 (73.2)5451 (74.7) 
Missing: 28471 (3.3)213 (2.8) 
Insurance status, n (%)   
Commercial/worker's comp440 (20.3)1442 (19.2) 
Medicare1017 (46.8)3589 (47.8)0.52
Medicaid/others714 (32.9)2479 (33.0) 
Time of admission, n (%)   
0000‐0259167 (7.7)1068 (14.2) 
0300‐0559244 (11.2)485 (6.5) 
0600‐0859456 (21.0)270 (3.6) 
0900‐1159782 (36.0)1146 (15.3)<0.001
1200‐1459299 (13.8)1750 (23.3) 
1500‐1759155 (7.1)1676 (22.3) 
1800‐235968 (3.1)1115 (14.9) 
Time of discharge, n (%)   
2100‐085936 (1.7)174 (2.3) 
0900‐1159275 (12.7)495 (6.6) 
1200‐1459858 (39.6)2608 (34.8)<0.001
1500‐1759749 (34.6)3122 (41.6) 
1800‐2059249 (11.5)1104 (14.7) 
Missing47 
Day of week of admission, n (%)   
Monday462 (21.3)1549 (20.6) 
Tuesday499 (23.0)1470 (19.6) 
Wednesday430 (19.8)1479 (19.7)0.001
Thursday400 (18.4)1482 (19.7) 
Friday380 (17.5)1530 (20.4) 
Day of week of discharge, n (%)   
Monday207 (9.5)829 (11.0) 
Tuesday268 (12.3)973 (13.0) 
Wednesday334 (15.4)1142 (15.2) 
Thursday362 (16.7)1297 (17.3)0.16
Friday485 (22.3)1523 (20.3) 
Saturday330 (15.2)1165 (15.5) 
Sunday185 (8.5)581 (7.7) 
Admit to non‐medicine wards, n (%)1332 (61.4)2624 (34.9)<0.001
Transfer to ICU (at least once), n (%)299 (13.8)504 (6.7)<0.001
Admit from ER No (%)1663 (76.6)6063 (80.7)<0.001
10 most frequent diagnosis (%)Pneumonia (4.9)Pneumonia (5.5) 
 Congestive heart failure; nonhypertensive (4.2)Congestive heart failure; nonhypertensive (3.9) 
 Sickle cell anemia (3.9)Nonspecific chest pain (3.7) 
 Chronic obstructive pulmonary disease and Bronchiectasis (3.3)Urinary tract infections(3.6) 
 Diabetes mellitus with complications (3.2)Skin and subcutaneous tissue infections (3.3) 
 Urinary tract infections (3.2)Sickle cell anemia (3.3) 
 Asthma (3.0)Pancreatic disorders (not diabetes) (2.8) 
 Nonspecific chest pain (3.0)Asthma (2.8) 
 Pancreatic disorders (not diabetes) (2.9)Chronic obstructive pulmonary disease and Bronchiectasis (2.6) 
 Septicemia (2.2)Diabetes mellitus with complications (2.6) 
Average number of comorbidities mean (95% CI)0.39 (0.37‐0.42)0.38 (0.36‐0.39)0.23

In unadjusted comparisons of outcomes (Table 3), hospitalizations on H‐PA teams had higher lengths of stay and charges than hospitalizations on RES teams, possibly higher inpatient mortality rates but similar unadjusted readmission rates at 7, 14, and 30 days

Unadjusted Comparison of Outcomes of Hospitalization to Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams
 H‐PA (n = 2171)RES (n = 7150)% Difference* (CI)P Value
  • Abbreviations: CI, 95% confidence intervals; IQR, interquartile range; LOS, length of stay;

  • On comparing log transformed LOS;

  • RES is reference group.

LOSMedian (IQR)Median (IQR)  
Days3.17 (2.03‐5.30)2.99 (1.80‐5.08)+8.9% (4.71‐13.29%)<0.001
Charges    
US Dollars9390 (6196‐16,239)9044 (6106‐14,805)+5.56% (1.96‐9.28%)0.002
Readmissionsn (%)n (%)Odds Ratio (CI) 
Within 7 days147 (6.96)571 (7.78)0.88 (0.73‐1.06)0.19
Within14 days236 (11.34)924 (12.76)0.87 (0.75‐1.01)0.07
Within 30 days383 (18.91)1436 (20.31)0.91 (0.80‐1.03)0.14
Inpatient deaths39 (1.8)95 (1.3)1.36 (0.90‐2.00)0.06

Multivariable Analysis

LOS

Hospitalizations to H‐PA teams were associated with a 6.73% longer LOS (P = 0.005) (Table 4). This difference persisted when we used the winsorized data (6.45% increase, P = 0.006), excluded inpatient deaths (6.81% increase, P = 0.005), or excluded hospitalizations that involved an ICU stay (6.40%increase, P = 0.011) (Table 5).

Adjusted Comparison of Outcomes of Hospitalization to Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams (RES is the reference group)
 OverallSubgroup: Restricted to Physicians Attending on Both H‐PA and RES Teams*Subgroup: Restricted to Hospitalizations Between 11.00 AM and 4.00 PM
% Difference (CI)P Value% Difference (CI)P Value% Difference (CI)P Value
  • Abbreviations: CI, 95% confidence intervals; LOS, length of stay; OR, odds ratio;

  • Number of observations included in subgroup ranges from 2992 to 3196;

  • Number of observations included in subgroup ranges from 3174 to 3384.

LOS6.73% (1.99% to 11.70%)0.0055.44% (0.65% to 11.91%)0.082.97% (4.47% to 10.98%)0.44
Charges2.75% (1.30% to 6.97%)0.191.55% (3.76% to 7.16%)0.576.45% (0.62% to 14.03%)0.07
Risk of ReadmissionAdjusted OR (95%CI)P ValueAdjusted OR (95% CI)P ValueAdjusted OR (95% CI)P Value
Within 7 days0.88 (0.64‐1.20)0.420.74 (0.40‐1.35)0.320.90 (0.40‐2.00)0.78
Within14 days0.90 (0.69‐1.19)0.460.71 (0.51‐0.99)0.050.87 (0.36‐2.13)0.77
Within 30 days0.89 (0.75‐1.06)0.200.75 (0.51‐1.08)0.120.92 (0.55‐1.54)0.75
Inpatient mortality1.27 (0.82‐1.97)0.281.46 (0.67‐3.17)0.331.14 (0.47‐2.74)0.77
Sensitivity Analysis: Adjusted Comparison of Outcomes of Hospitalization to Hospitalist‐Physician Assistant (H‐PA) and Traditional Resident (RES) Teams (RES Is the Reference Group)
 Analysis With Winsorized DataAnalysis After Excluding Inpatient DeathsAnalysis After Excluding Patients With ICU Stays
% Difference (CI)P Value% Difference (CI)P Value% Difference (CI)P Value
  • Abbreviations: CI, 95% confidence intervals; ICU, intensive care unit; LOS, length of stay; OR, odds ratio.

LOS6.45% (4.04 to 8.91%)0.0066.81% (2.03 to 11.80%)0.0056.40% (1.46 to 11.58%)0.011
Charges2.67 (1.27 to 6.76%)0.1872.89% (1.16 to 7.11%)0.1640.74% (3.11 to 4.76%)0.710

Charges

Hospitalizations to H‐PA and RES teams were associated with similar charges (Table 4). The results were similar when we used winsorized data, excluded inpatient deaths or excluded hospitalizations involving an ICU stay (Table 5).

Readmissions

The risk of readmission at 7, 14, and 30 days was similar between hospitalizations to H‐PA and RES teams (Table 4).

Mortality

The risk of inpatient death was similar between all hospitalizations to H‐PA and RES teams or only hospitalizations without an ICU stay (Table 4). The results also remained the same in analyses restricted to first admissions, last admissions, or 1 randomly selected admission per patient.

Sub‐Group Analysis

On restricting the multivariable analyses to the subset of hospitalists who staffed both types of teams (Table 4), the increase in LOS associated with H‐PA care was no longer significant (5.44% higher, P = 0.081). The charges, risk of readmission at 7 and 30 days, and risk of inpatient mortality remained similar. The risk of readmission at 14 days was slightly lower following hospitalizations to H‐PA teams (odds ratio 0.71, 95% confidence interval [CI] 0.51‐0.99).

The increase in LOS associated with H‐PA care was further attenuated in analyses of the subset of admissions between 11.00 AM and 4.00 PM (2.97% higher, P = 0.444). The difference in charges approached significance (6.45% higher, P = 0.07), but risk of readmission at 7, 14, and 30 days and risk of inpatient mortality were no different (Table 4).

Interactions

On adding interaction terms between the team assignment and the fixed effect variables in each model we detected that the effect of H‐PA care on LOS (P < 0.001) and charges (P < 0.001) varied by time of admission (Figure 2a and b). Hospitalizations to H‐PA teams from 6.00 PM to 6.00 AM had greater relative increases in LOS as compared to hospitalizations to RES teams during those times. Similarly, hospitalizations during the period 3.00 PM to 3.00 AM had relatively higher charges associated with H‐PA care compared to RES care.

Figure 2
(A) Relative difference in length of stay associated with care by H‐PA teams by times of admission (in percent change with RES as reference). (B) Relative difference in charges associated with care by H‐PA teams by time of admission (in percent with RES as reference). Abbreviations: H‐PA, hospitalist‐physician assistant team; RES traditional resident team. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

Discussion

We found that hospitalizations to our H‐PA teams had longer LOS but similar charges, readmission rates, and mortality as compared to traditional resident‐based teams. These findings were robust to multiple sensitivity and subgroup analyses but when we examined times when both types of teams could receive admissions, the difference in LOS was markedly attenuated and nonsignificant.

We note that most prior reports comparing PA‐based models of inpatient care predate the ACGME work hour regulations. In a randomized control trial (1987‐1988) Simmer et al.5 showed lower lengths of stay and charges but possibly higher risk of readmission for PA based teams as compared to resident based teams. Van Rhee et al.7 conducted a nonrandomized retrospective cohort study (1994‐1995) using administrative data which showed lower resource utilization for PA‐based inpatient care. Our results from 2005 to 2006 reflect the important changes in the organization and delivery of inpatient care since these previous investigations.

Roy et al.8 report the only previously published comparison of PA and resident based GM inpatient care after the ACGME mandated work hour regulations. They found PA‐based care was associated with lower costs, whereas we found similar charges for admissions to RES and H‐PA teams. They also found that LOS was similar for PA and resident‐based care, while we found a higher LOS for admissions to our H‐PA team. We note that although the design of Roy's study was similar to our own, patients cared for by PA‐based teams were geographically localized in their model. This may contribute to the differences in results noted between our studies.

Despite no designed differences in patients assigned to either type of team other than time of admission we noted some differences between the H‐PA and RES teams in the descriptive analysis. These differences, such as a higher proportion of hospitalizations to H‐PA teams being admitted from the ER, residing on nonmedicine wards or having an ICU stay are likely a result of our system of assigning admissions to H‐PA teams early during the workday. For example patients on H‐PA teams were more often located on nonmedicine wards as a result of later discharges and bed availability on medicine wards. The difference that deserves special comment is the much higher proportion (13.8% vs. 6.7%) of hospitalizations with an ICU stay on the H‐PA teams. Hospitalizations directly to the ICU were excluded from our study which means that the hospitalizations with an ICU stay in our study were initially admitted to either H‐PA or RES teams and then transferred to the ICU. Transfers out of the ICU usually occur early in the workday when H‐PA teams accepted patients per our admission schedule. These patients may have been preferentially assigned to H‐PA teams, if on returning from the ICU the original team's resident had changed (and the bounce back rule was not in effect). Importantly, the conclusions of our research are not altered on controlling for this difference in the teams by excluding hospitalizations with an ICU stay.

Hospitalizations to H‐PA teams were associated with higher resource utilization if they occurred later in the day or overnight (Figure 2a and b). During these times a transition of care occurred shortly after admission. For a late day admission the H‐PA teams would transfer care for overnight cross cover soon after the admission and for patients admitted overnight as overflow they would assume care of a patient from the nighttime covering physician performing the admission. On the other hand, on RES teams, interns admitting patients overnight continued to care for their patients for part of the following day (30‐hour call). Similar findings of higher resource utilization associated with transfer of care after admission in the daytime11 and nighttime12 have been previously reported. An alternative hypothesis for our findings is that the hospital maybe busier and thus less efficient during times when H‐PA teams had to admit later in the day or accept patients admitted overnight as overflow. Future research to determine the cause of this significant interaction between team assignment and time of admission on resource utilization is important as the large increases in LOS (up to 30%) and charges (up to 50%) noted, could have a potentially large impact if a higher proportion of hospitalizations were affected by this phenomenon.

Our H‐PA teams were assigned equally complex patients as our RES teams, in contrast to previous reports.8, 13 This was accomplished while improving the resident's educational experience and we have previously reported increases in our resident's board pass rates and in‐service training exam scores with that introduction of our H‐PA teams.14 We thus believe that selection of less complex patients to H‐PA teams such as ours is unnecessary and may give them a second tier status in academic settings.

Our report has limitations. It is a retrospective, nonrandomized investigation using a single institution's administrative database and has the limitations of not being able to account for unmeasured confounders, severity of illness, errors in the database, selection bias and has limited generalizability. We measured charges not actual costs,15 but we feel charges are a true reflection of relative resource use when compared between similar patients within a single institution. We also did not account for the readmissions that occur to other hospitals16 and our results do not reflect resource utilization for the healthcare system in total. For example, we could not tell if higher LOS on H‐PA teams resulted in lower readmissions for their patients in all hospitals in the region, which may reveal an overall resource savings. Additionally, we measured in‐hospital mortality and could not capture deaths related to hospital care that may occur shortly after discharge.

ACGME has proposed revised standards that may further restrict resident duty hours when they take effect in July 2011.3 This may lead to further decreases in resident‐based inpatient care. Teaching hospitals will need to continue to develop alternate models for inpatient care that do not depend on house staff. Our findings provide important evidence to inform the development of such models. Our study shows that one such model: PAs paired with hospitalists, accepting admissions early in the workday, with hospitalist coverage over the weekend and nights can care for GM inpatients as complex as those cared for by resident‐based teams without increasing readmission rates, inpatient mortality, or charges but at the cost of slightly higher LOS.

In 2003 the Accreditation Council for Graduate Medical Education (ACGME) prescribed residency reform in the form of work hour restrictions without prescribing alternatives to resident based care.1 As a response, many academic medical centers have developed innovative models for providing inpatient care, some of which incorporate Physician Assistants (PAs).2 With further restrictions in resident work hours possible,3 teaching hospitals may increase use of these alternate models to provide inpatient care. Widespread implementation of such new and untested models could impact the care of the approximately 20 million hospitalizations that occur every year in US teaching hospitals.4

Few reports have compared the care delivered by these alternate models with the care provided by traditional resident‐based models of care.58 Roy et al.8 have provided the only recent comparison of a PA‐based model of care with a resident‐based model. They showed lower adjusted costs of inpatient care associated with PA based care but other outcomes were similar to resident‐based teams.

The objective of this study is to provide a valid and usable comparison of the outcomes of a hospitalist‐PA (H‐PA) model of inpatient care with the traditional resident‐based model. This will add to the quantity and quality of the limited research on PA‐based inpatient care, and informs the anticipated increase in the involvement of PAs in this arena.

Methods

Study Design and Setting

We conducted a retrospective cohort study at a 430‐bed urban academic medical center in the Midwestern United States.

Models of General Medical (GM) Inpatient Care at the Study Hospital During the Study Period

In November 2004, as a response to the ACGME‐mandated work hour regulations, we formed 2 Hospitalist‐PA teams (H‐PA) to supplement the 6 preexisting general medicine resident teams (RES).

The H‐PA and RES teams differed in staffing, admitting times and weekend/overnight cross coverage structure (Table 1). There were no predesigned differences between the teams in the ward location of their patients, availability of laboratory/radiology services, specialty consultation, social services/case management resources, nursing resources or documentation requirements for admission, daily care, and discharge.

Table 1. Differences in Structure and Function Between Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams

Feature | H-PA Teams | RES Teams
Attending physician | Always a hospitalist | Hospitalist, non-hospitalist general internist, or rarely a specialist
Attending physician role | Supervisory for some patients (about half) and sole care provider for others | Supervisory for all patients
Team composition | One attending paired with 1 PA | Attending + senior resident + 2 interns + 2-3 medical students
Rotation schedule: attending | Every 2 weeks | Every 2 weeks
Rotation schedule: physician assistant | Off on weekends |
Rotation schedule: house staff and medical students | | Every month
Weekend | No new admissions; hospitalist manages all patients | Accept new admissions
Admission times (weekdays) | 7 AM to 3 PM | Noon to 7 AM
Source of admissions | Emergency room, clinics, other hospitals | Emergency room, clinics, other hospitals
Number of admissions (weekdays) | 4-6 patients per day per team | Noon to 5 PM: 2 teams admit a maximum of 9 patients total; 5 PM to 7 AM: 3 teams admit a maximum of 5 patients each
Overnight coverage: roles and responsibilities | One in-house faculty member cross-covering 2 H-PA teams, performing triage, admitting patients if necessary, assisting residents if necessary, and providing general medical consultation | 3 on-call interns, each cross-covering 2 teams and admitting up to 5 patients each

Admission Schedule for H‐PA or RES Teams

The admitting schedule was designed to decrease the workload of the house staff and to do so specifically during the periods of peak educational activity (morning report, attending‐led teaching rounds, and noon report). A faculty admitting medical officer (AMO) assigned patients strictly based on the time an admission was requested. Importantly, the request for admission preceded the time of actual admission recorded when the patient reached the ward. The time difference between request for admission and actual admission depended on the source of admission and the delay associated with assigning a patient room. The AMO assigned 8 to 12 new patients to the H‐PA teams every weekday between 7 AM and 3 PM and to the RES teams between noon and 7 AM the next day. There was a designed period of overlap from noon to 3 PM during which both H‐PA and RES teams could admit patients. This period allowed for flexibility in assigning patients to either type of team depending on their workload. The AMO did not use patient complexity or teaching value to assign patients.

Exceptions to Admission Schedule

Patients admitted overnight after the on-call RES teams had reached their admission limits were assigned to H-PA teams the next morning. In addition, recently discharged patients who were readmitted while the discharging hospitalist (H-PA teams) or the discharging resident (RES teams) was still scheduled for inpatient duties were assigned back to the discharging team, irrespective of the admitting schedule.

The same medicine team cared for a patient from admission to discharge, but on transfer to the intensive care unit (ICU), an intensivist-led critical care team assumed care. On transfer out of the ICU, these patients were assigned back to the original team irrespective of the admitting schedule (the so-called bounce-back rule) to promote inpatient continuity of care. But if the residents (RES teams) or the hospitalist (H-PA teams) had changed, the bounce-back rule was no longer in effect and these patients were assigned to a team according to the admission schedule.

Study Population and Study Period

We included all hospitalizations of adult patients to GM teams if both their date of admission and their date of discharge fell within the study period (January 1, 2005 to December 31, 2006). We excluded hospitalizations with admission during the weekend, when H-PA teams did not admit patients; hospitalizations to GM services with transfer to a non-GM service (excluding the ICU) and hospitalizations involving comanagement with specialty services, as the contribution of GM teams to these was variable; and hospitalizations of private patients.

Data Collection and Team Assignment

We collected patient data from our hospital's discharge abstract database. This database did not contain team information, so to assign teams we matched the discharging attending and the day of discharge to the type of team that the discharging attending was leading that day.
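A minimal sketch of this matching step in R (one of the packages used for the analysis), assuming hypothetical data frames and column names rather than the actual database fields:

```r
# Illustrative R sketch (hypothetical column names): assign each hospitalization
# to a team by matching the discharging attending and discharge date against a
# schedule of which team type that attending was leading on that day.
library(dplyr)

assign_teams <- function(discharges, schedule) {
  # discharges: one row per hospitalization, columns attending and discharge_date
  # schedule:   one row per attending-day, columns attending, date, and team ("H-PA" or "RES")
  discharges %>%
    left_join(schedule, by = c("attending" = "attending",
                               "discharge_date" = "date"))
  # Rows with no schedule match keep team = NA, analogous to the hospitalizations
  # whose team assignment could not be determined.
}
```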

We collected patient age, gender, race, insurance status, zip‐code, primary care provider, source of admission, ward type, time and day of admission, and time and day of discharge for use as independent variables. The time of admission captured in the database was the time of actual admission and not the time the admission was requested.

We grouped the principal diagnosis International Statistical Classification of Diseases and Related Health Problems, 9th edition (ICD‐9) codes into clinically relevant categories using the Clinical Classification Software.9 We created comorbidity measures using Healthcare Cost and Utilization Project Comorbidity Software, version 3.4.10

Outcome Measures

We used length of stay (LOS), charges, readmissions within 7, 14, and 30 days, and inpatient mortality as our outcome measures. We calculated LOS by subtracting the admission day and time from the discharge day and time. The LOS included time spent in the ICU. We summed all charges accrued during the entire hospitalization, including any stay in the ICU, but did not include professional fees. We considered any repeat hospitalization to our hospital within 7, 14, and 30 days following a discharge to be a readmission, except that we excluded readmissions for a planned procedure or for inpatient rehabilitation.
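For concreteness, a small R sketch of the LOS calculation and readmission windows described above; the timestamps and field names are illustrative assumptions:

```r
# Hypothetical R sketch: LOS in days from admission/discharge timestamps, and
# flagging readmissions within a given number of days of a prior discharge.
los_days <- function(admit_dt, discharge_dt) {
  as.numeric(difftime(discharge_dt, admit_dt, units = "days"))
}

# Example: admitted Jan 3 at 14:30, discharged Jan 6 at 11:00 -> about 2.85 days
los_days(as.POSIXct("2005-01-03 14:30"), as.POSIXct("2005-01-06 11:00"))

readmitted_within <- function(discharge_date, next_admission_date, window_days) {
  !is.na(next_admission_date) &
    as.numeric(next_admission_date - discharge_date) <= window_days
}
```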

Statistical Analysis

Descriptive Analysis

We performed unadjusted descriptive statistics at the level of an individual hospitalization using medians and interquartile ranges for continuous data and frequencies and percentages for categorical data. We used chi‐square tests of association and KruskalWallis analysis of variance to compare H‐PA and RES teams.
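A brief illustration in R of these unadjusted comparisons, assuming a data frame hosp with hypothetical column names:

```r
# Hypothetical R sketch of the unadjusted comparisons described above
# (the data frame `hosp` and its column names are illustrative assumptions).
chisq.test(table(hosp$team, hosp$insurance))          # categorical variable by team
kruskal.test(age ~ team, data = hosp)                 # continuous variable by team
tapply(hosp$los, hosp$team, median, na.rm = TRUE)     # medians by team
tapply(hosp$los, hosp$team, quantile, probs = c(0.25, 0.75), na.rm = TRUE)  # IQRs by team
```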

Missing Data

Because we lacked data on whether a primary outpatient care provider was available for 284 (2.9%) of our study hospitalizations, we dropped them from our multivariable analyses. We used an arbitrary discharge time of noon for the 11 hospitalizations which did not have a discharge time recorded.

Multivariable Analysis

We used multivariable mixed models to risk adjust for a wide variety of variables. We included age, gender, race, insurance, presence of primary care physician, and total number of comorbidities as fixed effects in all models because of the high face validity of these variables. We then added admission source, ward, time, day of week, discharge day of week, and comorbidity measures one by one as fixed effects, including them only if significant at P < 0.01. For assessing LOS, charges, and readmissions, we added a variable identifying each patient as a random effect to account for multiple admissions for the same patient. We then added variables identifying attending physician, principal diagnostic group, and ZIP code of residence as random effects to account for clustering of hospitalizations within these categories, including them only if significant at P < 0.01. For the model assessing mortality we included variables for attending physician, principal diagnostic group, and ZIP code of residence as random effects if significant at P < 0.01. We log transformed LOS and charges because they were extremely skewed in nature. Readmissions were analyzed after excluding patients who died or were discharged alive within 7, 14, or 30 days of the end of the study period.
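The models were fit in SAS; as a rough illustration only, the sketch below shows an analogous specification in R with the lme4 package, using hypothetical variable names and omitting the stepwise selection of additional fixed and random effects at P < 0.01:

```r
library(lme4)

# Linear mixed model for log-transformed LOS (variable names are hypothetical):
# fixed effects for team and baseline covariates, random intercepts for patient,
# attending physician, principal diagnosis group, and ZIP code of residence.
los_model <- lmer(
  log(los) ~ team + age + gender + race + insurance + has_pcp + n_comorbid +
    (1 | patient_id) + (1 | attending) + (1 | dx_group) + (1 | zip),
  data = hosp
)

# Logistic mixed model for inpatient mortality (no patient-level random effect,
# since a patient can die during only one hospitalization).
mort_model <- glmer(
  died ~ team + age + gender + race + insurance + has_pcp + n_comorbid +
    (1 | attending) + (1 | dx_group) + (1 | zip),
  data = hosp, family = binomial
)

summary(los_model)
```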

Sensitivity Analyses

To assess the influence of LOS outliers, we changed LOS to 6 hours if it was less than 6 hours and to 45 days if it was more than 45 days, a process called winsorizing. We consider winsorizing superior to dropping outliers because it acknowledges that outliers contribute information but prevents them from being too influential. We chose the 6-hour cutoff because we believed that was the minimum time required to admit and then discharge a patient. We chose the upper limit of 45 days after reviewing the frequency distribution for outliers. Similarly, we winsorized charges at the first and 99th percentiles after reviewing the frequency distribution for outliers. We then log transformed the winsorized data before analysis.
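A short R sketch of this winsorizing and log transformation, with the cutoffs named above and hypothetical column names:

```r
# Winsorize LOS to the interval [6 hours, 45 days] (6 hours = 0.25 days),
# then log transform for analysis.
hosp$los_w     <- pmin(pmax(hosp$los, 0.25), 45)
hosp$log_los_w <- log(hosp$los_w)

# Winsorize charges at the 1st and 99th percentiles before log transformation.
q <- quantile(hosp$charges, c(0.01, 0.99), na.rm = TRUE)
hosp$charges_w     <- pmin(pmax(hosp$charges, q[1]), q[2])
hosp$log_charges_w <- log(hosp$charges_w)
```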

Inpatient deaths reduce the LOS and charges associated with a hospitalization. Thus, excess mortality could create a false appearance of lower LOS or charges. To check whether this occurred in our study, we repeated the analyses after excluding inpatient deaths.

ICU stays are associated with higher LOS, charges, and mortality. In our model of care, some patients transferred to the ICU are not cared for by the original team on transfer out. Moreover, care in the ICU is not controlled by the team that discharges them. Since this might obscure differences in outcomes achieved by RES vs. H‐PA teams, we repeated these analyses after excluding hospitalizations with an ICU stay.

Since mortality can only occur during 1 hospitalization per patient, we repeated the mortality analysis using only each patient's first admission or last admission and using a randomly selected single admission for each patient.
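A brief sketch of selecting one admission per patient for these repeated analyses, using dplyr and hypothetical column names:

```r
library(dplyr)
set.seed(1)  # make the random selection reproducible

first_adm  <- hosp %>% group_by(patient_id) %>% slice_min(admit_dt, n = 1, with_ties = FALSE)
last_adm   <- hosp %>% group_by(patient_id) %>% slice_max(admit_dt, n = 1, with_ties = FALSE)
random_adm <- hosp %>% group_by(patient_id) %>% slice_sample(n = 1)
```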

Subgroup Analysis

To limit the effect of different physician characteristics on H‐PA and RES teams we separately analyzed the hospitalizations under the care of hospitalists who served on both H‐PA and RES teams.

To limit the effect of different admission schedules of H‐PA and RES teams we analyzed the hospitalizations with admission times between 11.00 AM and 4.00 PM. Such hospitalizations were likely to be assigned during the noon to 3 PM period when they could be assigned to either an H‐PA or RES team.

Interactions

Finally, we explored interactions between the type of team and the fixed-effect variables included in each model.
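As an illustration, an interaction such as team by time of admission could be examined by extending the LOS model sketched earlier (variable names remain hypothetical):

```r
# Add a team-by-admission-period interaction to the LOS model and test it
# with a likelihood ratio test.
los_main <- update(los_model, . ~ . + admit_period)
los_int  <- update(los_main,  . ~ . + team:admit_period)
anova(los_main, los_int)
```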

Statistical Software

We performed the statistical analysis using SAS software version 9.0 for UNIX (SAS Institute, Inc., Cary, NC) and R software (The R Project for Statistical Computing).

This study protocol was approved by the hospital's institutional review board.

Results

Study Population

Of the 52,391 hospitalizations to our hospital during the study period, 13,058 were admissions to general medicine. We excluded 3102 weekend admissions and 209 hospitalizations that met other exclusion criteria. We could not determine the team assignment for 66. Of the remaining 9681 hospitalizations, we assigned 2171 to H-PA teams and 7510 to RES teams (Figure 1).

Figure 1
Study population (H‐PA, hospitalist‐physician assistant team; RES, traditional resident team).

Descriptive Analysis

We compare patients assigned to H‐PA and RES teams in Table 2. They were similar in age, gender, race, having a primary care provider or not, and insurance status. Clinically, they had similar comorbidities and a similar distribution of common principal diagnoses. Consistent with their admitting schedule, H‐PA teams admitted and discharged more patients earlier in the day and admitted more patients earlier in the work week. Patients cared for by H‐PA teams were admitted from the Emergency Room (ER) less often and were more likely to reside on wards designated as nonmedicine by nursing specialty. Hospitalizations to H‐PA teams more often included an ICU stay.

Table 2. Characteristics of Hospitalizations to Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams

Abbreviations: CI, confidence interval; ER, emergency room; H-PA, hospitalist-physician assistant; ICU, intensive care unit; RES, traditional resident.

Characteristic | H-PA (n = 2171) | RES (n = 7510) | P Value
Age, mean | 56.80 | 57.04 |
Age, median | 56 | 56 | 0.15
Age, interquartile range | 43-72 | 43-73 |
Age group (years), n (%) | | | 0.28
  <20 | 10 (0.5) | 57 (0.8) |
  20-29 | 186 (8.6) | 632 (8.7) |
  30-39 | 221 (10.2) | 766 (10.3) |
  40-49 | 387 (17.8) | 1341 (18.1) |
  50-59 | 434 (20.0) | 1492 (20.2) |
  60-69 | 325 (15.0) | 974 (12.8) |
  70-79 | 271 (12.5) | 1035 (13.6) |
  80-89 | 262 (12.0) | 951 (12.3) |
  90 and older | 75 (3.5) | 262 (3.4) |
Female, n (%) | 1175 (54.1) | 4138 (55.1) | 0.42
Race, n (%) | | | 0.98
  White | 1282 (59.1) | 4419 (58.9) |
  Black | 793 (36.5) | 2754 (36.7) |
  Other | 96 (4.4) | 337 (4.5) |
Primary care provider, n (%) | | | 0.16
  Yes | 1537 (73.2) | 5451 (74.7) |
  Missing (n = 284) | 71 (3.3) | 213 (2.8) |
Insurance status, n (%) | | | 0.52
  Commercial/worker's compensation | 440 (20.3) | 1442 (19.2) |
  Medicare | 1017 (46.8) | 3589 (47.8) |
  Medicaid/other | 714 (32.9) | 2479 (33.0) |
Time of admission, n (%) | | | <0.001
  0000-0259 | 167 (7.7) | 1068 (14.2) |
  0300-0559 | 244 (11.2) | 485 (6.5) |
  0600-0859 | 456 (21.0) | 270 (3.6) |
  0900-1159 | 782 (36.0) | 1146 (15.3) |
  1200-1459 | 299 (13.8) | 1750 (23.3) |
  1500-1759 | 155 (7.1) | 1676 (22.3) |
  1800-2359 | 68 (3.1) | 1115 (14.9) |
Time of discharge, n (%) | | | <0.001
  2100-0859 | 36 (1.7) | 174 (2.3) |
  0900-1159 | 275 (12.7) | 495 (6.6) |
  1200-1459 | 858 (39.6) | 2608 (34.8) |
  1500-1759 | 749 (34.6) | 3122 (41.6) |
  1800-2059 | 249 (11.5) | 1104 (14.7) |
  Missing | 4 | 7 |
Day of week of admission, n (%) | | | 0.001
  Monday | 462 (21.3) | 1549 (20.6) |
  Tuesday | 499 (23.0) | 1470 (19.6) |
  Wednesday | 430 (19.8) | 1479 (19.7) |
  Thursday | 400 (18.4) | 1482 (19.7) |
  Friday | 380 (17.5) | 1530 (20.4) |
Day of week of discharge, n (%) | | | 0.16
  Monday | 207 (9.5) | 829 (11.0) |
  Tuesday | 268 (12.3) | 973 (13.0) |
  Wednesday | 334 (15.4) | 1142 (15.2) |
  Thursday | 362 (16.7) | 1297 (17.3) |
  Friday | 485 (22.3) | 1523 (20.3) |
  Saturday | 330 (15.2) | 1165 (15.5) |
  Sunday | 185 (8.5) | 581 (7.7) |
Admitted to non-medicine wards, n (%) | 1332 (61.4) | 2624 (34.9) | <0.001
Transfer to ICU (at least once), n (%) | 299 (13.8) | 504 (6.7) | <0.001
Admitted from ER, n (%) | 1663 (76.6) | 6063 (80.7) | <0.001
10 most frequent principal diagnoses (%) | Pneumonia (4.9); congestive heart failure, nonhypertensive (4.2); sickle cell anemia (3.9); chronic obstructive pulmonary disease and bronchiectasis (3.3); diabetes mellitus with complications (3.2); urinary tract infections (3.2); asthma (3.0); nonspecific chest pain (3.0); pancreatic disorders (not diabetes) (2.9); septicemia (2.2) | Pneumonia (5.5); congestive heart failure, nonhypertensive (3.9); nonspecific chest pain (3.7); urinary tract infections (3.6); skin and subcutaneous tissue infections (3.3); sickle cell anemia (3.3); pancreatic disorders (not diabetes) (2.8); asthma (2.8); chronic obstructive pulmonary disease and bronchiectasis (2.6); diabetes mellitus with complications (2.6) |
Average number of comorbidities, mean (95% CI) | 0.39 (0.37-0.42) | 0.38 (0.36-0.39) | 0.23

In unadjusted comparisons of outcomes (Table 3), hospitalizations to H-PA teams had longer lengths of stay and higher charges than hospitalizations to RES teams, possibly higher inpatient mortality rates, and similar unadjusted readmission rates at 7, 14, and 30 days.

Table 3. Unadjusted Comparison of Outcomes of Hospitalizations to Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams

Abbreviations: CI, 95% confidence interval; IQR, interquartile range; LOS, length of stay.

Outcome | H-PA (n = 2171) | RES (n = 7510) | % Difference* (CI) | P Value
LOS, days, median (IQR) | 3.17 (2.03-5.30) | 2.99 (1.80-5.08) | +8.9% (4.71-13.29%) | <0.001
Charges, US dollars, median (IQR) | 9390 (6196-16,239) | 9044 (6106-14,805) | +5.56% (1.96-9.28%) | 0.002

Outcome | H-PA, n (%) | RES, n (%) | Odds Ratio (CI) | P Value
Readmission within 7 days | 147 (6.96) | 571 (7.78) | 0.88 (0.73-1.06) | 0.19
Readmission within 14 days | 236 (11.34) | 924 (12.76) | 0.87 (0.75-1.01) | 0.07
Readmission within 30 days | 383 (18.91) | 1436 (20.31) | 0.91 (0.80-1.03) | 0.14
Inpatient deaths | 39 (1.8) | 95 (1.3) | 1.36 (0.90-2.00) | 0.06

*Percent difference computed on log-transformed values, with RES as the reference group.

Multivariable Analysis

LOS

Hospitalizations to H-PA teams were associated with a 6.73% longer LOS (P = 0.005) (Table 4). This difference persisted when we used the winsorized data (6.45% increase, P = 0.006), excluded inpatient deaths (6.81% increase, P = 0.005), or excluded hospitalizations that involved an ICU stay (6.40% increase, P = 0.011) (Table 5).

Table 4. Adjusted Comparison of Outcomes of Hospitalizations to Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams (RES is the reference group)

Abbreviations: CI, 95% confidence interval; LOS, length of stay; OR, odds ratio.
*Subgroup restricted to physicians attending on both H-PA and RES teams; number of observations ranges from 2992 to 3196.
†Subgroup restricted to hospitalizations with admission between 11.00 AM and 4.00 PM; number of observations ranges from 3174 to 3384.

Outcome | Overall, % difference (CI), P value | Subgroup: physicians on both team types,* % difference (CI), P value | Subgroup: admissions 11.00 AM to 4.00 PM,† % difference (CI), P value
LOS | 6.73% (1.99% to 11.70%), P = 0.005 | 5.44% (-0.65% to 11.91%), P = 0.08 | 2.97% (-4.47% to 10.98%), P = 0.44
Charges | 2.75% (-1.30% to 6.97%), P = 0.19 | 1.55% (-3.76% to 7.16%), P = 0.57 | 6.45% (-0.62% to 14.03%), P = 0.07

Risk of readmission or death | Overall, adjusted OR (95% CI), P value | Subgroup: physicians on both team types,* adjusted OR (95% CI), P value | Subgroup: admissions 11.00 AM to 4.00 PM,† adjusted OR (95% CI), P value
Readmission within 7 days | 0.88 (0.64-1.20), 0.42 | 0.74 (0.40-1.35), 0.32 | 0.90 (0.40-2.00), 0.78
Readmission within 14 days | 0.90 (0.69-1.19), 0.46 | 0.71 (0.51-0.99), 0.05 | 0.87 (0.36-2.13), 0.77
Readmission within 30 days | 0.89 (0.75-1.06), 0.20 | 0.75 (0.51-1.08), 0.12 | 0.92 (0.55-1.54), 0.75
Inpatient mortality | 1.27 (0.82-1.97), 0.28 | 1.46 (0.67-3.17), 0.33 | 1.14 (0.47-2.74), 0.77

Table 5. Sensitivity Analysis: Adjusted Comparison of Outcomes of Hospitalizations to Hospitalist-Physician Assistant (H-PA) and Traditional Resident (RES) Teams (RES is the reference group)

Abbreviations: CI, 95% confidence interval; ICU, intensive care unit; LOS, length of stay.

Outcome | Winsorized data, % difference (CI), P value | Excluding inpatient deaths, % difference (CI), P value | Excluding hospitalizations with ICU stays, % difference (CI), P value
LOS | 6.45% (4.04% to 8.91%), P = 0.006 | 6.81% (2.03% to 11.80%), P = 0.005 | 6.40% (1.46% to 11.58%), P = 0.011
Charges | 2.67% (-1.27% to 6.76%), P = 0.187 | 2.89% (-1.16% to 7.11%), P = 0.164 | 0.74% (-3.11% to 4.76%), P = 0.710

Charges

Hospitalizations to H‐PA and RES teams were associated with similar charges (Table 4). The results were similar when we used winsorized data, excluded inpatient deaths or excluded hospitalizations involving an ICU stay (Table 5).

Readmissions

The risk of readmission at 7, 14, and 30 days was similar between hospitalizations to H‐PA and RES teams (Table 4).

Mortality

The risk of inpatient death was similar between all hospitalizations to H‐PA and RES teams or only hospitalizations without an ICU stay (Table 4). The results also remained the same in analyses restricted to first admissions, last admissions, or 1 randomly selected admission per patient.

Sub‐Group Analysis

On restricting the multivariable analyses to the subset of hospitalists who staffed both types of teams (Table 4), the increase in LOS associated with H‐PA care was no longer significant (5.44% higher, P = 0.081). The charges, risk of readmission at 7 and 30 days, and risk of inpatient mortality remained similar. The risk of readmission at 14 days was slightly lower following hospitalizations to H‐PA teams (odds ratio 0.71, 95% confidence interval [CI] 0.51‐0.99).

The increase in LOS associated with H‐PA care was further attenuated in analyses of the subset of admissions between 11.00 AM and 4.00 PM (2.97% higher, P = 0.444). The difference in charges approached significance (6.45% higher, P = 0.07), but risk of readmission at 7, 14, and 30 days and risk of inpatient mortality were no different (Table 4).

Interactions

On adding interaction terms between the team assignment and the fixed effect variables in each model we detected that the effect of H‐PA care on LOS (P < 0.001) and charges (P < 0.001) varied by time of admission (Figure 2a and b). Hospitalizations to H‐PA teams from 6.00 PM to 6.00 AM had greater relative increases in LOS as compared to hospitalizations to RES teams during those times. Similarly, hospitalizations during the period 3.00 PM to 3.00 AM had relatively higher charges associated with H‐PA care compared to RES care.

Figure 2
(A) Relative difference in length of stay associated with care by H-PA teams by time of admission (percent change with RES as reference). (B) Relative difference in charges associated with care by H-PA teams by time of admission (percent change with RES as reference). Abbreviations: H-PA, hospitalist-physician assistant team; RES, traditional resident team.

Discussion

We found that hospitalizations to our H‐PA teams had longer LOS but similar charges, readmission rates, and mortality as compared to traditional resident‐based teams. These findings were robust to multiple sensitivity and subgroup analyses but when we examined times when both types of teams could receive admissions, the difference in LOS was markedly attenuated and nonsignificant.

We note that most prior reports comparing PA-based models of inpatient care predate the ACGME work hour regulations. In a randomized controlled trial (1987-1988), Simmer et al.5 showed lower lengths of stay and charges but a possibly higher risk of readmission for PA-based teams as compared with resident-based teams. Van Rhee et al.7 conducted a nonrandomized retrospective cohort study (1994-1995) using administrative data, which showed lower resource utilization for PA-based inpatient care. Our results from 2005 to 2006 reflect the important changes in the organization and delivery of inpatient care since these previous investigations.

Roy et al.8 report the only previously published comparison of PA- and resident-based GM inpatient care after the ACGME-mandated work hour regulations. They found that PA-based care was associated with lower costs, whereas we found similar charges for admissions to RES and H-PA teams. They also found that LOS was similar for PA- and resident-based care, while we found a higher LOS for admissions to our H-PA teams. We note that although the design of Roy's study was similar to our own, patients cared for by PA-based teams were geographically localized in their model. This may contribute to the differences in results noted between our studies.

Despite no designed differences, other than time of admission, in the patients assigned to either type of team, we noted some differences between the H-PA and RES teams in the descriptive analysis. These differences, such as the proportions of hospitalizations admitted from the ER, residing on wards designated as nonmedicine, or including an ICU stay, are likely a result of our system of assigning admissions to H-PA teams early during the workday. For example, patients on H-PA teams were more often located on nonmedicine wards as a result of later discharges and bed availability on medicine wards. The difference that deserves special comment is the much higher proportion (13.8% vs. 6.7%) of hospitalizations with an ICU stay on the H-PA teams. Hospitalizations directly to the ICU were excluded from our study, which means that the hospitalizations with an ICU stay in our study were initially admitted to either H-PA or RES teams and then transferred to the ICU. Transfers out of the ICU usually occur early in the workday, when H-PA teams accepted patients per our admission schedule. These patients may have been preferentially assigned to H-PA teams if, on returning from the ICU, the original team's resident had changed (and the bounce-back rule was not in effect). Importantly, the conclusions of our research are not altered on controlling for this difference between the teams by excluding hospitalizations with an ICU stay.

Hospitalizations to H-PA teams were associated with higher resource utilization if they occurred later in the day or overnight (Figure 2a and b). During these times, a transition of care occurred shortly after admission. For a late-day admission, the H-PA teams would transfer care to overnight cross-cover soon after the admission, and for patients admitted overnight as overflow, they would assume care of a patient from the nighttime covering physician who performed the admission. On the other hand, on RES teams, interns admitting patients overnight continued to care for their patients for part of the following day (30-hour call). Similar findings of higher resource utilization associated with transfer of care after admission in the daytime11 and nighttime12 have been reported previously. An alternative hypothesis for our findings is that the hospital may be busier, and thus less efficient, during the times when H-PA teams had to admit later in the day or accept patients admitted overnight as overflow. Future research to determine the cause of this significant interaction between team assignment and time of admission is important, because the large increases in LOS (up to 30%) and charges (up to 50%) noted here could have a substantial impact if a higher proportion of hospitalizations were affected by this phenomenon.

Our H-PA teams were assigned patients as complex as those assigned to our RES teams, in contrast to previous reports.8,13 This was accomplished while improving the residents' educational experience, and we have previously reported increases in our residents' board pass rates and in-service training examination scores with the introduction of our H-PA teams.14 We thus believe that selection of less complex patients for H-PA teams such as ours is unnecessary and may give them second-tier status in academic settings.

Our report has limitations. It is a retrospective, nonrandomized investigation using a single institution's administrative database; as such, it cannot account for unmeasured confounders, severity of illness, errors in the database, or selection bias, and its generalizability is limited. We measured charges rather than actual costs,15 but we believe charges are a true reflection of relative resource use when compared between similar patients within a single institution. We also did not account for readmissions to other hospitals,16 and our results do not reflect resource utilization for the healthcare system as a whole. For example, we could not tell whether the higher LOS on H-PA teams resulted in fewer readmissions for their patients across all hospitals in the region, which might reveal an overall resource savings. Additionally, we measured in-hospital mortality and could not capture deaths related to hospital care that may occur shortly after discharge.

The ACGME has proposed revised standards that may further restrict resident duty hours when they take effect in July 2011.3 This may lead to further decreases in resident-based inpatient care. Teaching hospitals will need to continue to develop alternate models of inpatient care that do not depend on house staff, and our findings provide important evidence to inform the development of such models. Our study shows that one such model, PAs paired with hospitalists, accepting admissions early in the workday, with hospitalist coverage overnight and on weekends, can care for GM inpatients as complex as those cared for by resident-based teams without increasing readmission rates, inpatient mortality, or charges, but at the cost of a slightly higher LOS.

References
  1. ACGME. Common Program Requirements for Resident Duty Hours. Available at: http://www.acgme.org/acWebsite/dutyHours/dh_ComProgrRequirmentsDutyHours0707.pdf. Accessed July 2010.
  2. Sehgal NL, Shah HM, Parekh VI, Roy CL, Williams MV. Non-housestaff medicine services in academic centers: models and challenges. J Hosp Med. 2008;3(3):247-255.
  3. ACGME. Duty Hours: Proposed Standards for Review and Comment. Available at: http://acgme-2010standards.org/pdf/Proposed_Standards.pdf. Accessed July 22, 2010.
  4. Agency for Health Care Policy and Research. HCUPnet: a tool for identifying, tracking, and analyzing national hospital statistics. Available at: http://hcup.ahrq.gov/HCUPnet.asp. Accessed July 2010.
  5. Simmer TL, Nerenz DR, Rutt WM, Newcomb CS, Benfer DW. A randomized, controlled trial of an attending staff service in general internal medicine. Med Care. 1991;29(7 suppl):JS31-JS40.
  6. Dhuper S, Choksi S. Replacing an academic internal medicine residency program with a physician assistant-hospitalist model: a comparative analysis study. Am J Med Qual. 2009;24(2):132-139.
  7. Rhee JV, Ritchie J, Eward AM. Resource use by physician assistant services versus teaching services. JAAPA. 2002;15(1):33-42.
  8. Roy CL, Liang CL, Lund M, et al. Implementation of a physician assistant/hospitalist service in an academic medical center: impact on efficiency and patient outcomes. J Hosp Med. 2008;3(5):361-368.
  9. AHRQ. Clinical Classifications Software (CCS) for ICD-9-CM. Available at: http://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp#overview. Accessed July 2010.
  10. AHRQ. HCUP Comorbidity Software, Version 3.4. Available at: http://www.hcup-us.ahrq.gov/toolssoftware/comorbidity/comorbidity.jsp. Accessed July 2010.
  11. Schuberth JL, Elasy TA, Butler J, et al. Effect of short call admission on length of stay and quality of care for acute decompensated heart failure. Circulation. 2008;117(20):2637-2644.
  12. Lofgren RP, Gottlieb D, Williams RA, Rich EC. Post-call transfer of resident responsibility: its effect on patient care. J Gen Intern Med. 1990;5(6):501-505.
  13. O'Connor AB, Lang VJ, Lurie SJ, Lambert DR, Rudmann A, Robbins B. The effect of nonteaching services on the distribution of inpatient cases for internal medicine residents. Acad Med. 2009;84(2):220-225.
  14. Singh S, Petkova JH, Gill A, et al. Allowing for better resident education and improving patient care: hospitalist-physician assistant teams fill in the gaps. J Hosp Med. 2007;2(S2):139.
  15. Finkler SA. The distinction between cost and charges. Ann Intern Med. 1982;96(1):102-109.
  16. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med. 2009;360(14):1418-1428.
Issue
Journal of Hospital Medicine - 6(3)
Page Number
122-130
Display Headline
A comparison of outcomes of general medical inpatient care provided by a hospitalist-physician assistant model vs a traditional resident-based model
Legacy Keywords
education, outcomes measurement, physician assistant, resident

Copyright © 2011 Society of Hospital Medicine

The Effect Of An Illustrated Pamphlet Decision-Aid On the Use Of Prostate Cancer Screening Tests

Article Type
Changed
Mon, 01/14/2019 - 11:10
Display Headline
The Effect Of An Illustrated Pamphlet Decision-Aid On the Use Of Prostate Cancer Screening Tests

 

BACKGROUND: Prostate cancer screening with serum prostate-specific antigen (PSA) and digital rectal examination (DRE) continues to increase. Our goal was to test the effect of a prostate cancer screening decision-aid on patients’ knowledge, beliefs, and use of prostate cancer screening tests.

METHODS: Our study was a randomized controlled trial of a prostate cancer screening decision-aid consisting of an illustrated pamphlet as opposed to a comparison intervention. We included 257 men aged 50 to 80 years who were receiving primary care at a Department of Veterans Affairs Hospital in Milwaukee, Wisconsin. The decision-aid provided quantitative outcomes of prostate cancer screening with DRE and PSA. We subsequently evaluated prostate cancer screening knowledge, beliefs, and test use.

RESULTS: The illustrated pamphlet decision-aid was effective in improving knowledge of prostate cancer screening tests: 95% of the experimental group were aware of the possibility of false-negative test results compared with 85% of the comparison group (P <.01). Ninety-one percent of the experimental group were aware of the possibility of a false-positive screening test result compared with 65% of the comparison group (P <.01). However, there was no difference in the use of prostate cancer screening between the experimental (82%) and comparison (84%) groups (P >.05).

CONCLUSIONS: When used in a primary care setting, an illustrated pamphlet decision-aid was effective in increasing knowledge of prostate cancer screening tests but did not change the use of these tests.

The practice of prostate cancer screening with serum prostate-specific antigen (PSA) and digital rectal examination (DRE) continues to increase, despite ongoing debate in the medical community on the efficacy of screening in reducing prostate cancer mortality.1-5 Prostate cancer screening remains controversial because of concern that mass screening may lead to the detection and treatment of clinically insignificant lesions, exposing an asymptomatic population to significant morbidity.4

It is widely recommended that patients be well informed of potential risks and benefits before engaging in a prostate cancer screening program.2,6-10 The health risks of prostate cancer screening include those of the initial tests, indicated follow-up tests (transrectal ultrasound [TRUS] or rectal biopsy), and therapeutic interventions. For example, an asymptomatic patient who is given a diagnosis of early-stage prostate cancer as a result of screening and is treated with a radical prostatectomy may develop impotence as a complication of treatment. Such a patient would have significant morbidity despite the fact that his cancer may have remained clinically silent throughout his lifetime. Also, the survival benefits of early detection and treatment of prostate cancer are unproven. Thus, the decision regarding prostate cancer screening provides clinicians and patients with a dilemma that involves informed decision making and patient input.

Previous studies report mixed results of the effect of decision-aids on the use of prostate cancer screening tests.11-13 One study reported a decrease in the use of such tests after exposure to a decision-aid in a primary care setting but no effect in a free PSA clinic.11 A second study found decreased interest in PSA screening after exposure to a decision-aid but did not evaluate screening test use.12 Finally, a third study found no effect of a decision-aid intervention on the use of prostate cancer screening tests.13 There is a need for further data on the effect of theoretically based decision-aids on men’s decisions to undergo prostate cancer screening.

Methods

We conducted a randomized controlled trial to test the effect of a prostate cancer screening decision-aid-an illustrated pamphlet-on patients’ knowledge, beliefs, and subsequent use of PSA and DRE prostate cancer screening tests.

Study Protocol

We included men aged 50 to 80 years who had an outpatient encounter in the years 1990 to 1995 at the Clement J. Zablocki Veterans Affairs Medical Center (VAMC) in Milwaukee, Wisconsin. We excluded men who had a history of prostate or other cancer, a previous prostate ultrasound study or biopsy, cystoscopy, prior prostate surgery, active genitourinary symptoms, cognitive impairment (defined by a Mini-Mental State Examination score of 23 or less), an anticipated life expectancy of less than 2 years, or who were currently employed by the VAMC. Potential subjects were identified from a randomly generated computerized list of patients who had received care at the VAMC in the designated time period. Patients were mailed a letter describing the study and inviting those interested to call and be considered for enrollment. The study protocol was approved by the Institutional Review Board of the VAMC and the Medical College of Wisconsin, and we obtained informed consent from all study participants.

The study protocol required 2 visits to the VAMC. At the initial study visit, subjects were randomized and baseline knowledge and belief surveys were administered. Data were also obtained on comorbidity using the Charlson comorbidity index and on reading level using the Rapid Estimate of Adult Literacy in Medicine (REALM) instrument.14,15 Subjects were then given the experimental or the comparison intervention, each consisting of a written pamphlet to read and review. A research assistant was present while the subject reviewed the pamphlet and was available to answer questions. Postintervention knowledge and belief surveys were administered at the end of the initial study visit. A follow-up visit was scheduled with the subject's primary care physician or one of the research investigators (JV or MMS) approximately 2 weeks after the initial study visit. At the follow-up visit, the subject was asked if he wanted to undergo prostate cancer screening with a PSA and a DRE. If the subject asked for the physician's opinion, a scripted response was provided. The response emphasized the tossup nature of the decision and encouraged the patient to make up his own mind about prostate cancer screening. Men with a PSA result greater than or equal to 4.0 ng/mL or an abnormal DRE (asymmetric, indurated, or with a nodule) were referred to a urology clinic for confirmatory testing by TRUS and prostate biopsy. The screening tests were offered at no cost. At the time of the study, there was no formal recommendation at the clinical site on the use of PSA for prostate cancer screening.

 

 

Development of the Decision-Aid

We conducted 2 focus groups to develop the content of the decision-aid. The focus group participants were similar to the target population: veterans aged 50 to 80 years who were receiving primary care at the VAMC. The Health Belief Model was used as the theoretical framework from which to probe focus group members regarding their knowledge and beliefs about prostate cancer screening.16,17 We found that patients had a general awareness of the prevalence of prostate cancer but expressed significant knowledge deficits and misinformation about risk factors, symptoms, screening recommendations, risk and benefits, treatment options, and prognosis for prostate cancer. We designed the content of the decision-aid to address the deficits in knowledge most striking in the focus groups.

The decision-aid included quantitative information on the operating characteristics (sensitivity and specificity) of a combined screening strategy of DRE and PSA and a description of follow-up tests (TRUS and prostate biopsy). Interpretation of probability outcomes is subject to many biases, including framing and presentation effects.18-20 We tried to present prostate cancer screening outcomes in a balanced manner. The graphic design used to convey the sensitivity and specificity of a prostate cancer screening strategy consisted of human figure representations (Figure 1). An illustration presented 100 male human figures. A subset of figures was highlighted to represent the frequency of abnormal screening test results (10/100), true-positive test results (3/100), false-positive test results (7/100), and false-negative test results (1/100). Although treatment was not the focus of the intervention, treatment efficacy is one element of the total risks and benefits associated with prostate cancer screening. In the framework of the Health Belief Model, perceptions of treatment efficacy may influence screening behavior. We included a statement on the uncertain efficacy of treatment of early-stage prostate cancer in the decision-aid intervention.
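The per-100 frequencies in the illustration can be reproduced arithmetically from an assumed prevalence, sensitivity, and specificity; the values in the sketch below (4% prevalence of detectable cancer, 75% sensitivity, roughly 93% specificity) are illustrative assumptions chosen to match the pamphlet's figures, not parameters reported in this study:

```r
# Illustrative R sketch: expected screening outcomes among 100 men, given
# assumed prevalence, sensitivity, and specificity (assumptions, not study data).
n <- 100
prevalence  <- 0.04   # assumed: 4 of 100 men have detectable prostate cancer
sensitivity <- 0.75   # assumed
specificity <- 0.927  # assumed

true_pos  <- n * prevalence * sensitivity              # about 3 of 100
false_neg <- n * prevalence * (1 - sensitivity)        # about 1 of 100
false_pos <- n * (1 - prevalence) * (1 - specificity)  # about 7 of 100
abnormal  <- true_pos + false_pos                      # about 10 of 100 abnormal results
round(c(abnormal = abnormal, true_pos = true_pos,
        false_pos = false_pos, false_neg = false_neg))
```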

The comparison intervention consisted of a written pamphlet containing basic prostate cancer information (epidemiology, symptoms of prostate cancer, prostate cancer screening methods, and the potential benefits of screening) but excluding the quantitative and qualitative outcomes regarding risks and benefits of screening that were included in the decision-aid. The basic prostate cancer information was also included in the decision-aid. Pamphlets were printed in a 14-point font to facilitate reading for older subjects. The comparison and experimental pamphlets were 5 and 8 pages in length, respectively.

Outcome Assessments

We used a prostate cancer knowledge assessment survey to evaluate the following domains: risk factors and incidence of prostate cancer, clinical presentation of prostate cancer, test characteristics of the DRE and the PSA, confirmatory tests (TRUS and prostate biopsy), and the natural history of prostate cancer. A prostate cancer belief-assessment survey was used to evaluate subjects’ perceptions of available screening tests and their intended screening behavior. Domains in the belief assessment included the natural history of prostate cancer, intentions to use prostate cancer screening, intentions to follow the physician’s advice on screening, perceptions of test characteristics, and how well informed they felt about screening options. The knowledge and belief assessment surveys consisted of 18 and 10 closed-ended items, respectively. The items were pilot-tested with 30 subjects who had demographic characteristics similar to those of the study population, and the format of items was revised accordingly. Test-retest reliability of single questions for correct/incorrect responses on the knowledge assessment was between 0.56 and 1.00 (average=0.82).

Prostate cancer screening use was ascertained from the follow-up physician visit. Subjects were asked if they wanted to be screened for prostate cancer with DRE or PSA, or both DRE and PSA. If they responded affirmatively, they were given the screening test at that study visit. A subject was considered to have chosen prostate cancer screening if he answered yes to having both the DRE and the PSA and proceeded to have those tests. For patients who had no rectum because of previous gastrointestinal surgery (n=3), a completed PSA met criteria for having chosen prostate cancer screening.

Statistical Methods

The knowledge survey was analyzed as a total correct score and as individual questions. Total knowledge scores on postintervention assessments were compared between groups using a Wilcoxon-Mann-Whitney test. When comparing the preintervention and postintervention responses to individual knowledge questions, subjects were assigned to 1 of 4 categories: (a) change in response from incorrect to correct, (b) incorrect response on both the pretest and the posttest, (c) correct response on both pretest and posttest, or (d) change in response from correct to incorrect. We used a chi-square analysis to compare categories of pre-post response pairs between the experimental and comparison groups. Postintervention responses to belief assessment items and use of prostate cancer screening (as defined by having both a DRE and a PSA test) were compared between groups with a chi-square analysis. Our study had a power of 0.80 to detect a 15% difference in the proportion of patients deciding to have prostate cancer screening, assuming that the baseline level of screening in the population was 80% and using a 2-sided test with an alpha of .05.
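The stated power calculation can be checked with base R's power.prop.test; a minimal sketch under the assumptions given above:

```r
# Hedged sketch: reproduce the stated power calculation with base R.
# Baseline screening proportion 0.80; a 15% absolute difference gives 0.65.
power.prop.test(p1 = 0.80, p2 = 0.65, sig.level = 0.05, power = 0.80,
                alternative = "two.sided")
# Under these assumptions this returns roughly 138 subjects per group, in the
# same range as the roughly 128 men enrolled per arm (257 total).
```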

 

 

Results

There were 3592 invitation letters mailed to potential subjects, of which 572 men responded. Of the respondents, 257 (44.9%) were enrolled in the study (Figure 2). Reasons for exclusion were a history of previous cancer (50), history of prostate or genitourinary disease (102), poor mental status (12), and active employment at the medical center (23). Reasons for not participating among eligible patients included: not interested in participating (52), no phone (13), distance or transportation problems (7), the patient felt he was too ill (26), and miscellaneous reasons (30). Experimental and comparison groups were similar in age, racial distribution, comorbidity, and education (Table 1).

Prostate Cancer Screening Knowledge

The knowledge questionnaire listed 18 items. The range of total correct responses on postintervention scores was 5 to 18. There was no difference at baseline in total knowledge scores between the experimental (mean=11.7, standard deviation [SD]=2.4) and comparison (mean=11.4, SD=2.4) groups (P=.32). On postintervention assessments, the experimental group had a higher total knowledge score (mean=15.0, SD=2.3) than the comparison group (mean=14.1, SD=2.7; P <.01). On the postintervention survey, the experimental group was more likely than the comparison group to be aware of the possibility of false-negative and false-positive screening test results and had better knowledge of the natural history of prostate cancer (Table 2). When asked to identify the risk of a false-negative test result, the experimental group was more likely than the comparison group (70% vs 49%, P <.05) to correctly identify 1/100 as the frequency of false-negative results.

Prostate Cancer Screening Beliefs

Beliefs regarding the performance of prostate cancer screening tests differed between the groups. Specifically, fewer men in the experimental group than in the comparison group believed that screening tests were infallible (Table 3). At baseline, 79% of the subjects felt that “most men can be cured” if prostate cancer is caught in the early stages and treatment is received. Fifty-six percent of the subjects believed that of those men who have prostate cancer, most die of something else; 35% believed that approximately half die of prostate cancer; and only 9% believed that most men die of their prostate cancer. After the intervention, subjects in the experimental group were more likely than those in the comparison group (67% vs 46%) to respond that most men with prostate cancer die of something else (P <.01).

At baseline, 84% and 87% of the total study cohort stated that they were very likely to have a PSA and DRE, respectively. Ninety-eight percent of the subjects stated that they would have screening for prostate cancer if their physician recommended it. Finally, at baseline 77% of the subjects felt well informed enough to make a decision about prostate cancer screening. Perceptions of being well informed increased to 93% after the intervention but with no difference between groups.

Prostate Cancer Screening Decisions

Eighty-two percent of the experimental group, compared with 84% of the comparison group, underwent prostate cancer screening (P=.60). Subjects who chose not to be screened did not differ from screened subjects in age, race, comorbidity level, education, or postintervention prostate cancer screening knowledge. Of the 214 subjects who chose to be screened, 32 had abnormal test results: 15 subjects had a PSA greater than 4.0, and 18 subjects had an abnormal DRE (one subject had both an abnormal DRE and a high PSA). Of the 32 abnormal examinations, 21 had a prostate biopsy, and 7 prostate cancers were diagnosed. Of the 11 subjects with a positive screen who did not proceed to biopsy, 1 subject with an elevated PSA deferred a prostate biopsy and subsequently developed metastatic colon cancer. A second subject with an elevated PSA refused TRUS and biopsy and continues to be followed clinically. Of the remaining 9 subjects who did not have further testing, one refused biopsy and elected to be followed clinically. Eight were evaluated by urology, and the recommendation was for clinical follow-up without TRUS or rectal biopsy.

Discussion

We report that a prostate cancer screening aid consisting of an illustrated pamphlet was effective in improving knowledge and changing beliefs about prostate cancer screening when tested in a randomized controlled trial. The visual display of quantitative information improved knowledge about screening outcomes, but this knowledge alone did not change prostate cancer screening test use.

Prostate cancer screening is a clinical decision for which the risks are difficult to balance, a type of decision referred to as a “tossup” dilemma.21,22 The Health Belief Model posits that a change in perceived risks and benefits of screening may affect the likelihood of the patient’s taking preventive action (undergoing prostate cancer screening).16,17 Decision-aids have improved knowledge regarding decision outcomes, reduced decision conflict, and encouraged patients to be more active in the decision-making process.11,13,23 A recent meta-analysis24 shows that although decision-aids have a consistent effect on improving knowledge, they are less likely to alter decisions about a health care intervention. Previous studies of prostate cancer screening decision-aids have provided conflicting results. In one clinical trial, 12% of a primary care practice group exposed to a shared decision-making videotape intervention had a PSA test at their next scheduled clinic visit, compared with 23% of a control group (P=.04). However, a different arm of the same study found no effect of the intervention on the high rates of prostate cancer screening tests in a free PSA screening clinic. In a second clinical trial, men exposed to a scripted informational intervention were significantly less interested in PSA screening than those in a control group,12,25 but the subsequent use of screening tests was not evaluated. A third clinical trial in Canadian men found that a prostate cancer screening informational intervention in a discussion format increased participation in the decision-making process and decreased decisional conflict but did not alter the subsequent use of prostate cancer screening tests.13

 

 

A distinctive feature of our study is the use of a written pamphlet (as opposed to a videotape or a verbal discussion) as the decision-aid modality. Written materials are a commonly used method of educational support26 and in some studies have been preferred by patients to audiotapes or interactive computer materials.27 Patients respond favorably to having written materials that can be taken home to discuss with family or friends.28 However, written materials lack the ability of videotapes or discs to present video and audio role models for the deliberative decision-making process.23,29-30 Future studies are needed to examine the efficacy of written versus audiovisual modalities in presenting clinical outcomes to patients.

Our study supports the use of visual displays of frequencies when presenting information to patients. Human figure representations were used to visually convey the incidence of prostate cancer and the frequency of false-positive and false-negative test results (Figure 1). This approach was successful in improving knowledge regarding test characteristics. The visual display of quantitative information is an area of inquiry with important applications for communication of outcomes to patients.31-35 Previous studies have found that presenting very small probabilities with the use of dot diagrams has influenced the patient’s willingness to take risks.36 More work is needed on how best to display quantitative information in medical settings.

Limitations

Our study had some limitations. First, results were subject to volunteer bias, since the recruitment strategy required that interested patients reply to a mailed study invitation letter. The low rate of participation is similar to that found in previous prostate cancer screening studies that recruited subjects using mailed letters.37 Second, the study protocol removed some of the barriers to prostate cancer screening in the usual care setting. Subjects were offered prostate cancer screening on-site at the time of the follow-up study visit and did not have to pay for screening or follow-up tests. These 2 limitations may bias the study toward higher baseline levels of screening but should not differentially affect the comparison or intervention group. Finally, the current study evaluates knowledge, beliefs, and the subsequent use of prostate cancer screening tests. Other relevant outcomes, including decisional conflict, satisfaction with the decision-making process, and persistence of decision choice, deserve study in future research.

Conclusions

It is increasingly recognized that an informed decision-making process is appropriate before the use of cancer screening tests, especially those that lack strong efficacy evidence from clinical studies.2,6-10 Screening interventions are done in a healthy population during routine office visits, when limited time is available for the physician-patient encounter, and must be feasible in a busy office setting. Ideally, a decision-aid would be self-administered, with the option of a follow-up interaction with the physician or another health care provider. Several modes of providing information can be used in this way, including a pamphlet, videotape, or interactive video-disk format. The pamphlet in our study was produced at low cost, used graphic designs to help convey quantitative information, and was available for patients to take home and review. Simple decision-aids remain a viable method of presenting complex information for preventive interventions such as prostate cancer screening. Further study is needed to identify the most effective decision-aids.

Acknowledgments

Our research was supported by the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service (project no. SDR 93-005). Dr Schapira is Director of General Internal Medicine Research at the Medical College of Wisconsin and the Clement Zablocki Veterans Affairs Medical Center in Milwaukee, Wisconsin.


Author and Disclosure Information

 

Marilyn M. Schapira, MD, MPH
Jerome Vanruiswyk, MD, MS
Milwaukee, Wisconsin
Submitted, revised, December 26, 1999.
From the Department of Internal Medicine, Medical College of Wisconsin, and the Division of Primary Care, Clement J. Zablocki Veterans Affairs Medical Center. Reprint requests should be addressed to Marilyn M. Schapira, MD, MPH, Division of Primary Care (PC-00), Clement J. Zablocki Veterans Affairs Medical Center, 5000 W. National Ave, Milwaukee, WI, 53295-1000. E-mail: mschap@mcw.edu.

Issue
The Journal of Family Practice - 49(05), pages 418-424

Keywords
Prostatic neoplasms; informed consent; mass screening; decision making (J Fam Pract 2000; 49:418-424)

 

BACKGROUND: Prostate cancer screening with serum prostate-specific antigen (PSA) and digital rectal examination (DRE) continues to increase. Our goal was to test the effect of a prostate cancer screening decision-aid on patients’ knowledge, beliefs, and use of prostate cancer screening tests.

METHODS: Our study was a randomized controlled trial of a prostate cancer screening decision-aid consisting of an illustrated pamphlet as opposed to a comparison intervention. We included 257 men aged 50 to 80 years who were receiving primary care at a Department of Veterans Affairs Hospital in Milwaukee, Wisconsin. The decision-aid provided quantitative outcomes of prostate cancer screening with DRE and PSA. We subsequently evaluated prostate cancer screening knowledge, beliefs, and test use.

RESULTS: The illustrated pamphlet decision-aid was effective in improving knowledge of prostate cancer screening tests: 95% of the experimental group were aware of the possibility of false-negative test results compared with 85% of the comparison group (P <.01). Ninety-one percent of the experimental group were aware of the possibility of a false-positive screening test result compared with 65% of the comparison group (P <.01). However, there was no difference in the use of prostate cancer screening between the experimental (82%) and comparison (84%) groups (P >.05).

CONCLUSIONS: When used in a primary care setting, an illustrated pamphlet decision-aid was effective in increasing knowledge of prostate cancer screening tests but did not change the use of these tests.

The practice of prostate cancer screening with serum prostate-specific antigen (PSA) and digital rectal examination (DRE) continues to increase, despite ongoing debate in the medical community on the efficacy of screening in reducing prostate cancer mortality.1-5 Prostate cancer screening remains controversial because of concern that mass screening may lead to the detection and treatment of clinically insignificant lesions, exposing an asymptomatic population to significant morbidity.4

It is widely recommended that patients be well informed of potential risks and benefits before engaging in a prostate cancer screening program.2,6-10 The health risks of prostate cancer screening include those of the initial tests, indicated follow-up tests (transrectal ultrasound [TRUS] or rectal biopsy), and therapeutic interventions. For example, an asymptomatic patient who is given a diagnosis of early-stage prostate cancer as a result of screening and is treated with a radical prostatectomy may develop impotence as a complication of treatment. Such a patient would have significant morbidity despite the fact that his cancer may have remained clinically silent throughout his lifetime. Also, the survival benefits of early detection and treatment of prostate cancer are unproven. Thus, the decision regarding prostate cancer screening provides clinicians and patients with a dilemma that involves informed decision making and patient input.

Previous studies report mixed results of the effect of decision-aids on the use of prostate cancer screening tests.11-13 One study reported a decrease in the use of such tests after exposure to a decision-aid in a primary care setting but no effect in a free PSA clinic.11 A second study found decreased interest in PSA screening after exposure to a decision-aid but did not evaluate screening test use.12 Finally, a third study found no effect of a decision-aid intervention on the use of prostate cancer screening tests.13 There is a need for further data on the effect of theoretically based decision-aids on men’s decisions to undergo prostate cancer screening.

Methods

We conducted a randomized controlled trial to test the effect of a prostate cancer screening decision-aid (an illustrated pamphlet) on patients’ knowledge, beliefs, and subsequent use of PSA and DRE prostate cancer screening tests.

Study Protocol

We included men aged 50 to 80 years who had an outpatient encounter in the years 1990 to 1995 at the Clement J. Zablocki Veterans Affairs Medical Center (VAMC) in Milwaukee, Wisconsin. We excluded men who had a history of prostate or other cancer, a previous prostate ultrasound study or biopsy, cystoscopy, prior prostate surgery, active genitourinary symptoms, cognitive impairment (defined by a Mini-Mental State Examination score of 23 or less), an anticipated life expectancy of less than 2 years, or who were currently employed by the VAMC. Potential subjects were identified from a randomly generated computerized list of patients who had received care at the VAMC in the designated time period. Patients were mailed a letter describing the study and inviting those interested to call and be considered for enrollment. The study protocol was approved by the Institutional Review Board of the VAMC and the Medical College of Wisconsin, and we obtained informed consent from all study participants.

The study protocol required 2 visits to the VAMC. At the initial study visit, subjects were randomized and baseline knowledge and belief surveys were administered. Data were also obtained on comorbidity using the Charlson comorbidity index and on reading level using the Rapid Estimate of Adult Literacy in Medicine (REALM) instrument.14,15 Subjects were then given the experimental or the comparison intervention, each consisting of a written pamphlet to read and review. A research assistant was present when the subject reviewed the pamphlet and was available to answer questions. Postintervention knowledge and belief surveys were administered at the end of the initial study visit. A follow-up visit was scheduled with the subject’s primary care physician or one of the research investigators (JV or MMS) approximately 2 weeks after the initial study visit. At the follow-up visit, the subject was asked if he wanted to undergo prostate cancer screening with a PSA and a DRE. If the subject asked for the physician’s opinion, a scripted response was provided. The response emphasized the tossup nature of the decision and encouraged the patient to make up his own mind about prostate cancer screening. Men with a PSA test result that was greater than or equal to 4.0 ng/mL or those whose DRE was abnormal (asymmetric, indurated, or with a nodule) were referred to a urology clinic for confirmatory testing by TRUS and prostate biopsy. The screening tests were offered at no cost. At the time of the study, there was no formal recommendation at the clinical site on the use of PSA for prostate cancer screening.

 

 

Development of the Decision-Aid

We conducted 2 focus groups to develop the content of the decision-aid. The focus group participants were similar to the target population: veterans aged 50 to 80 years who were receiving primary care at the VAMC. The Health Belief Model was used as the theoretical framework from which to probe focus group members regarding their knowledge and beliefs about prostate cancer screening.16,17 We found that patients had a general awareness of the prevalence of prostate cancer but expressed significant knowledge deficits and misinformation about risk factors, symptoms, screening recommendations, risk and benefits, treatment options, and prognosis for prostate cancer. We designed the content of the decision-aid to address the deficits in knowledge most striking in the focus groups.

The decision-aid included quantitative information on the operating characteristics (sensitivity and specificity) of a combined screening strategy of DRE and PSA and a description of follow-up tests (TRUS and prostate biopsy). Interpretation of probability outcomes is subject to many biases, including framing and presentation effects.18-20 We tried to present prostate cancer screening outcomes in a balanced manner. The graphic design used to convey the sensitivity and specificity of a prostate cancer screening strategy consisted of human figure representations (Figure 1): an illustration presented 100 male human figures. A subset of figures was highlighted to represent the frequency of abnormal screening test results (10/100), true-positive test results (3/100), false-positive test results (7/100), and false-negative test results (1/100). Although treatment was not the focus of the intervention, treatment efficacy is one element of the total risks and benefits associated with prostate cancer screening. In the framework of the Health Belief Model, perceptions of treatment efficacy may influence screening behavior. We included a statement on the uncertain efficacy of treatment of early-stage prostate cancer in the decision-aid intervention.
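
To make the arithmetic behind the illustration explicit, the test characteristics implied by these frequencies can be back-calculated as in the short Python sketch below. The derived sensitivity, specificity, and predictive values are our own illustrative calculations from the pamphlet's counts; the decision-aid itself presented only the frequencies.

    # Back-calculate the test characteristics implied by the pamphlet's
    # 100-man illustration (counts per 100 men screened with DRE plus PSA).
    TP, FP, FN = 3, 7, 1            # true positives, false positives, false negatives
    N = 100
    TN = N - TP - FP - FN           # 89 men with normal results and no cancer

    sensitivity = TP / (TP + FN)    # 3/4   = 0.75
    specificity = TN / (TN + FP)    # 89/96 ~ 0.93
    ppv = TP / (TP + FP)            # 3/10  = 0.30
    npv = TN / (TN + FN)            # 89/90 ~ 0.99

    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
          f"PPV={ppv:.2f}, NPV={npv:.2f}")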

The comparison intervention consisted of a written pamphlet containing basic prostate cancer information (epidemiology, symptoms of prostate cancer, prostate cancer screening methods, and the potential benefits of screening) but excluding the quantitative and qualitative outcomes regarding risks and benefits of screening that were included in the decision-aid. The basic prostate cancer information was also included in the decision-aid. Pamphlets were printed in a 14-point font to facilitate reading for older subjects. The comparison and experimental pamphlets were 5 and 8 pages in length, respectively.

Outcome Assessments

We used a prostate cancer knowledge assessment survey to evaluate the following domains: risk factors and incidence of prostate cancer, clinical presentation of prostate cancer, test characteristics of the DRE and the PSA, confirmatory tests (TRUS and prostate biopsy), and the natural history of prostate cancer. A prostate cancer belief-assessment survey was used to evaluate subjects’ perceptions of available screening tests and their intended screening behavior. Domains in the belief assessment included the natural history of prostate cancer, intentions to use prostate cancer screening, intentions to follow the physician’s advice on screening, perceptions of test characteristics, and how well informed they felt about screening options. The knowledge and belief assessment surveys consisted of 18 and 10 closed-ended items, respectively. The items were pilot-tested with 30 subjects who had demographic characteristics similar to those of the study population, and the format of items was revised accordingly. Test-retest reliability of single questions for correct/incorrect responses on the knowledge assessment were between 0.56 and 1.00 (average=0.82).

Prostate cancer screening use was ascertained from the follow-up physician visit. Subjects were asked if they wanted to be screened for prostate cancer with DRE or PSA, or both DRE and PSA. If they responded affirmatively, they were given the screening test at that study visit. A subject was considered to have chosen prostate cancer screening if he answered yes to having both the DRE and the PSA and proceeded to have those tests. For patients who had no rectum because of previous gastrointestinal surgery (n=3), a completed PSA met criteria for having chosen prostate cancer screening.
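
For clarity, the outcome definition can be expressed as a simple rule; the sketch below is a hypothetical rendering (the field names are ours, not from the study database).

    # Hypothetical encoding of the screening-use outcome described above.
    def chose_screening(had_dre: bool, had_psa: bool, has_rectum: bool = True) -> bool:
        """A subject counts as choosing screening if he completed both tests,
        or a PSA alone when a DRE was anatomically impossible."""
        if not has_rectum:
            return had_psa
        return had_dre and had_psa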

Statistical Methods

The knowledge survey was analyzed as total correct score and individual questions. Total knowledge scores on postintervention assessments were compared between groups using a Wilcoxon-Mann-Whitney test. When comparing the preintervention and postintervention responses to individual knowledge questions, subjects were assigned to 1 of 4 categories: (a) change in response from incorrect to correct, (b) incorrect response on both the pretest and the posttest, (c) correct response on both pretest and posttest, or (d) change in response from correct to incorrect. We used a chi-square analysis to compare categories of pre-post response pairs between the experimental and comparison groups. Postintervention responses to belief assessment items and use of prostate cancer screening (as defined by having both a DRE and a PSA test) were compared between groups with a chi-square analysis. Our study had a power of 0.80 to detect a 15% difference in the proportion of patients deciding to have prostate cancer screening, assuming that the baseline level of screening in the population was 80% and using a 2-sided test with an α of 0.05.
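
The comparisons and the power calculation described above can be reproduced with standard statistical software; the Python sketch below is illustrative only. The 2x2 counts are reconstructed from the reported percentages (82% vs 84% screened, roughly 128 to 129 men per arm) rather than taken from the raw data, and the power calculation assumes the 15% difference is measured against the 80% baseline (80% vs 65%).

    from scipy.stats import chi2_contingency
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    # Chi-square comparison of screening use (counts reconstructed from percentages).
    screened = [[105, 23],   # experimental arm: screened, not screened
                [108, 21]]   # comparison arm
    chi2, p, dof, expected = chi2_contingency(screened)
    print(f"chi-square={chi2:.2f}, P={p:.2f}")

    # Sample size per group for 80% power to detect 80% vs 65% screening,
    # 2-sided alpha of 0.05, using a normal approximation for two proportions.
    h = proportion_effectsize(0.80, 0.65)
    n_per_group = NormalIndPower().solve_power(effect_size=h, alpha=0.05,
                                               power=0.80, alternative='two-sided')
    print(f"approximate n per group: {n_per_group:.0f}")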

 

 

Results

There were 3592 invitation letters mailed to potential subjects, of which 572 men responded. Of the respondents, 257 (44.9%) were enrolled in the study (Figure 2). Reasons for exclusion were history of previous cancer (50), history of prostate or genitourinary disease (102), poor mental status (12), and being an active employee at the medical center (23). Reasons for not participating among eligible patients included: not interested in participating (52), no phone (13), distance or transportation problems (7), the patient felt that he was too ill (26), and miscellaneous reasons (30). Experimental and comparison groups were similar in age, racial distribution, comorbidity, and education (Table 1).

Prostate Cancer Screening Knowledge

The knowledge questionnaire listed 18 items. The range of total correct responses on postintervention scores was 5 to 18. There was no difference at baseline in total knowledge scores between the experimental (mean=11.7, standard deviation [SD]=2.4) and comparison (mean=11.4, SD=2.4) groups (P=.32). On postintervention assessments, the experimental group had a higher total knowledge score (mean=15.0, SD=2.3) than the comparison group (mean=14.1, SD=2.7; P <.01). On the postintervention survey, the experimental group was more likely than the comparison group to be aware of the possibility of false-negative and false-positive screening test results and had better knowledge of the natural history of prostate cancer (Table 2). When asked to identify the risk of a false-negative test result, the experimental group was more likely than the comparison group (70% vs 49%; P <.05) to correctly identify 1/100 as the frequency of false-negative results.

Prostate Cancer Screening Beliefs

Beliefs regarding the performance of prostate cancer screening tests differed between the groups. Specifically, fewer men in the experimental group than in the comparison group believed that screening tests were infallible (Table 3). At baseline 79% of the subjects felt that “most men can be cured” if prostate cancer is caught in the early stages and treatment is received. Fifty-six percent of the subjects believed that of those men who have prostate cancer, most die of something else; 35% believed that approximately half die of prostate cancer; and only 9% believed that most men die of their prostate cancer. After the intervention, subjects in the experimental group were more likely than those in the comparison group (67% vs 46%) to respond that most men with prostate cancer die of something else (P <.01).

At baseline, 84% and 87% of the total study cohort stated that they were very likely to have a PSA and DRE, respectively. Ninety-eight percent of the subjects stated that they would have screening for prostate cancer if their physician recommended it. Finally, at baseline 77% of the subjects felt well informed enough to make a decision about prostate cancer screening. Perceptions of being well informed increased to 93% after the intervention but with no difference between groups.

Prostate Cancer Screening Decisions

Eighty-two percent of the experimental group, compared with 84% of the comparison group, underwent prostate cancer screening (P=.60). Subjects who chose not to be screened did not differ from screened subjects in age, race, comorbidity level, education, or postintervention prostate cancer screening knowledge. Of the 214 subjects who chose to be screened, 32 had abnormal test results: 15 subjects had a PSA greater than 4.0 ng/mL, and 18 subjects had an abnormal DRE (one subject had both an abnormal DRE and a high PSA). Of the 32 abnormal exams, 21 had a prostate biopsy, and 7 prostate cancers were diagnosed. Of the 11 subjects with a positive screen who did not proceed to biopsy, 1 subject with an elevated PSA deferred a prostate biopsy and subsequently developed metastatic colon cancer. A second subject with elevated PSA refused TRUS and biopsy and continues to be followed up clinically. Of the remaining 9 patients who did not have further testing, one subject refused biopsy and elected to be followed clinically. Eight subjects were evaluated in the urology clinic, and the recommendation was for clinical follow-up without TRUS or rectal biopsy.
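
The derived proportions along this screening cascade (not reported as such above) can be summarized with a few lines of arithmetic; the sketch below is purely illustrative.

    # Screening cascade from the counts reported above; derived percentages are ours.
    screened, abnormal, biopsied, cancers = 214, 32, 21, 7
    print(f"abnormal screen: {abnormal / screened:.1%} of screened men")   # ~15%
    print(f"cancer detected: {cancers / screened:.1%} of screened men")    # ~3.3%
    print(f"biopsy yield:    {cancers / biopsied:.1%} of biopsied men")    # ~33%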

Discussion

We report that a prostate cancer screening decision-aid consisting of an illustrated pamphlet was effective in improving knowledge and changing beliefs about prostate cancer screening when tested in a randomized controlled trial. The visual display of quantitative information improved knowledge about screening outcomes, but this knowledge alone did not change prostate cancer screening test use.

Prostate cancer screening is a clinical decision for which the risks are difficult to balance, a type of decision referred to as a “tossup” dilemma.21,22 The Health Belief Model posits that a change in perceived risks and benefits of screening may affect the likelihood of the patient’s taking preventive action (undergoing prostate cancer screening).16,17 Decision-aids have improved knowledge regarding decision outcomes, reduced decision conflict, and encouraged patients to be more active in the decision-making process.11,13,23 A recent meta-analysis24 shows that although decision-aids have a consistent effect on improving knowledge, they are less likely to alter decisions about a health care intervention. Previous studies of prostate cancer screening decision-aids have provided conflicting results. In one clinical trial, 12% of a primary care practice group exposed to a shared decision-making videotape intervention had a PSA test at their next scheduled clinic visit, compared with 23% of a control group (P=.04). However, a different arm of the same study found no effect of the intervention on the high rates of prostate cancer screening tests in a free PSA screening clinic. In a second clinical trial, men exposed to a scripted informational intervention were significantly less interested in PSA screening than those in a control group,12,25 but the subsequent use of screening tests was not evaluated. A third clinical trial in Canadian men found that a prostate cancer screening informational intervention in a discussion format increased participation in the decision-making process and decreased decisional conflict but did not alter the subsequent use of prostate cancer screening tests.13

 

 

A distinctive feature of our study is the use of a written pamphlet (as opposed to a videotape or a verbal discussion) as the decision-aid modality. Written materials are a commonly used method of educational support26 and in some studies have been preferred by patients to audiotapes or interactive computer materials.27 Patients respond favorably to having written materials that can be taken home to discuss with family or friends.28 However, written materials lack the ability of videotapes or discs to present video and audio role models for the deliberative decision-making process.23,29-30 Future studies are needed to examine the efficacy of written versus audiovisual modalities in presenting clinical outcomes to patients.

Our study supports the use of visual displays of frequencies when presenting information to patients. Human figure representations were used to visually convey the incidence of prostate cancer and the frequency of false-positive and false-negative test results (Figure 1). This approach was successful in improving knowledge regarding test characteristics. The visual display of quantitative information is an area of inquiry with important applications for communication of outcomes to patients.31-35 Previous studies have found that presenting very small probabilities with the use of dot diagrams has influenced the patient’s willingness to take risks.36 More work is needed on how best to display quantitative information in medical settings.
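
Displays of this kind are straightforward to prototype. The sketch below prints a 10-by-10 character grid in the spirit of Figure 1, marking the pamphlet's frequencies; it is a toy text rendering, not the study figure, and the placement of the marks is arbitrary.

    # Toy icon array: 100 men, 3 true positives (T), 7 false positives (F),
    # 1 false negative (n), and 89 unaffected (.). Positions are arbitrary.
    symbols = ['T'] * 3 + ['F'] * 7 + ['n'] + ['.'] * 89
    for row in range(10):
        print(' '.join(symbols[row * 10:(row + 1) * 10]))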

Limitations

Our study had some limitations. First, results were subject to volunteer bias, since the recruitment strategy required that interested patients reply to a mailed study invitation letter. The low rate of participation is similar to that found in previous prostate cancer screening studies that recruited subjects using mailed letters.37 Second, the study protocol removed some of the barriers to prostate cancer screening in the usual care setting. Subjects were offered prostate cancer screening on-site at the time of the follow-up study visit and did not have to pay for screening or follow-up tests. These 2 limitations may bias the study toward higher baseline levels of screening but should not differentially affect the comparison or intervention group. Finally, the current study evaluates knowledge, beliefs, and the subsequent use of prostate cancer screening tests. Other relevant outcomes, including decisional conflict, satisfaction with the decision-making process, and persistence of decision choice, deserve study in future research.

Conclusions

It is increasingly recognized that an informed decision-making process is appropriate before the use of cancer screening tests, especially those that lack strong efficacy evidence from clinical studies.2,6-10 Screening interventions are done in a healthy population during routine office visits, when limited time is available for the physician-patient encounter, and must be feasible in a busy office setting. Ideally, a decision-aid would be self-administered with the option of a follow-up interaction with the physician or another health care provider. Several modes of providing information can be used in this way, including a pamphlet, videotape, or interactive video-disk format. The pamphlet in our study was produced at a low cost, used graphic designs to help convey quantitative information, and was available for patients to take home and review. Simple decision-aids remain a viable method of presenting complex information for preventive interventions such as prostate cancer screening. Further study is needed to understand the most effective decision-aids.

Acknowledgments

Our research was supported by the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service (project no. SDR 93-005). Dr Schapira is Director of General Internal Medicine Research at the Medical College of Wisconsin and the Clement Zablocki Veterans Affairs Medical Center in Milwaukee, Wisconsin.

 


References

 

1. Potosky AL, Miller BA, Albertsen PC, Kramer BS. The role of increasing detection in the rising incidence of prostate cancer. JAMA 1995;273:548-52.

2. American College of Physicians. Screening for prostate cancer. Clinical guideline: part III. Ann Intern Med 1997;126:480-4.

3. CM, Barry MJ, Fleming C, Fahs MC, Mulley AG. Early detection of prostate cancer. Part II: estimating the risks, benefits, and costs. American College of Physicians. Ann Intern Med 1997;126:468-79.

4. MT, Wagner EH, Thompson RS. PSA screening: a public health dilemma. Annu Rev Public Health 1995;16:283-306.

5. US Preventive Services Task Force. Screening for prostate cancer. In: Guide to clinical preventive services: report of the US Preventive Services Task Force. 2nd ed. Baltimore, Md: Williams & Wilkins; 1996:119-34.

6. LM. Prostate cancer screening: a place for informed consent? Hosp Pract 1994;29:11-2.

7. AM, Becker DM. Cancer screening and informed patient discussions: truth and consequences. Arch Intern Med 1996;156:1069-72.

8. AS. The mammography and prostate-specific antigen controversies: implications for patient-physician encounters and public policy. J Gen Intern Med 1995;10:266-70.

9. PJ, Hall DMB. Screening, ethics, and the law. BMJ 1992;305:267-8.

10. JM. Screening and informed consent. N Engl J Med 1993;328:438-40.

11. AB, Wennberg JE, Nease RF, Fowler FJ, Ding J, Hynes LM. The importance of patient preference in the decision to screen for prostate cancer. J Gen Intern Med 1996;11:342-9.

12. AMD, Nasser JF, Wolf AM, Schorling JB. The impact of informed consent on patient interest in prostate-specific antigen screening. Arch Intern Med 1996;156:1333-6.

13. BJ, Kirk P, Degner LF, Hassard TH. Information and patient participation in screening for prostate cancer. Patient Educ Couns 1999;37:255-63.

14. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis 1987;40:373-83.

15. Davis TC, Long SL, Jackson RH, et al. Rapid estimate of adult literacy in medicine: a shortened screening instrument. Fam Med 1993;25:391-5.

16. J, Buechner J, Denman Scott J, et al. A study guided by the Health Belief Model of the predictors of breast cancer screening of women ages 40 and older. Public Health Rep 1991;106:410-20.

17. RC, Liang J. The early detection of cancer in the primary care setting: factors associated with the acceptance and completion of recommended procedures. Prev Med 1987;16:739-51.

18. DJ, Baron JA, Johansen S, Wahrenberger JW, Ross JM. The framing effect of relative and absolute risk. J Gen Intern Med 1993;8:543-8.

19. Fischhoff B, Bostrom A, Quadrel MJ. Risk perception and communication. Annu Rev Public Health 1993;14:183-203.

20. McNeil BJ, Pauker SG, Sox HC, Tversky A. On the elicitation of preferences for alternative therapies. N Engl J Med 1982;306:1259-69.

21. A. Arguments about tossups. Letter. N Engl J Med 1997;337:638.

22. Pauker SG, Kassirer JP. Contentious screening decisions: does the choice matter? N Engl J Med 1997;336:1243-4.

23. Schapira MM, Mead C, Nattinger AB. Enhanced decision-making: the use of a videotape decision-aid for patients with prostate cancer. Patient Educ Couns 1997;30:119-27.

24. O'Connor AM, Rostom A, Fiset V, et al. Decision aids for patients facing health treatment or screening decisions: systematic review. BMJ 1999;319:731-4.

25. Wolf AM, Schorling JB. Preferences of elderly men for prostate-specific antigen screening and the impact of informed consent. J Gerontol 1998;53:M195-200.

26. SH, McPhee SJ. Healthcare professionals’ use of cancer-related patient education materials: a pilot study. J Cancer Educ 1993;8:43-6.

27. M, Leek C. Patient education needs: opinions of oncology nurses and their patients. Oncol Nurs Forum 1995;1:139-45.

28. C, Streater A, Darlene M. Functions and preferred methods of receiving information related to radiotherapy: perceptions of patients with cancer. Cancer Nurs 1995;18:374-84.

29. LA, DeVellis B, DeVellis RF. Effects of modeling on patient communication, satisfaction and knowledge. Med Care 1987;25:1044-56.

30. L, Jollis JG, DeLong ER, Peterson ED, Morris KG, Mark DB. Impact of an interactive video on decision making of patients with ischemic heart disease. J Gen Intern Med 1996;11:373-6.

31. Tufte ER. The visual display of quantitative information. Cheshire, Conn: Graphics Press; 1983.

32. Tufte ER. Envisioning information. Cheshire, Conn: Graphics Press; 1990.

33. Gigerenzer G, Murray DJ. Cognition as intuitive statistics. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc; 1987.

34. DJ, Hickam DH. Interpretation of graphic data by patients in general medicine clinic. J Gen Intern Med 1990;5:402-5.

35. Lipkus IM, Hollands JG. The visual communication of risk. JNCI Monographs 1999;25:149-63.

36. RM, Hammel B, Schimmel LE. Patient information processing and the decision to accept treatment. J Soc Behav Pers 1985;1:113-20.

37. Labrie F, Candas B, Dupont A, et al. Screening decreases prostate cancer death: first analysis of the 1988 Quebec prospective randomized controlled trial. Prostate 1999;38:83-91.


Issue
The Journal of Family Practice - 49(05)
Page Number
418-424
Display Headline
The Effect of an Illustrated Pamphlet Decision-Aid on the Use of Prostate Cancer Screening Tests
Legacy Keywords
Prostatic neoplasms; informed consent; mass screening; decision making. (J Fam Pract 2000; 49:418-424)