Bonus-Pay Bonanza

Although there is a lot of debate about the effectiveness of pay-for-performance (P4P) plans, I think the plans are only going to increase in the foreseeable future.

We need more research to tell us the relative impact of public reporting of performance data and P4P programs. Most importantly, the details of how these plans are set up, how and what they measure, and the dollar amount involved will have everything to do with whether they are successful in improving the value of care we provide.

SHM’s Practice Management Committee conducted a mini-survey of hospitalist group leaders in 2006. Here are some of the key findings.

Performance thresholds should be set so hospitalists need to change their practices to achieve them, but not so far out of reach that hospitalists give up on them.

P4P Prevalence

Forty-one percent (60 of 146) of hospital medicine group (HMG) leaders reported their groups have a quality-incentive program. Participation rates were higher in some settings:

  • 60% of leaders at hospitals participating in a P4P program reported a quality-incentive program;
  • 50% of leaders at multispecialty/PCP medical groups did; and
  • 50% of leaders in the Southern region did.

Participation rates were lower at academic programs (28%) and local hospitalist-only groups (31%).

Group vs. Individual Incentives

Of the HMG leaders participating in a quality-incentive program:

  • 43% reported it was an individual incentive;
  • 35% reported it was a group incentive;
  • 10% reported the plan had elements of both individual and group incentives; and
  • 12% were not sure if their plans had individual or group incentives.

Basis of Quality Targets

Of the HMG leaders reporting that they participate in a quality-incentive program (respondents could indicate one or more answers):

  • 60% of the programs have targets based on national benchmarks;
  • 23% have targets based on local or regional benchmarks;
  • 37% have targets based on their hospital’s previous experience; and
  • 47% have targets based on improvement over a baseline.

Maximum Impact of Incentives

Of the HMG leaders reporting that they participate in a quality-incentive program, the maximum potential impact on annual compensation broke down as follows:

  • 16% report the maximum impact is less than 3%;
  • 24% report the maximum impact is from 3% to 7%;
  • 35% report the maximum impact is from 8% to 10%;
  • 17% report the maximum impact is from 11% to 20%;
  • 3% report the maximum impact is more than 20%; and
  • 5% report they do not know the maximum impact.

Receipt of Incentive Payments

Of the HMG leaders reporting that they participate in a quality-incentive program:

  • 61% said they have received an incentive payment;
  • 37% have not received an incentive payment; and
  • 2% were unsure if they have received an incentive payment.

Quality Metrics

The most common metrics used in P4P programs, based on 29 responses to the SHM survey: 

  • 93% of HM programs have metrics based on The Joint Commission’s (JCAHO) heart failure measures;
  • 86% have metrics based on JCAHO pneumonia measures;
  • 79% have metrics based on JCAHO myocardial infarction measures;
  • 28% have metrics based on a measure of medication reconciliation;
  • 24% have metrics based on avoidance of unapproved abbreviations;
  • 24% have metrics based on 100,000 Lives Campaign measures;
  • 21% have metrics based on patient satisfaction measures;
  • 17% have metrics based on transitions-of-care measures;
  • 10% have metrics based on throughput measures;
  • 7% have metrics based on end-of-life measures;
  • 7% have metrics based on “good citizenship” measures;
  • 7% have metrics based on mortality rate measures; and
  • 7% have metrics based on readmission rate measures.
The most common metrics used in quality-incentive programs, based on 45 responses to SHM’s survey: 

  • 73% of programs use JCAHO heart failure measures;
  • 73% use “good citizenship” measures;
  • 73% use patient satisfaction measures;
  • 67% use JCAHO pneumonia measures;
  • 51% use transitions-of-care measures;
  • 44% use JCAHO M.I. measures;
  • 31% use throughput measures;
  • 27% use avoidance of unapproved abbreviations;
  • 24% use a measure based on medication reconciliation;
  • 11% use 100,000 Lives Campaign measures;
  • 9% use readmission rate measures;
  • 7% use mortality rate measures; and
  • 2% use end-of-life measures.

Recommendations

The prevalence of hospitalist quality-based compensation plans is continuing to grow rapidly, but the details of the plans’ structure will govern whether they benefit our patients, improve the overall value of the care we provide, and serve as a meaningful component of our compensation. I suggest each practice consider implementing plans with the following attributes:

A total dollar amount available for performance that is large enough to influence hospitalist behavior. I think quality incentives should constitute as much as 15% to 20% of a hospitalist’s annual income. Plans that tie quality performance to 7% or less of annual compensation (the case for 40% of groups in the survey above) are rarely effective.

Money vs. metrics. It usually is better to establish a plan based on a sliding scale of improved performance rather than a single threshold. For example, if all of the bonus money is available for a 10% improvement in performance, consider providing 10% of the total available money for each 1% improvement in performance.
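The sliding-scale idea is simple arithmetic; the column gives no formula, so the following sketch is purely illustrative, with hypothetical function and variable names of my own:

```python
# Illustrative sliding-scale bonus: each 1% of performance improvement
# earns 10% of the available bonus pool, capped at the full pool once
# a 10% improvement is reached. All names and numbers are hypothetical.
def sliding_scale_bonus(pool_dollars, improvement_pct, cap_pct=10.0):
    """Return the bonus earned for a given percentage-point improvement."""
    credited = min(max(improvement_pct, 0.0), cap_pct)  # clamp to [0, cap]
    return pool_dollars * (credited / cap_pct)

# A 4% improvement earns 40% of a $10,000 pool, i.e. $4,000;
# a 12% improvement still pays only the full $10,000.
print(sliding_scale_bonus(10_000, 4))   # 4000.0
print(sliding_scale_bonus(10_000, 12))  # 10000.0
```

The design point is that partial improvement earns partial payment, so a group that falls just short of a single all-or-nothing threshold is not paid nothing.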

Degree of difficulty. Performance thresholds should be set so that hospitalists need to change their practices to achieve them, but not so far out of reach that hospitalists give up on them. This can get tricky: many practices err by setting thresholds that are very easy to reach (e.g., near the current level of performance), which simply rewards the status quo.

Metrics for which trusted data is readily available. In most cases, this means using data already being collected. Avoid hard-to-track metrics, as they are likely to lead to disagreements about their accuracy.

Group vs. individual measures. Most performance metrics can’t be clearly attributed to one hospitalist rather than another. For example, who gets the credit or blame for Ms. Smith getting, or not getting, a Pneumovax? The majority of performance metrics are best measured and paid on a group basis. Some metrics, such as documenting medication reconciliation on admission and discharge, can be reliably attributed to a single hospitalist and could be paid on an individual basis.

Small number of metrics. A meaningfully large amount of money should be connected to each one. Don’t make the mistake of dividing a $10,000-per-doctor annual quality bonus pool among 20 metrics (each metric would pay a maximum of $500 per year).
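The dilution warned about here is just division, but it is worth making explicit; this sketch uses the hypothetical numbers from the paragraph above:

```python
# Spreading a fixed bonus pool across many metrics dilutes the payout
# any single metric can deliver: $10,000 over 20 metrics is $500 each,
# likely too little to change behavior. Numbers are illustrative only.
def max_payout_per_metric(pool_dollars, n_metrics):
    """Maximum annual payout attributable to any single metric."""
    return pool_dollars / n_metrics

print(max_payout_per_metric(10_000, 20))  # 500.0
print(max_payout_per_metric(10_000, 4))   # 2500.0
```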

Rotating metrics. Consider an annual meeting with members of your hospital’s administration to jointly establish the metrics used in the hospitalist quality incentive for that year. It is reasonable to change the metrics periodically.

It seems to me P4P programs are in their infancy, and will continue to evolve rapidly. Plans that fail to improve outcomes enough to justify the complexity of implementing, tracking, and paying for them will disappear slowly. (I wonder if payment for pneumovax administration during the hospital stay will be in this category.) And new, more effective, and more valuable programs will be developed.

Hospitalist practices will need to be nimble to keep pace with all of this change. Although SHM can alert you to how new P4P initiatives might affect your practice, and even recommend methods to improve your performance, you and your hospitalist colleagues still will have a lot of work to operationalize these programs in your practice. TH

Dr. Nelson has been a practicing hospitalist since 1988 and is co-founder and past president of SHM. He is a principal in Nelson/Flores Associates, a national hospitalist practice management consulting firm. He is part of the faculty for SHM’s “Best Practices in Managing a Hospital Medicine Program” course. This column represents his views and is not intended to reflect an official position of SHM.

Issue
The Hospitalist - 2009(02)