Over the last 5 years, I’ve periodically devoted this column to updates on the Hospital Value-Based Purchasing (HVBP) program. HVBP launched in 2013 as a 5-year mixed upside/downside incentive program with mandatory participation for all U.S. acute care hospitals (critical access, acute inpatient rehabilitation, and long-term acute care hospitals are exempt). The program initially included process and patient experience measures. It later added measures for mortality, efficiency, and patient safety.
For the 2017 version of HVBP, the measures are allocated as follows: eight for patient experience, seven for patient safety (one of which is a roll-up of 11 claims-based measures), three for process, and three for mortality. HVBP uses a budget-neutral funding approach, with some winners and some losers but net zero overall spending on the program. It initially put hospitals at risk for 1% of their Medicare inpatient payments (in 2013), with a progressive increase to 2% by this year. HVBP has used a complex approach to determining incentives and penalties, rewarding either improvement or achievement, depending on the baseline performance of the hospital.
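To make that achievement-versus-improvement mechanic concrete, here is a simplified sketch in Python. The thresholds and linear point scales are illustrative assumptions, not the exact CMS formulas (which add rounding rules, caps, and domain weighting); the point is simply that each measure is scored on the better of where a hospital stands against national performance and how far it has come from its own baseline.

```python
def measure_score(performance, baseline, achievement_threshold, benchmark):
    """Simplified HVBP-style score for one measure (0-10 points).

    Assumes higher performance is better and that points scale
    linearly; the actual CMS formulas differ in rounding details.
    """
    def interpolate(value, low, high, max_points):
        # Map value from the [low, high] range onto [0, max_points].
        if high == low:
            return max_points
        fraction = (value - low) / (high - low)
        return max(0.0, min(max_points, fraction * max_points))

    # Achievement: performance relative to the national threshold and benchmark.
    achievement = interpolate(performance, achievement_threshold, benchmark, 10)
    # Improvement: performance relative to the hospital's own baseline.
    improvement = interpolate(performance, baseline, benchmark, 9)
    # The hospital is credited with the better of the two scores.
    return max(achievement, improvement)

# A hospital below the national threshold can still earn points
# if it has improved substantially from its own baseline.
print(measure_score(performance=0.80, baseline=0.60,
                    achievement_threshold=0.85, benchmark=0.95))
```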
When HVBP was rolled out, it seemed like a big deal. Hospitals devoted resources to it. I contended that hospitalists should pay attention to its measures and work with their hospital quality department to promote high performance in the relevant measure domains. I emphasized that the program was good for hospitalists because it put dollars behind the quality improvement projects we had been working on for some time – projects to improve HCAHPS scores; lower mortality; improve heart failure, heart attack, or pneumonia processes; and decrease hospital-acquired infections. For some perspective on the dollars at stake: by this year, a 700-bed hospital has about $3.4 million at risk in the program, and a 90-bed hospital has roughly $250,000 at risk.
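As a back-of-the-envelope check on those figures: with the full 2% of Medicare inpatient payments at risk, they imply annual Medicare inpatient payments of roughly $170 million for the 700-bed hospital and $12.5 million for the 90-bed one. The payment totals in the sketch below are assumptions back-solved from the column’s numbers, not reported figures.

```python
AT_RISK_SHARE = 0.02  # 2% of Medicare inpatient payments at risk in 2017

# Payment totals are assumptions back-solved from the dollar figures above.
medicare_inpatient_payments = {
    "700-bed hospital": 170_000_000,
    "90-bed hospital": 12_500_000,
}

for hospital, payments in medicare_inpatient_payments.items():
    at_risk = payments * AT_RISK_SHARE
    print(f"{hospital}: ${at_risk:,.0f} at risk")
# 700-bed hospital: $3,400,000 at risk
# 90-bed hospital: $250,000 at risk
```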
Has HVBP improved quality? Two studies looking at the early period of HVBP failed to show improvements in process or patient experience measures and demonstrated no change in mortality for heart failure, pneumonia, or heart attack.1,2 Now that the program is in its 5th and final year, thanks to a recent study by Ryan et al., we have an idea of whether HVBP is associated with longer-term improvements in quality.3
In the study, Ryan et al. compared hospitals participating in HVBP with critical access hospitals, which are exempt from the program. The study yielded some disappointing, if not surprising, results. Improvements in process and patient experience measures for HVBP hospitals were no greater than those for the control group. HVBP was not associated with a significant reduction in mortality for heart failure or heart attack, but was associated with a mortality reduction for pneumonia. In sum, HVBP was not associated with improvements in process or patient experience, and was not associated with lower mortality, except in pneumonia.
As a program designed to incentivize better quality, where did HVBP go wrong? I believe HVBP simply had too many measures for the cognitive bandwidth of an individual or a team looking to improve quality. The total measure count for 2017 is 21! I submit that a hospitalist working to improve quality can keep one or two measures top of mind, possibly three at most. While others have postulated that the dollars at risk are too small, I don’t think that’s the problem. Instead, my sense is that hospitalists and other members of the hospital team have quality improvement in their DNA and, regardless of the size of the financial incentives, will work to improve quality as long as they have the right tools. Chief among these are good performance data and the time to focus on a finite number of projects.
What lessons can inform better design in the future? As of January 2017, the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) – representing the biggest change in reimbursement in a generation – progressively exposes doctors and other professionals to upside/downside incentives for quality, resource utilization, use of a certified electronic health record (hospitalists are exempt as they already use the hospital’s EHR), and practice improvement activities.
It would be wise to learn from the shortcomings of HVBP. Namely, if MACRA stays on its course to incentivize physicians using a complicated formula based on four domains and many more subdomains, it will repeat the mistakes of HVBP and – while creating more administrative burden – likely improve quality very little, if at all. Instead, MACRA should delineate a simple measure set representing improvement activities that physicians and teams can incorporate into their regular workflow without taking more time away from patient care.
The reality is that complicated pay-for-performance programs divert limited available resources away from meaningful improvement activities in order to comply with onerous reporting requirements. As we gain a more nuanced understanding of how these programs work, policy makers should pay attention to the elements of “low-value” and “high-value” incentive systems and apply the “less is more” ethos of high-value care to the next generation of pay-for-performance programs.
Dr. Whitcomb is chief medical officer at Remedy Partners in Darien, Conn., and a cofounder and past president of SHM.
References
1. Ryan AM, Burgess JF, Pesko MF, Borden WB, Dimick JB. The early effects of Medicare’s mandatory hospital pay-for-performance program. Health Serv Res. 2015;50:81-97.
2. Figueroa JF, Tsugawa Y, Zheng J, Orav EJ, Jha AK. Association between the Value-Based Purchasing pay for performance program and patient mortality in US hospitals: observational study. BMJ. 2016;353:i2214.
3. Ryan AM, Krinsky S, Maurer KA, Dimick JB. Changes in hospital quality associated with Hospital Value-Based Purchasing. N Engl J Med. 2017;376:2358-66.