Affiliations
Institute of Health Policy Management and Evaluation, and Department of Medicine, University of Toronto, and Departments of Medicine, Mount Sinai Hospital and University Health Network, Toronto, Ontario, Canada
Given name(s)
Andrew M.
Family name
Ryan
Degrees
PhD

Value‐Based Payment

Display Headline
No hospital left behind? Education policy lessons for value‐based payment in healthcare

The United States is moving aggressively toward value-based payment. The Department of Health and Human Services recently announced a goal to link 85% of Medicare's fee-for-service payments to quality or value by 2016.[1] Despite the inherent logic of paying providers for their results, evidence of the effectiveness of value-based payment has been mixed and underwhelming. Recent reviews of pay-for-performance, reflecting the emerging understanding of the complexities of designing successful programs, have painted a more negative picture of their overall effectiveness.[2, 3] One study of over 6 million patients found that the Medicare Premier Hospital Quality Incentive Demonstration had no effect on long-term patient outcomes, including 30-day mortality.[4] At the same time, research suggests that lower-performing providers tend to have a disproportionate number of poor patients, many of whom are racial and ethnic minorities. Value-based payment risks the dual failure of not improving health outcomes while exacerbating health inequities.

We have seen this movie before. In 2001, No Child Left Behind was enacted to improve quality and reduce inequities in K-12 education in the United States. Much like healthcare, education suffers from uneven quality and wide socioeconomic disparities.[5] No Child Left Behind attempted to address these problems with new accountability measures. Based on the results of standardized tests, No Child Left Behind rewarded the highest-performing schools with more funding while penalizing poor-performing schools with reduced funding and, in some cases, forcing failing schools to cede control to outside operators.

In the aftermath of its implementation, however, it became clear that these incentives had not worked as intended. No Child Left Behind did not improve reading performance and was associated with improvements in math performance only for younger students.[6] These modest gains came at a high cost: consistent with teaching to the test, No Child Left Behind shifted instructional time toward math and reading and away from other subjects. It also led to widespread cheating, challenging the validity of observed performance improvements. Before No Child Left Behind was rolled out, the wealthiest school districts in the country spent as much as 10 times more than the poorest districts.[5] Because the law penalized the lowest performers, these gaps persisted; schools were not given the support they needed to improve performance.

The parallels to healthcare are striking (Table 1). Early results from Medicare's Hospital Value-Based Purchasing and Readmission Reduction Program show that hospitals caring for more disadvantaged patients have been disproportionately penalized.[7] Similar "reverse Robin Hood" effects have been observed in incentive programs for physician practices.[8] Over time, financial incentive programs may substantially decrease operating revenue for hospitals and physicians caring for low-income and minority communities. This could perpetuate the already large disparities in quality and health outcomes facing these populations. Although risk-adjusting for socioeconomic status may alleviate these concerns in the short term, allowing low-income or minority patients to have poorer health outcomes simply accepts that disparities exist rather than trying to reduce them.

Table 1. Financial Incentive and Collaboration-Based Programs in Healthcare and Education

National incentive programs
Examples
  Healthcare: Hospital Value-Based Purchasing; Hospital Readmission Reduction Program; Hospital-Acquired Conditions Penalty Program; Physician Value-Based Payment Modifier.
  Education: No Child Left Behind.
Approach toward improving performance
  Healthcare: Reimbursements are tied to quality and cost. Bonuses are given to hospitals and providers that perform well on performance metrics; low performers are penalized with lower reimbursements.
  Education: Test-based accountability: results of standardized tests are used to determine levels of federal funding, and schools failing to meet testing goals are penalized with reductions in funding. Takeover of failing districts: districts failing to make adequate yearly progress for 5 years in a row must implement a restructuring plan that may involve changing the school's governance arrangement, converting the school to a charter, or turning the school over to a private management company.
Unintended consequences
  Healthcare: Gaming; ignoring or neglecting areas of care that are unincentivized; avoiding high-risk or disadvantaged patients.
  Education: Cheating to boost test scores; shifting instructional time toward math and reading; states intentionally making assessment tools easier; stress among administrators, teachers, and students due to high-stakes testing.

Collaboration-based programs
Examples
  Healthcare: Quality collaboratives; Hospital Engagement Networks.
  Education: Shanghai school system.
Approach toward improving performance
  Healthcare: Improvement networks: high-performing hospitals or providers are identified and work with other groups to improve patient treatment and the care process. Data sharing: facilities collect and share data to monitor quality improvements and better identify best practices.
  Education: Pairing of districts: high-performing districts are paired with lower performing districts to exchange education development plans, curricula, and teaching materials. Commissioned administration: a high-performing school partners with low performers by sending experienced teachers and administrators to share successful practices and turn around their performance.
Example of success
  Healthcare: The Michigan Surgical Quality Collaborative was associated with a 2.6% drop in general and vascular surgery complications; hospitals participating in the program improved at a faster rate than those outside it.
  Education: Zhabei District No. 8 School, located in an area with high crime rates and low student performance, was transformed from one of the lowest performing schools in its district to ranking 15 out of 30. Approximately 80% of the school's graduates go on to study at universities, compared with the municipal average of 56%.

How then is it possible to improve the quality of care at lower performing hospitals without simultaneously designing an incentive system that hurts them? Lessons from education policy are again instructive. Every 3 years, the Organization for Economic Cooperation and Development ranks countries by the performance of their 15-year-olds on a standardized test called the Program for International Student Assessment.[9] For the past 2 sets of rankings, Shanghai, China, has topped the list. Like many attempts to generate international rankings, this one has its flaws, and Shanghai's top position has not been without controversy. For one, China is not ranked at the country level like other nations, and because of the city's status as a wealthy business and financial center, Shanghai certainly cannot be considered representative of the Chinese education system. Nevertheless, the story of how Shanghai reformed its education system and achieved its high position has important implications.

Prior to implementing reforms, Shanghai's rural outer districts struggled with less funding, high teacher turnover, and low test scores compared with wealthier urban districts. To reduce education disparities within the city's schools, the government of Shanghai enacted a number of policies aimed at bringing lower performers up to the level of the schools with the highest student achievement.[10] The government gives schools a grade of A, B, C, or D based on the quality of their infrastructure and student performance. It then uses several programs to facilitate the exchange of staff and ideas among schools at different levels. One program pairs high-performing districts with low-performing districts to share education development plans, curricula, teaching materials, and best practices. Another strategy, called commissioned administration, involves temporary contracts between schools to exchange both administrators and experienced teachers. In addition to these approaches, the government sets a minimum level of spending for schools and transfers public funds to indigent districts to help them reach this level.

The notion that the very best can help the weak requires a sense of solidarity. This solidarity may falter in environments in which hospitals and physicians are in cutthroat competition. Though there will always be some tension between competition and collaboration, in most markets competition between hospitals does not rule out collaboration, and policies can either relieve or reinforce that tension. This suggests that adopting reforms with the same intent as the Shanghai system is still possible in healthcare, especially through physician and other provider networks. The healthcare workforce has a rich history of cross-organizational collaboration through mentorships, the publication of research, and participation in continuing medical education courses. The Centers for Medicare and Medicaid Services' Hospital Engagement Networks, a program in which leading organizations have helped to disseminate interventions to reduce hospital-acquired conditions, are an example of this approach. Quality collaboratives, groups of providers who work across institutions to identify problems and best practices for improvement, have similarly shown great promise.[11] Similar approaches have been used by the Institute for Healthcare Improvement in many of its quality improvement initiatives.

Such collaboration-based programs could be harnessed and tied to financial incentives for quality improvement. For instance, top-performing hospitals could be incentivized to participate in a venue where they share their best practices with the lower performers in their field. Low performers, in turn, could be provided with financial assistance to implement the appropriate changes. Over time, financial assistance could be made contingent on quality improvements. By giving physicians and other providers examples of what success looks like and helping them garner the resources to reach that level, such programs would not only incentivize improvement but also make it more tangible.

Although some hospitals and physicians may welcome changes to incentive systems, implementation of collaboration-based programs would not be possible without a facilitator that is willing to underwrite program costs, provide financial incentives to providers, and develop a platform for collaboration. Large insurers are the most likely group to have the financial resources and widespread networks to develop such programs, but that does not mean that they would be willing to experiment with this approach, especially if cost savings and measurable improvements in quality are not immediate. Even though the results of collaboration-based efforts have been promising, the implementation of these programs has been limited, and adoption in different contexts may not yield the same results. Collaboration-based programs that have already shown success can serve as models, but they may need significant adaptations to meet the needs of providers in a given area.

Despite their promise, collaboration-based strategies alone will not be enough to improve certain aspects of quality and value. Although providing physicians with knowledge on how to reduce unnecessary care, for example, could help limit overutilization, it is not sufficient to overcome the incentives of fee-for-service payment. In this case, broader payment reform and population-based accountability can be paired with programs to encourage collaboration. For instance, the Blue Cross Blue Shield of Massachusetts Alternative Quality Contract has used a combination of technical assistance, shared savings, and large quality bonuses to improve quality and reduce the growth of medical spending.[12] Collaboration-based strategies should be seen as a complement to these broad, thoughtful reforms and a substitute for narrow incentives that encourage myopia and destructive competition.

Evidence from education and healthcare shows that penalizing the worst and rewarding the best will not shift the bell curve of performance. Such approaches are more likely to entrench and expand disparities. Instead, policy should encourage and incentivize collaboration to expand best practices that improve patient outcomes. Lessons from education provide both cautionary tales and novel solutions that might improve healthcare.

Disclosure: Nothing to report.

References
  1. Burwell SM. Setting value-based payment goals—HHS efforts to improve US health care. N Engl J Med. 2015;372:897-899.
  2. Herck P, Smedt D, Annemans L, Remmen R, Rosenthal MB, Sermeus W. Systematic review: effects, design choices, and context of pay-for-performance in health care. BMC Health Serv Res. 2010;10:247.
  3. Houle SK, McAlister FA, Jackevicius CA, Chuck AW, Tsuyuki RT. Does performance-based remuneration for individual health care practitioners affect patient care?: a systematic review. Ann Intern Med. 2012;157(12):889-899.
  4. Jha AK, Joynt KE, Orav J, Epstein AM. The long-term effect of premier pay-for-performance on patient outcomes. N Engl J Med. 2012;366:1606-1615.
  5. Darling-Hammond L. Race, inequality and education accountability: the irony of 'no child left behind.' Race Ethn Educ. 2007;10:245-260.
  6. Dee TS, Jacob BA. The impact of no child left behind on students, teachers, and schools. In: Brookings Papers on Economic Activity. Washington, DC: The Brookings Institution; 2010:149-194.
  7. Ryan AM. Will value-based purchasing increase disparities in care? N Engl J Med. 2013;369:2472-2474.
  8. Chien AT, Wroblewski K, Damberg C, et al. Do physician organizations located in lower socioeconomic status areas score lower on pay-for-performance measures? J Gen Intern Med. 2012;27:548-554.
  9. Loveless T. Brown Center Chalkboard. Attention OECD-PISA: your silence on China is wrong. Washington, DC: The Brookings Institution; 2013:48.
  10. Organisation for Economic Cooperation and Development. Shanghai and Hong Kong: two distinct examples of education reform in China. In: Strong Performers and Successful Reformers in Education: Lessons from PISA for the United States. Paris, France: OECD Publishing; 2010:83-115.
  11. Share DA, Campbell DA, Birkmeyer N, et al. How a regional collaborative of hospitals and physicians in Michigan cut costs and improved the quality of care. Health Aff. 2011;30:636-645.
  12. Song Z, Rose S, Safran DG, Landon BE, Day MP, Chernew ME. Changes in health care spending and quality 4 years into global payment. N Engl J Med. 2014;371:1704-1714.
Issue
Journal of Hospital Medicine - 11(1)
Page Number
62-64

Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Andrew Ryan, 1420 Washington Heights, M3124, Ann Arbor, MI 48109‐2029; Telephone: 734‐936‐1311; Fax: 734‐936‐4338; E‐mail: amryan@umich.edu

Public Quality Reporting

Display Headline
Grade pending: Lessons for hospital quality reporting from the New York City restaurant sanitation inspection program

Few consumers would choose to dine at a restaurant if they knew the kitchen was infested with cockroaches. Few patients would choose to undergo a liver transplant in a hospital that was performing the procedure for the first time. In most sectors, consumers gather information about quality (and price) from the marketplace, where economic theory predicts that rational behavior and competition will lead to continuous improvement over time. However, for some goods and services, information is sparse and asymmetric between consumers and suppliers. In sectors where consumer health is at risk, society has often intervened to assure minimum standards. Yet sometimes these efforts have fallen short. In healthcare, physician licensure and hospital accreditation (eg, through the Joint Commission), although providing an important foundation to assure safety, have not come close to solving the widespread quality problems.[1] Basic regulatory requirements for restaurants have also proven inadequate to prevent food‐borne illness. Consumer trust, without information, can be a recipe (or prescription) for trouble.

In response, high-profile efforts have been introduced to publicize the quality and safety of service providers. One example is Hospital Compare, Medicare's national quality reporting program for US hospitals.[2] The New York City sanitary grade inspection program is a parallel effort for restaurants. Although customers can judge how much they like the food from a restaurant, or look up reviews on Yelp.com, they face greater difficulty identifying whether a restaurant was responsible for making them sick. By publicizing restaurants' sanitation conditions, the New York City inspection program seeks to use market forces to decrease food-borne illness by deterring consumers from eating at restaurants with poor sanitation grades.

The aims of Hospital Compare and the New York City sanitary inspection program are fundamentally similar. Both initiatives seek to address a common market failure resulting in the consumer's lack of information on quality and safety. By infusing the market with information, these programs enable consumers to make better choices and encourage service providers to improve quality and safety.[3] Despite the promise of these programs, a copious literature about the effects of public quality reporting in healthcare has found mixed results.[4, 5] Although the performance measures in any public reporting program must be valid and reliable, good measures are not sufficient to achieve the goals of public reporting. To engage patients, reported results must also be accessible, understandable, and meaningful. Both patients' lack of knowledge about the reports[6] and patients' inability to effectively use these data to make better decisions[7] are some reasons why public quality reporting has fallen short of its expectations. This article argues that the New York City program is much better structured to positively affect patient choice, and holds important lessons for public quality reporting in US hospitals.

CONTRASTS BETWEEN HOSPITAL COMPARE AND THE NEW YORK CITY RESTAURANT SANITARY INSPECTION PROGRAM

Hospital Compare reports performance for US hospitals on 108 separate indicators of quality and patient safety (Table 1). These are a combination of structure measures (eg, hospital participation in a systematic database for cardiac surgery), process-of-care measures (eg, acute myocardial infarction patients receiving fibrinolytic therapy within 30 minutes of hospital arrival), outcomes (eg, 30-day mortality and readmission), and patient experience measures (eg, how you would rate your communication with your physician). Hospital Compare data, frequently based on hospital quality performance 1 to 3 years prior to publication, are displayed on a website. Hospitals do not receive a summary measure of quality or safety.[8] Hospitals face financial incentives that are tied to measure reporting[9] and to performance on some of the measures on Hospital Compare.[10, 11] Hospital accreditation is only loosely related to performance on these measures.

Table 1. Contrasts Between Hospital Compare and the New York City Sanitary Inspection Program

Display of information
  Hospital Compare: On a website (http://www.medicare.gov/hospitalcompare/search.html).
  New York City program: On the front of the restaurant, with additional information also available on a website (http://www.nyc.gov/html/doh/html/services/restaurant-inspection.shtml).
Frequency of information update
  Hospital Compare: Quarterly; data often lag by 1 to 3 years.
  New York City program: Unannounced inspections occur at least annually; grades are posted immediately after inspection.
Quality measures
  Hospital Compare: Mix of measures pertaining to quality improvement activities (eg, hospital participation in a cardiac surgery registry or a quality improvement initiative), rates of adherence to evidence-based medicine (eg, heart failure patients receiving discharge instructions, acute myocardial infarction patients receiving a beta-blocker at arrival), and patient outcomes (eg, 30-day mortality and 30-day readmission for acute myocardial infarction, heart failure, and pneumonia).
  New York City program: Mix of measures pertaining to conditions of the facility (eg, improper sewage disposal system, improper food contact surface, evidence of live rats in the facility) and the treatment and handling of food (eg, food is unwrapped, appropriate thermometer not used to measure temperature of potentially hazardous foods, food not prepared to a sufficiently high temperature).
Clarity and simplicity of information
  Hospital Compare: 108 individual measures; no summary measure.
  New York City program: Single summary letter grade displayed on the front of the restaurant; detailed data on individual violations (ie, measures) available on a website.
Consequences of poor performance and mechanisms for enforcement
  Hospital Compare: Hospitals are subject to financial penalties for not reporting certain measures and face financial incentives for performance on a subset of measures.
  New York City program: Restaurants are fined for violations, are subject to repeated inspections for poor performance, and are subject to closure for severe violations.
Consumer awareness
  Hospital Compare: Limited.
  New York City program: Widespread.

The New York City sanitation program regularly inspects restaurants and scores them on a standard set of indicators that correspond to critical violations (eg, food is contaminated by mouse droppings) or general violations (eg, garbage is not adequately covered).[12] Points are assigned to each type and severity of violation, and the sum of the points is converted into a summary grade of A, B, or C. Restaurants can dispute their grades, receiving a "grade pending" designation until the dispute is adjudicated. After inspection, sanitation grades are immediately posted on the restaurant's front door or window, providing current information that is clearly visible to consumers before entering. More detailed information on sanitation violations is also available on a website. If restaurants receive an A grade, they face no additional inspections for 1 year, but poorly graded restaurants may receive monthly inspections. Restaurants face fines for violations and are subject to closure for severe violations. Recently proposed changes would decrease fines and give restaurants greater opportunities to appeal grades, but would leave the program otherwise intact.[13]
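To make the point-to-grade conversion concrete, here is a minimal sketch in Python. The cut points used (0-13 points for an A, 14-27 for a B, 28 or more for a C) are the commonly cited thresholds for the New York City program but should be treated as an assumption here, and the individual point values are invented for illustration; this is not the city's official scoring code.

```python
def letter_grade(violation_points, a_max=13, b_max=27):
    """Convert a list of inspection violation point values into a letter grade.

    Assumption: grades follow the commonly cited New York City cut points
    (0-13 points = A, 14-27 = B, 28 or more = C); fewer points indicate
    better sanitary conditions.
    """
    total = sum(violation_points)
    if total <= a_max:
        grade = "A"
    elif total <= b_max:
        grade = "B"
    else:
        grade = "C"
    return total, grade


# Illustrative inspection: each value is the point score assigned to one
# observed violation (the values below are made up for the example).
total, grade = letter_grade([7, 5, 2])
print(total, grade)  # 14 B
```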

IMPLICATIONS FOR PUBLIC QUALITY REPORTING IN HOSPITALS

Along with value-based payment reforms, public quality reporting is one of the few major system-level approaches being implemented in the US to improve quality and safety in healthcare. However, without a simple and understandable display of information that is available when a patient needs it, quality and safety information will likely go unused.[14] Hospital Compare leaves it up to the patient to find the quality and safety information and does little to help patients understand and use the information effectively. Hospital Compare asks patients to do far more work, which is perhaps why it has been largely ignored by patients.[2, 15] The New York City sanitation inspection program evaluates restaurants, prominently displays an understandable summary result, and puts the scoring details in the background. Although peer-reviewed evaluations of the New York City sanitation inspection program have not yet been published, internal data show that the program has decreased customer concern about getting sick, improved sanitary practices, and decreased Salmonella infections.[16] Evidence from a similar program in Los Angeles County found that hygiene grades steered consumers toward restaurants with better sanitary conditions and decreased food-borne illness.[17]

The nature of choice in healthcare, particularly the choice of hospital, is much different than it is for restaurants. In some areas, a single hospital may serve a large geographic area, severely limiting choice. Even when patients have the ability to receive care at different hospitals, choice may be limited because patients are referred to a specific hospital by their outpatient physician or are brought to a hospital during an emergency.[18] In these cases, quality grades on the front doors of hospitals would not affect patient decisions, at least for that admission. Nonetheless, if quality grades were posted on the front doors of hospitals, patients receiving both inpatient and outpatient care would see the grades and could use the information to make future decisions. Posted grades may also lead patients to review more in-depth quality information related to their condition on the Hospital Compare website. Posted quality grades would also increase the visibility of the grades for other stakeholders, including the media and boards of directors, magnifying their salience and impact.

How quality information is displayed and summarized can make or break public reporting programs. The New York City sanitation inspection program displays summarized, composite measures in the form of widely understood letter grades. Hospital Compare, however, displays myriad, unrelated performance measures that are not summarized into a global quality or safety measure. This information display is at odds with best practice. Patients find it difficult to synthesize data from multiple performance indicators to determine the relative quality of healthcare providers or insurance plans.[7] In many cases, more information can lead to worse decision making.[19] Patients' difficulty making optimal choices has been noted in numerous healthcare settings, including purchasing Medicare Part D plans[20] and choosing health plans.[21] Recent evidence suggests that Nursing Home Compare's shift from an unsummarized collection of disparate performance measures to a 5-star rating system has led patients to choose higher-ranked facilities.[22] The fact that commercial providers of product quality information, such as Consumer Reports[23] and US News and World Report,[24] publish global summary scores, in addition to component scores, is a hint that this style of reporting is more appealing to consumers. Reports suggest that Medicare is moving toward a 5-star quality rating system for hospitals,[8] which is a welcome development.

Different types of patients may demand different types of quality information, and a single summary measure for Hospital Compare may not meet the needs of a diverse set of patients. Nonetheless, the benefits from an actionable, understandable, comprehensive, and appropriate summary measure likely outweigh the costs of a potential mismatch for certain types of patients. Many of the performance measures on Hospital Compare already apply broadly to diverse sets of patients (eg, the structure measures, patient experience, and surgical safety) and are not specific to certain disease areas. Global summary measures could be complemented by separate component scores (eg, by disease area or domain of quality) for patients who wanted information on different aspects of care.
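As an illustration of such layered reporting (a hypothetical sketch, not the methodology of Hospital Compare, Nursing Home Compare, or any planned Medicare star rating), the snippet below averages invented, normalized measure scores within each domain, combines the domain scores with assumed weights into a global score, and maps that score to a 1-to-5 star rating.

```python
from statistics import mean

# Hypothetical normalized measure scores (0-100, higher is better), grouped by
# quality domain. Measure names, values, and weights are invented for illustration.
domains = {
    "patient_experience": {"doctor_communication": 88, "quietness_of_environment": 70},
    "safety": {"surgical_site_infection": 93, "central_line_infection": 85},
    "readmission": {"heart_failure_readmission": 76, "pneumonia_readmission": 81},
}
weights = {"patient_experience": 0.3, "safety": 0.4, "readmission": 0.3}  # sum to 1

# Component score for each domain: a simple average of its normalized measures.
component_scores = {domain: mean(measures.values()) for domain, measures in domains.items()}

# Global summary score: a weighted average of the domain scores.
global_score = sum(weights[domain] * score for domain, score in component_scores.items())

# Map the global score to a 1-5 star rating using assumed, evenly spaced cutoffs.
stars = min(5, max(1, int(global_score // 20) + 1))

print(component_scores)               # domain-level detail for patients who want it
print(round(global_score, 1), stars)  # single headline result for everyone else
```

The particular weights and cutoffs would, of course, require the same care in development and validation as the underlying measures; the point is only that component detail and a single headline rating can coexist.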

The inspection regime underlying the New York City program has parallels in healthcare that could be extended to Hospital Compare. For instance, the Joint Commission performs surprise inspections of hospitals as part of its accreditation process. The publicly reported 5‐star ratings for nursing homes are also based, in part, on inspection results.[25] Results from these types of inspections can capture up‐to‐date information on important dimensions of quality and safety that are not available in standard administrative data sources. Incorporating inspection results into Hospital Compare could increase both the timeliness and the validity of the reporting.

The New York City sanitation inspection program is not a panacea: the indicators may not capture all relevant aspects of restaurant sanitation, some research suggests that past sanitary grades do not predict future grades,[26] and sanitary grade inflation over time has the potential to mask meaningful differences in sanitary conditions that are related to food‐borne illness.[16, 26] However, by providing understandable and meaningful reports at the point of service, the New York City program is well designed to encourage sanitation improvement through both consumer and supplier behavior.

Where the New York City sanitation inspection program succeeds, Hospital Compare fails. Hospital Compare is not patient centered, and it is not working for patients. Medicare can learn from the New York City restaurant sanitation inspection program to enhance the effects of public reporting by presenting information to consumers that is relevant, easy to access and interpret, and up to date. The greater complexity of hospital product lines should not deter these efforts. Patients' lives, not just the health of their gastrointestinal tracts, are at stake.

ACKNOWLEDGEMENTS

The authors thank Kaveh G. Shojania, MD, and Edward E. Etchells, MD, MSc, University of Toronto, and Martin Roland, DM, University of Oxford and RAND Europe for their comments on an earlier draft of the manuscript. None were compensated for their contributions.

Disclosures: Nothing to report.

References
  1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
  2. Ryan AM, Nallamothu BK, Dimick JB. Medicare's public reporting initiative on hospital quality had modest or no impact on mortality from three key conditions. Health Aff (Millwood). 2012;31(3):585-592.
  3. Muller MP, Detsky AS. Public reporting of hospital hand hygiene compliance—helpful or harmful? JAMA. 2010;304(10):1116-1117.
  4. Epstein AJ. Do cardiac surgery report cards reduce mortality? Assessing the evidence. Med Care Res Rev. 2006;63(4):403-426.
  5. Kolstad JT, Chernew ME. Quality and consumer decision making in the market for health insurance and health care services. Med Care Res Rev. 2009;66(1 suppl):28S-52S.
  6. Schneider EC, Epstein AM. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA. 1998;279(20):1638-1642.
  7. Hibbard JH, Slovic P, Jewett JJ. Informing consumer decisions in health care: implications from decision‐making research. Milbank Q. 1997;75(3):395-414.
  8. Centers for Medicare & Medicaid Services. Hospital inpatient prospective payment systems for acute care hospitals and the long‐term care hospital prospective payment system and proposed fiscal year 2014 rates; quality reporting requirements for specific providers; hospital conditions of participation. Fed Regist. 2013:27486-27823.
  9. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
  10. Ryan AM. Will value‐based purchasing increase disparities in care? N Engl J Med. 2013;369(26):2472-2474.
  11. Joynt KE, Jha AK. A path forward on Medicare readmissions. N Engl J Med. 2013;368(13):1175-1177.
  12. New York City Department of Health and Mental Hygiene. What to expect when you're inspected: a guide for food service operators. New York, NY: New York City Department of Health and Mental Hygiene; 2010.
  13. Grynbaum MM. In reprieve for restaurant industry, New York proposes changes to grading system. New York Times. March 22, 2014:A15.
  14. Kahneman D. Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux; 2011.
  15. Huesch MD, Currid‐Halkett E, Doctor JN. Public hospital quality report awareness: evidence from National and Californian Internet searches and social media mentions, 2012. BMJ Open. 2014;4(3):e004417.
  16. New York City Department of Health and Mental Hygiene. Restaurant Grading in New York City at 18 Months. New York, NY: New York City Department of Health and Mental Hygiene; 2013.
  17. Jin GZ, Leslie P. The effect of information on product quality: evidence from restaurant hygiene grade cards. Q J Econ. 2003;118(2):409-451.
  18. Doyle JJ, Graves JA, Gruber J, Kleiner S. Do high‐cost hospitals deliver better care? Evidence from ambulance referral patterns. National Bureau of Economic Research. Working paper no. 17936. Available at: http://www.nber.org/papers/w17936.pdf. Published March 2012. Accessed November 18, 2014.
  19. Peters E, Dieckmann N, Dixon A, Hibbard JH, Mertz CK. Less is more in presenting quality information to consumers. Med Care Res Rev. 2007;64(2):169-190.
  20. Abaluck J, Gruber J. Choice inconsistencies among the elderly: evidence from plan choice in the Medicare Part D program. Am Econ Rev. 2011;101(4):1180-1210.
  21. Hibbard JH, Slovic P, Peters E, Finucane ML. Strategies for reporting health plan performance information to consumers: evidence from controlled studies. Health Serv Res. 2002;37(2):291-313.
  22. Hirth RA, Huang SS. Quality reporting and private prices: evidence from the nursing home industry. Paper presented at: American Society of Health Economists Annual Meeting; June 23, 2014; Los Angeles, CA.
  23. Consumer Reports. Best new car values. Available at: http://consumerreports.org/cro/2012/05/best-new-car-values/index.htm. Updated February 2014. Accessed November 18, 2014.
  24. Morse R. Best value schools methodology. US News and World Report. September 8, 2014. Available at: http://www.usnews.com/education/best-colleges/articles/2013/09/09/best-value-schools-methodology. Accessed November 18, 2014.
  25. Centers for Medicare & Medicaid Services. 122:574-677.
Issue
Journal of Hospital Medicine - 10(2)
Page Number
116-119
Display Headline
Grade pending: Lessons for hospital quality reporting from the New York City restaurant sanitation inspection program
Article Source
© 2014 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Andrew M. Ryan, PhD, 1415 Washington Heights, SPH II Rm. 3125, Ann Arbor, MI 48104; Telephone: 734.936.1311; Fax: 734.936.4338; E‐mail: amryan@umich.edu