Updated: Mon, 11/11/2019 - 11:47

CMS considers how to assess socioeconomic factors

Since 2005 the federal government’s Hospital Compare website has publicly reported quality data on hospitals, with periodic updates of their performance on specific measures. But how accurately do the ratings reflect a hospital’s actual quality of care, and what do they mean for hospitalists?

Dr. Kate Goodrich

Hospital Compare provides consumers with searchable, comparable quality of care data submitted by more than 4,000 Medicare-certified hospitals, along with Department of Veterans Affairs and military health system hospitals. It is designed to let consumers select hospitals and directly compare their mortality, complication, infection, and other performance measures for conditions such as heart attack, heart failure, pneumonia, and surgical outcomes.

The Overall Hospital Quality Star Ratings, which began in 2016, combine data from more than 50 quality measures publicly reported on Hospital Compare into an overall rating of one to five stars for each hospital. These ratings are designed to enhance and supplement existing quality measures with a more “customer-centric” measure that makes it easier for consumers to act on the information. The aim is to help consumers who feel overwhelmed by the volume of data on the Hospital Compare website and by the complexity of some of the measures.
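The general mechanics of collapsing dozens of measures into a single star rating can be sketched in simplified form. The group names, weights, and quintile cutoffs below are illustrative assumptions only, not CMS’s actual specification (which groups measures, weights the group scores into a summary score, and then forms star categories):

```python
import numpy as np

# Hypothetical measure-group weights (illustrative only; CMS's actual
# weighting scheme is part of the methodology under review).
WEIGHTS = {
    "mortality": 0.22,
    "safety": 0.22,
    "readmission": 0.22,
    "patient_experience": 0.22,
    "timely_effective_care": 0.12,
}

def summary_score(group_scores: dict) -> float:
    """Weighted average of standardized group scores, renormalizing the
    weights when a hospital lacks data for some groups."""
    present = {g: s for g, s in group_scores.items() if s is not None}
    total_w = sum(WEIGHTS[g] for g in present)
    return sum(WEIGHTS[g] * s for g, s in present.items()) / total_w

def assign_stars(scores: list) -> list:
    """Assign 1-5 stars by quintile of the summary-score distribution
    (a simplification; CMS has used k-means clustering to form the five
    star categories)."""
    cutoffs = np.percentile(scores, [20, 40, 60, 80])
    return [int(np.searchsorted(cutoffs, s) + 1) for s in scores]

stars = assign_stars(list(range(10)))  # toy summary scores for 10 hospitals
```

One consequence visible even in this toy version: because star cutoffs depend on the whole distribution, a hospital’s stars can shift when other hospitals’ data change, even if its own performance does not.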

A spring 2019 call by the Centers for Medicare & Medicaid Services (CMS) for public comment on possible methodological changes to the Overall Hospital Quality Star Ratings received more than 800 comments from 150 organizations. This past summer, CMS decided to delay posting the refreshed Star Ratings in its Hospital Compare data preview reports for July 2019; the agency says it intends to release the updated information in early 2020. Meanwhile, the reported data – particularly the overall star ratings – continue to generate controversy in the hospital field.

Hospitalists’ critical role

Hospitalists are not rated individually on Hospital Compare, but they play important roles in the quality of care their hospital provides – and thus ultimately the hospital’s publicly reported rankings. Hospitalists typically are not specifically incentivized or penalized for their hospital’s performance, but this does happen in some cases.

“Hospital administrators absolutely take note of their hospital’s star ratings. These are the people hospitalists work for, and this is definitely top of their minds,” said Kate Goodrich, MD, MHS, director of the Center for Clinical Standards and Quality at CMS. “I recently spoke at an SHM annual conference and every question I was asked was about hospital ratings and the star system,” noted Dr. Goodrich, herself a practicing hospitalist at George Washington University Medical Center in Washington.

The government’s aim for Hospital Compare is to give consumers easy-to-understand indicators of the quality of care provided by hospitals, especially where they might have a choice of hospitals, such as for an elective surgery. Making that information public is also viewed as a motivator to help drive improvements in hospital performance, Dr. Goodrich said.

“In terms of what we measure, we try to make sure it’s important to patients and to clinicians. We have frontline practicing physicians, patients, and families advising us, along with methodologists and PhD researchers. These stakeholders tell us what is important to measure and why,” she said. “Hospitals and all health providers need more actionable and timely data to improve their quality of care, especially if they want to participate in accountable care organizations. And we need to make the information easy to understand.”

Dr. Goodrich sees two main themes in the public response to the agency’s request for comment. “People say the methodology we use to calculate star ratings is frustrating for hospitals, which have found it difficult to model their performance, predict their star ratings, or explain the discrepancies.” Hospitals caring for sicker patients of lower socioeconomic status also say the ratings unfairly penalize them. “I work in a large urban hospital, and I understand this. They say we don’t take that sufficiently into account in the ratings,” she said.

“While our modeling shows that current ratings highly correlate with performance on individual measures, we have asked for comment on if and how we could adjust for socioeconomic factors. We are actively considering how to make changes to address these concerns,” Dr. Goodrich said.

In August 2019, CMS acknowledged that it plans to change the methodology used to calculate hospital star ratings in early 2021, but has not yet revealed specific details about the nature of the changes. The agency intends to propose the changes through the public rule-making process sometime in 2020.

Continuing controversy

The American Hospital Association – which has had strong concerns about the methodology and the usefulness of hospital star ratings – is pushing back on some of the changes to the system being considered by CMS. In its submitted comments, AHA supported only three of the 14 potential star ratings methodology changes being considered. AHA and the Association of American Medical Colleges, among others, have urged taking down the star ratings until major changes can be made.

“When the star ratings were first implemented, a lot of challenges became apparent right away,” said Akin Demehin, MPH, AHA’s director of quality policy. “We began to see that those hospitals that treat more complicated patients and poorer patients tended to perform more poorly on the ratings. So there was something wrong with the methodology. Then, starting in 2018, hospitals began seeing real shifts in their performance ratings when the underlying data hadn’t really changed.”

CMS uses a statistical approach called latent variable modeling. Its underlying assumption is that you can say something about a hospital’s underlying quality based on the data you already have, Mr. Demehin said, but noted “that can be a questionable assumption.” He also emphasized the need for ratings that compare hospitals that are similar in size and model to each other.
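The approach Mr. Demehin describes can be illustrated with a toy one-factor model (entirely hypothetical data and structure, not CMS’s actual specification): each hospital has an unobserved quality score, the reported measures are noisy reflections of it, and the model infers both the scores and the measure loadings from the data itself. Because the loadings are re-estimated on each new data release, the effective weights can shift even when hospitals’ underlying performance does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-factor latent variable model: each hospital's reported
# measures are noisy readings of a single unobserved "quality" score.
n_hospitals, n_measures = 500, 8
quality = rng.normal(size=n_hospitals)             # latent, never observed
loadings = rng.uniform(0.5, 1.0, size=n_measures)  # how strongly each measure
                                                   # reflects latent quality
noise = rng.normal(scale=0.6, size=(n_hospitals, n_measures))
measures = np.outer(quality, loadings) + noise     # what actually gets reported

# Fit: recover the latent score as the first principal component of the
# standardized measures. Refitting on a new data release produces new
# loadings - one source of the weight shifts hospitals observed.
z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
estimated = z @ vt[0]  # estimated latent quality per hospital

# The model recovers the simulated truth well here because the toy data
# really do come from one factor - the "questionable assumption" is that
# real hospital data share this structure.
r = abs(np.corrcoef(estimated, quality)[0, 1])
```

In this simulation the recovered scores correlate strongly with the true latent quality, but only because the data were generated by exactly the one-factor structure the model assumes.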

Dr. Suparna Dutta

Suparna Dutta, MD, chief of the division of hospital medicine at Rush University, Chicago, said analyses done at Rush showed that the statistical model CMS used to calculate the star ratings dynamically changed the weighting of certain measures in every release. “That meant one specific performance measure could play an outsized role in determining a final rating,” she said. In particular, she said, the methodology inadvertently penalized large hospitals, academic medical centers, and institutions that provide heroic care.

“We fundamentally believe that consumers should have meaningful information about hospital quality,” said Nancy Foster, AHA’s vice president for quality and patient safety policy. “We understand the complexities of Hospital Compare and the challenges of getting simple information for consumers. To its credit, CMS is thinking about how to do that, and we support them in that effort.”

Getting a handle on quality

Hospitalists are responsible for ensuring that their hospitals excel in the care of patients, said Julius Yang, MD, hospitalist and director of quality at Beth Israel Deaconess Medical Center in Boston. That also requires keeping up with the primary public mechanisms for addressing these issues: the reporting of quality data and reimbursement policy. “That should be part of our core competencies as hospitalists.”

Some of the measures on Hospital Compare don’t overlap much with the work of hospitalists, he noted. But for others – such as pneumonia, COPD, and stroke care, or mortality and 30-day readmission rates – “we are involved, even if not directly, and certainly responsible for contributing to the outcomes and the opportunity to add value,” he said.

“When it comes to 30-day readmission rates, do we really understand the risk factors for readmissions and the barriers to patients remaining in the community after their hospital stay? Are our patients stable enough to be discharged, and have we worked with the care coordination team to make sure they have the resources they need? And have we communicated adequately with the outpatient doctor? All of these things are within the wheelhouse of the hospitalist,” Dr. Yang said. “Let’s accept that the readmissions rate, for example, is not a perfect measure of quality. But as an imperfect measure, it can point us in the right direction.”

Dr. Jose Figueroa

Jose Figueroa, MD, MPH, hospitalist and assistant professor at Harvard Medical School, has been studying for his health system the impact on health equity of hospital penalty programs such as the Hospital Readmissions Reduction Program. In general, hospitalists play an important role in shaping care processes and serving on quality-oriented committees across multiple realms of the hospital, he said.

“What’s hard from the hospitalist’s perspective is that there don’t seem to be simple solutions to move the dial on many of these measures,” Dr. Figueroa said. “If the hospital is at three stars, can we say, okay, if we do X, Y, and Z, then our hospital will move from three to five stars? Some of these measures are so broad and not in our purview. Which ones apply to me as a hospitalist and my care processes?”

Dr. Dutta sits on the SHM Policy Committee, which has been working to bring these issues to the attention of frontline hospitalists. “Hospitalists are always going to be aligned with their hospital’s priorities. We’re in it to provide high-quality care, but there’s no magic way to do that,” she said.

Hospital Compare measures sometimes end up in hospitalist incentive plans – readmission rates tied to penalties, for example – even though such measures are fairly arbitrary and hard to pin to one doctor, Dr. Dutta explained. “If you look at the evidence regarding these metrics, there are not a lot of data to show that the metrics lead to what we really want, which is better care for patients.”

A recent study in The BMJ, for example, examined the association between penalties under the Hospital Acquired Condition Reduction Program and clinical outcomes.1 The researchers concluded that the penalties were not associated with significant change and did not appear to drive meaningful clinical improvement.

How can hospitalists engage with Compare?

Dr. Goodrich refers hospitalists seeking quality resources to their local quality improvement organizations (QIOs) and to Hospital Improvement Innovation Networks at the regional, state, national, or hospital system level.

One helpful thing that any group of hospitalists could do, added Dr. Figueroa, is to examine the measures closely and determine which ones they think they can influence. “Then look for the hospitals that resemble ours and care for similar patients, based on the demographics. We can then say: ‘Okay, that’s a fair comparison. This can be a benchmark with our peers,’” he said. Then it’s important to ask how your hospital is doing over time on these measures, and use that to prioritize.

“You also have to appreciate that these are broad quality measures, and to impact them you have to do broad quality improvement efforts. Another piece of this is getting good at collecting and analyzing data internally in a timely fashion. You don’t want to wait 2-3 years to find out in Hospital Compare that you’re not performing well. You care about the care you provided today, not 2 or 3 years ago. Without this internal check, it’s impossible to know what to invest in – and to see if things you do are having an impact,” Dr. Figueroa said.

“As physician leaders, this is a real opportunity for us to trigger a conversation with our hospital’s administration around what we went into medicine for in the first place – to improve our patients’ care,” said Dr. Goodrich. She said Hospital Compare is one tool for sparking systemic quality improvement across the hospital – which is an important part of the hospitalist’s job. “If you want to be a bigger star within your hospital, show that level of commitment. It likely would be welcomed by your hospital.”
 

Reference

1. Sankaran R et al. Changes in hospital safety following penalties in the US Hospital Acquired Condition Reduction Program: retrospective cohort study. BMJ. 2019 Jul 3. doi: 10.1136/bmj.l4109.

Publications
Topics
Sections

CMS considers how to assess socioeconomic factors

CMS considers how to assess socioeconomic factors

Since 2005 the government website Hospital Compare has publicly reported quality data on hospitals, with periodic updates of their performance, including specific measures of quality. But how accurately do the ratings reflect a hospital’s actual quality of care, and what do the ratings mean for hospitalists?

Dr. Kate Goodrich of the George Washington Hospital Center in Washington
Dr. Kate Goodrich

Hospital Compare provides searchable, comparable information to consumers on reported quality of care data submitted by more than 4,000 Medicare-certified hospitals, along with Veterans Administration and military health system hospitals. It is designed to allow consumers to select hospitals and directly compare their mortality, complication, infection, and other performance measures on conditions such as heart attacks, heart failure, pneumonia, and surgical outcomes.

The Overall Hospital Quality Star Ratings, which began in 2016, combine data from more than 50 quality measures publicly reported on Hospital Compare into an overall rating of one to five stars for each hospital. These ratings are designed to enhance and supplement existing quality measures with a more “customer-centric” measure that makes it easier for consumers to act on the information. Obviously, this would be helpful to consumers who feel overwhelmed by the volume of data on the Hospital Compare website, and by the complexity of some of the measures.

A posted call in spring 2019 by CMS for public comment on possible methodological changes to the Overall Hospital Quality Star Ratings received more than 800 comments from 150 different organizations. And this past summer, the Centers for Medicare & Medicaid Services decided to delay posting the refreshed Star Ratings in its Hospital Compare data preview reports for July 2019. The agency says it intends to release the updated information in early 2020. Meanwhile, the reported data – particularly the overall star ratings – continue to generate controversy for the hospital field.
 

Hospitalists’ critical role

Hospitalists are not rated individually on Hospital Compare, but they play important roles in the quality of care their hospital provides – and thus ultimately the hospital’s publicly reported rankings. Hospitalists typically are not specifically incentivized or penalized for their hospital’s performance, but this does happen in some cases.

“Hospital administrators absolutely take note of their hospital’s star ratings. These are the people hospitalists work for, and this is definitely top of their minds,” said Kate Goodrich, MD, MHS, director of the Center for Clinical Standards and Quality at CMS. “I recently spoke at an SHM annual conference and every question I was asked was about hospital ratings and the star system,” noted Dr. Goodrich, herself a practicing hospitalist at George Washington University Medical Center in Washington.

The government’s aim for Hospital Compare is to give consumers easy-to-understand indicators of the quality of care provided by hospitals, especially where they might have a choice of hospitals, such as for an elective surgery. Making that information public is also viewed as a motivator to help drive improvements in hospital performance, Dr. Goodrich said.

“In terms of what we measure, we try to make sure it’s important to patients and to clinicians. We have frontline practicing physicians, patients, and families advising us, along with methodologists and PhD researchers. These stakeholders tell us what is important to measure and why,” she said. “Hospitals and all health providers need more actionable and timely data to improve their quality of care, especially if they want to participate in accountable care organizations. And we need to make the information easy to understand.”

Dr. Goodrich sees two main themes in the public response to its request for comment. “People say the methodology we use to calculate star ratings is frustrating for hospitals, which have found it difficult to model their performance, predict their star ratings, or explain the discrepancies.” Hospitals taking care of sicker patients with lower socioeconomic status also say the ratings unfairly penalize them. “I work in a large urban hospital, and I understand this. They say we don’t take that sufficiently into account in the ratings,” she said.

“While our modeling shows that current ratings highly correlate with performance on individual measures, we have asked for comment on if and how we could adjust for socioeconomic factors. We are actively considering how to make changes to address these concerns,” Dr. Goodrich said.

In August 2019, CMS acknowledged that it plans to change the methodology used to calculate hospital star ratings in early 2021, but has not yet revealed specific details about the nature of the changes. The agency intends to propose the changes through the public rule-making process sometime in 2020.
 

 

 

Continuing controversy

The American Hospital Association – which has had strong concerns about the methodology and the usefulness of hospital star ratings – is pushing back on some of the changes to the system being considered by CMS. In its submitted comments, AHA supported only three of the 14 potential star ratings methodology changes being considered. AHA and the American Association of Medical Colleges, among others, have urged taking down the star ratings until major changes can be made.

“When the star ratings were first implemented, a lot of challenges became apparent right away,” said Akin Demehin, MPH, AHA’s director of quality policy. “We began to see that those hospitals that treat more complicated patients and poorer patients tended to perform more poorly on the ratings. So there was something wrong with the methodology. Then, starting in 2018, hospitals began seeing real shifts in their performance ratings when the underlying data hadn’t really changed.”

CMS uses a statistical approach called latent variable modeling. Its underlying assumption is that you can say something about a hospital’s underlying quality based on the data you already have, Mr. Demehin said, but noted “that can be a questionable assumption.” He also emphasized the need for ratings that compare hospitals that are similar in size and model to each other.

Dr. Suparna Dutta, division chief, hospital medicine, Rush University, Chicago
Dr. Suparna Dutta

Suparna Dutta, MD, division chief, hospital medicine, Rush University, Chicago, said analyses done at Rush showed that the statistical model CMS used in calculating the star ratings was dynamically changing the weighting of certain measures in every release. “That meant one specific performance measure could play an outsized role in determining a final rating,” she said. In particular the methodology inadvertently penalized large hospitals, academic medical centers, and institutions that provide heroic care.

“We fundamentally believe that consumers should have meaningful information about hospital quality,” said Nancy Foster, AHA’s vice president for quality and patient safety policy at AHA. “We understand the complexities of Hospital Compare and the challenges of getting simple information for consumers. To its credit, CMS is thinking about how to do that, and we support them in that effort.”
 

Getting a handle on quality

Hospitalists are responsible for ensuring that their hospitals excel in the care of patients, said Julius Yang, MD, hospitalist and director of quality at Beth Israel Deaconess Medical Center in Boston. That also requires keeping up on the primary public ways these issues are addressed through reporting of quality data and through reimbursement policy. “That should be part of our core competencies as hospitalists.”

Some of the measures on Hospital Compare don’t overlap much with the work of hospitalists, he noted. But for others, such as for pneumonia, COPD, and care of patients with stroke, or for mortality and 30-day readmissions rates, “we are involved, even if not directly, and certainly responsible for contributing to the outcomes and the opportunity to add value,” he said.

“When it comes to 30-day readmission rates, do we really understand the risk factors for readmissions and the barriers to patients remaining in the community after their hospital stay? Are our patients stable enough to be discharged, and have we worked with the care coordination team to make sure they have the resources they need? And have we communicated adequately with the outpatient doctor? All of these things are within the wheelhouse of the hospitalist,” Dr. Yang said. “Let’s accept that the readmissions rate, for example, is not a perfect measure of quality. But as an imperfect measure, it can point us in the right direction.”

Dr. Jose Figueroa, Harvard Medical School, Boston
Dr. Jose Figueroa

Jose Figueroa, MD, MPH, hospitalist and assistant professor at Harvard Medical School, has been studying for his health system the impact of hospital penalties such as the Hospital Readmissions Reduction Program on health equity. In general, hospitalists play an important role in dictating processes of care and serving on quality-oriented committees across multiple realms of the hospital, he said.

“What’s hard from the hospitalist’s perspective is that there don’t seem to be simple solutions to move the dial on many of these measures,” Dr. Figueroa said. “If the hospital is at three stars, can we say, okay, if we do X, Y, and Z, then our hospital will move from three to five stars? Some of these measures are so broad and not in our purview. Which ones apply to me as a hospitalist and my care processes?”

Dr. Dutta sits on the SHM Policy Committee, which has been working to bring these issues to the attention of frontline hospitalists. “Hospitalists are always going to be aligned with their hospital’s priorities. We’re in it to provide high-quality care, but there’s no magic way to do that,” she said.

Hospital Compare measures sometimes end up in hospitalist incentives plans – for example, the readmission penalty rates – even though that is a fairly arbitrary measure and hard to pin to one doctor, Dr. Dutta explained. “If you look at the evidence regarding these metrics, there are not a lot of data to show that the metrics lead to what we really want, which is better care for patients.”

A recent study in the British Medical Journal, for example, examined the association between the penalties on hospitals in the Hospital Acquired Condition Reduction Program and clinical outcome.1 The researchers concluded that the penalties were not associated with significant change or found to drive meaningful clinical improvement.
 

 

 

How can hospitalists engage with Compare?

Dr. Goodrich refers hospitalists seeking quality resources to their local quality improvement organizations (QIO) and to Hospital Improvement Innovation Networks at the regional, state, national, or hospital system level.

One helpful thing that any group of hospitalists could do, added Dr. Figueroa, is to examine the measures closely and determine which ones they think they can influence. “Then look for the hospitals that resemble ours and care for similar patients, based on the demographics. We can then say: ‘Okay, that’s a fair comparison. This can be a benchmark with our peers,’” he said. Then it’s important to ask how your hospital is doing over time on these measures, and use that to prioritize.

“You also have to appreciate that these are broad quality measures, and to impact them you have to do broad quality improvement efforts. Another piece of this is getting good at collecting and analyzing data internally in a timely fashion. You don’t want to wait 2-3 years to find out in Hospital Compare that you’re not performing well. You care about the care you provided today, not 2 or 3 years ago. Without this internal check, it’s impossible to know what to invest in – and to see if things you do are having an impact,” Dr. Figueroa said.

“As physician leaders, this is a real opportunity for us to trigger a conversation with our hospital’s administration around what we went into medicine for in the first place – to improve our patients’ care,” said Dr. Goodrich. She said Hospital Compare is one tool for sparking systemic quality improvement across the hospital – which is an important part of the hospitalist’s job. “If you want to be a bigger star within your hospital, show that level of commitment. It likely would be welcomed by your hospital.”
 

Reference

1. Sankaran R et al. Changes in hospital safety following penalties in the US Hospital Acquired Condition Reduction Program: retrospective cohort study. BMJ. 2019 Jul 3 doi: 10.1136/bmj.l4109.

Since 2005 the government website Hospital Compare has publicly reported quality data on hospitals, with periodic updates of their performance, including specific measures of quality. But how accurately do the ratings reflect a hospital’s actual quality of care, and what do the ratings mean for hospitalists?

Dr. Kate Goodrich of the George Washington Hospital Center in Washington
Dr. Kate Goodrich

Hospital Compare provides searchable, comparable information to consumers on reported quality of care data submitted by more than 4,000 Medicare-certified hospitals, along with Veterans Administration and military health system hospitals. It is designed to allow consumers to select hospitals and directly compare their mortality, complication, infection, and other performance measures on conditions such as heart attacks, heart failure, pneumonia, and surgical outcomes.

The Overall Hospital Quality Star Ratings, which began in 2016, combine data from more than 50 quality measures publicly reported on Hospital Compare into an overall rating of one to five stars for each hospital. These ratings are designed to enhance and supplement existing quality measures with a more “customer-centric” measure that makes it easier for consumers to act on the information. Obviously, this would be helpful to consumers who feel overwhelmed by the volume of data on the Hospital Compare website, and by the complexity of some of the measures.

A posted call in spring 2019 by CMS for public comment on possible methodological changes to the Overall Hospital Quality Star Ratings received more than 800 comments from 150 different organizations. And this past summer, the Centers for Medicare & Medicaid Services decided to delay posting the refreshed Star Ratings in its Hospital Compare data preview reports for July 2019. The agency says it intends to release the updated information in early 2020. Meanwhile, the reported data – particularly the overall star ratings – continue to generate controversy for the hospital field.
 

Hospitalists’ critical role

Hospitalists are not rated individually on Hospital Compare, but they play important roles in the quality of care their hospital provides – and thus ultimately the hospital’s publicly reported rankings. Hospitalists typically are not specifically incentivized or penalized for their hospital’s performance, but this does happen in some cases.

“Hospital administrators absolutely take note of their hospital’s star ratings. These are the people hospitalists work for, and this is definitely top of their minds,” said Kate Goodrich, MD, MHS, director of the Center for Clinical Standards and Quality at CMS. “I recently spoke at an SHM annual conference and every question I was asked was about hospital ratings and the star system,” noted Dr. Goodrich, herself a practicing hospitalist at George Washington University Medical Center in Washington.

The government’s aim for Hospital Compare is to give consumers easy-to-understand indicators of the quality of care provided by hospitals, especially where they might have a choice of hospitals, such as for an elective surgery. Making that information public is also viewed as a motivator to help drive improvements in hospital performance, Dr. Goodrich said.

“In terms of what we measure, we try to make sure it’s important to patients and to clinicians. We have frontline practicing physicians, patients, and families advising us, along with methodologists and PhD researchers. These stakeholders tell us what is important to measure and why,” she said. “Hospitals and all health providers need more actionable and timely data to improve their quality of care, especially if they want to participate in accountable care organizations. And we need to make the information easy to understand.”

Dr. Goodrich sees two main themes in the public response to its request for comment. “People say the methodology we use to calculate star ratings is frustrating for hospitals, which have found it difficult to model their performance, predict their star ratings, or explain the discrepancies.” Hospitals taking care of sicker patients with lower socioeconomic status also say the ratings unfairly penalize them. “I work in a large urban hospital, and I understand this. They say we don’t take that sufficiently into account in the ratings,” she said.

“While our modeling shows that current ratings highly correlate with performance on individual measures, we have asked for comment on if and how we could adjust for socioeconomic factors. We are actively considering how to make changes to address these concerns,” Dr. Goodrich said.

In August 2019, CMS acknowledged that it plans to change the methodology used to calculate hospital star ratings in early 2021, but has not yet revealed specific details about the nature of the changes. The agency intends to propose the changes through the public rule-making process sometime in 2020.
 

 

 

Continuing controversy

The American Hospital Association – which has had strong concerns about the methodology and the usefulness of hospital star ratings – is pushing back on some of the changes to the system being considered by CMS. In its submitted comments, AHA supported only three of the 14 potential star ratings methodology changes being considered. AHA and the American Association of Medical Colleges, among others, have urged taking down the star ratings until major changes can be made.

“When the star ratings were first implemented, a lot of challenges became apparent right away,” said Akin Demehin, MPH, AHA’s director of quality policy. “We began to see that those hospitals that treat more complicated patients and poorer patients tended to perform more poorly on the ratings. So there was something wrong with the methodology. Then, starting in 2018, hospitals began seeing real shifts in their performance ratings when the underlying data hadn’t really changed.”

CMS uses a statistical approach called latent variable modeling. Its underlying assumption is that you can say something about a hospital’s underlying quality based on the data you already have, Mr. Demehin said, but noted “that can be a questionable assumption.” He also emphasized the need for ratings that compare hospitals that are similar in size and model to each other.


Suparna Dutta, MD, division chief of hospital medicine at Rush University, Chicago, said analyses done at Rush showed that the statistical model CMS used to calculate the star ratings dynamically changed the weighting of certain measures with every release. “That meant one specific performance measure could play an outsized role in determining a final rating,” she said. In particular, she noted, the methodology inadvertently penalized large hospitals, academic medical centers, and institutions that provide heroic care.

“We fundamentally believe that consumers should have meaningful information about hospital quality,” said Nancy Foster, AHA’s vice president for quality and patient safety policy. “We understand the complexities of Hospital Compare and the challenges of getting simple information for consumers. To its credit, CMS is thinking about how to do that, and we support them in that effort.”

Getting a handle on quality

Hospitalists are responsible for ensuring that their hospitals excel in the care of patients, said Julius Yang, MD, hospitalist and director of quality at Beth Israel Deaconess Medical Center in Boston. That also requires keeping up with the primary public mechanisms for addressing these issues: the reporting of quality data and reimbursement policy. “That should be part of our core competencies as hospitalists.”

Some of the measures on Hospital Compare don’t overlap much with the work of hospitalists, he noted. But for others, such as those for pneumonia, chronic obstructive pulmonary disease, and care of patients with stroke, or for mortality and 30-day readmission rates, “we are involved, even if not directly, and certainly responsible for contributing to the outcomes and the opportunity to add value,” he said.

“When it comes to 30-day readmission rates, do we really understand the risk factors for readmissions and the barriers to patients remaining in the community after their hospital stay? Are our patients stable enough to be discharged, and have we worked with the care coordination team to make sure they have the resources they need? And have we communicated adequately with the outpatient doctor? All of these things are within the wheelhouse of the hospitalist,” Dr. Yang said. “Let’s accept that the readmissions rate, for example, is not a perfect measure of quality. But as an imperfect measure, it can point us in the right direction.”


Jose Figueroa, MD, MPH, hospitalist and assistant professor at Harvard Medical School, Boston, has been studying, for his health system, the impact of penalty programs such as the Hospital Readmissions Reduction Program on health equity. In general, hospitalists play an important role in dictating processes of care and serving on quality-oriented committees across multiple realms of the hospital, he said.

“What’s hard from the hospitalist’s perspective is that there don’t seem to be simple solutions to move the dial on many of these measures,” Dr. Figueroa said. “If the hospital is at three stars, can we say, okay, if we do X, Y, and Z, then our hospital will move from three to five stars? Some of these measures are so broad and not in our purview. Which ones apply to me as a hospitalist and my care processes?”

Dr. Dutta sits on the Society of Hospital Medicine’s Policy Committee, which has been working to bring these issues to the attention of frontline hospitalists. “Hospitalists are always going to be aligned with their hospital’s priorities. We’re in it to provide high-quality care, but there’s no magic way to do that,” she said.

Hospital Compare measures sometimes end up in hospitalist incentives plans – for example, the readmission penalty rates – even though that is a fairly arbitrary measure and hard to pin to one doctor, Dr. Dutta explained. “If you look at the evidence regarding these metrics, there are not a lot of data to show that the metrics lead to what we really want, which is better care for patients.”

A recent study in the British Medical Journal, for example, examined the association between the penalties levied under the Hospital Acquired Condition Reduction Program and clinical outcomes.1 The researchers concluded that the penalties were not associated with significant changes in outcomes and did not appear to drive meaningful clinical improvement.

How can hospitalists engage with Compare?

Dr. Goodrich refers hospitalists seeking quality resources to their local quality improvement organizations (QIOs) and to Hospital Improvement Innovation Networks at the regional, state, national, or hospital system level.

One helpful thing that any group of hospitalists could do, added Dr. Figueroa, is to examine the measures closely and determine which ones they think they can influence. “Then look for the hospitals that resemble ours and care for similar patients, based on the demographics. We can then say: ‘Okay, that’s a fair comparison. This can be a benchmark with our peers,’” he said. Then it’s important to ask how your hospital is doing over time on these measures, and use that to prioritize.

“You also have to appreciate that these are broad quality measures, and to impact them you have to do broad quality improvement efforts. Another piece of this is getting good at collecting and analyzing data internally in a timely fashion. You don’t want to wait 2-3 years to find out in Hospital Compare that you’re not performing well. You care about the care you provided today, not 2 or 3 years ago. Without this internal check, it’s impossible to know what to invest in – and to see if things you do are having an impact,” Dr. Figueroa said.

“As physician leaders, this is a real opportunity for us to trigger a conversation with our hospital’s administration around what we went into medicine for in the first place – to improve our patients’ care,” said Dr. Goodrich. She said Hospital Compare is one tool for sparking systemic quality improvement across the hospital – which is an important part of the hospitalist’s job. “If you want to be a bigger star within your hospital, show that level of commitment. It likely would be welcomed by your hospital.”
 

Reference

1. Sankaran R et al. Changes in hospital safety following penalties in the US Hospital Acquired Condition Reduction Program: Retrospective cohort study. BMJ. 2019 Jul 3. doi: 10.1136/bmj.l4109.
