
Anticipating the A.I. revolution


Goal is to augment human performance

Artificial intelligence (A.I.) is likely to change almost everything in medical practice, according to a new book called “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again,” by Eric Topol, MD.


Dr. Topol told The Hospitalist that his book’s subtitle “is the paradox: the unexpected, far-reaching goal of A.I. that can, if used properly, restore the most important part of medicine – a deep patient-doctor relationship.”

That’s because A.I. can do more than enhance diagnoses; it can also help with tasks such as note-taking and reading scans, making it possible for hospitalists to spend more time connecting with their patients. “Hospitalists could have a much better handle on a patient’s dataset via algorithmic processing, providing alerts and augmented performance of hospitalists (when validated),” Dr. Topol said. “They can also expect far less keyboard use with the help of speech recognition, natural language processing, and deep learning.”

In an interview with the New York Times, Dr. Topol said that, by augmenting human performance, A.I. has the potential to markedly improve productivity, efficiency, workflow, accuracy, and speed for both doctors and patients, giving consumers more charge and control through algorithmic support of their data.

“We can’t, and will never, rely on only algorithms for interpretation of life and death matters,” he said. “That requires human expert contextualization, something machines can’t do.”

Of course, there could be pitfalls. “The liabilities include breaches of privacy and security, hacking, the lack of explainability of most A.I. algorithms, the potential to worsen inequities, the embedded bias, and ethical quandaries,” he said.
 

Reference

1. O’Connor A. How Artificial Intelligence Could Transform Medicine. New York Times. March 11, 2019. https://www.nytimes.com/2019/03/11/well/live/how-artificial-intelligence-could-transform-medicine.html.


Going beyond the QI project


Role modeling for residents

Quality improvement (QI) education is increasingly seen as core content in graduate medical education, said Brian Wong, MD, FRCPC, of the University of Toronto. One of the most common strategies for teaching QI is to have residents participate in a QI project, in which hospitalists often take a leading role.

“Given the investment made and time spent carrying out these projects, it is important to know whether or not the training has led to the desired outcome from both a learning and a project standpoint,” Dr. Wong said. That question prompted him to coauthor a recent editorial on the subject in BMJ Quality & Safety. QI educators have long recognized that it’s difficult to know whether the education was successful.

“For example, if the project was not successful, does it matter if the residents learned key QI principles that they were able to apply to their project work?” Dr. Wong noted. “Our perspective extends this discussion by asking, ‘What does success look like in QI education?’ We argue that rather than focusing on whether the project was successful or not, our real goal should be to create QI educational experiences that will ensure that residents change their behaviors in future practice to embrace QI as an activity that is core to their everyday work.”

Hospitalists have an important role in that. “They can set the stage for learners to recognize just how important it is to incorporate QI into daily work. Through this role modeling, residents who carry out QI projects can see that the lessons learned contribute to lifelong engagement in QI.”

Dr. Wong’s hope is to focus on the type of QI experience that fosters long-term behavior changes.

“We want residents, when they graduate, to embrace QI, to volunteer to participate in organizational initiatives, to welcome practice data and reflect on it for the purposes of continuous improvement, to collaborate interprofessionally to make small iterative changes to the care delivery system to ensure that patients receive the highest quality of care possible,” he said. “My hope is that we can start to think differently about how we measure success in QI education.”

Reference

1. Myers JS, Wong BM. Measuring outcomes in quality improvement education: Success is in the eye of the beholder. BMJ Qual Saf. 2019 Mar 18. doi: 10.1136/bmjqs-2018-008305.


Glycemic Control eQUIPS yields success at Dignity Health Sequoia Hospital


Glucometrics database aids tracking, trending

In honor of Diabetes Awareness Month, The Hospitalist spoke recently with Stephanie Dizon, PharmD, BCPS, director of pharmacy at Dignity Health Sequoia Hospital in Redwood City, Calif. Dr. Dizon was the project lead for Dignity Health Sequoia’s participation in the Society of Hospital Medicine’s Glycemic Control eQUIPS program. The Northern California hospital was recognized as a top performer in the program.

Dr. Stephanie Dizon, Dignity Health Sequoia Hospital, Redwood City, Calif.

SHM’s eQUIPS offers a virtual library of resources, including a step-by-step implementation guide, addressing issues that range from subcutaneous insulin protocols to care coordination and hypoglycemia management. In addition, the program offers access to a data center for performance tracking and benchmarking.

Dr. Dizon shared her experience as a participant in the program and explained its impact on glycemic control at Dignity Health Sequoia Hospital.
 

Could you tell us about your personal involvement with SHM?

I started as the quality lead for glycemic control for Sequoia Hospital in 2017, while serving as the clinical pharmacy manager. Currently, I am the director of pharmacy.

What inspired your institution to enroll in the GC eQUIPS program? What were the challenges it helped you address?

Sequoia Hospital began this journey to improve overall glycemic control in 2011, in a collaborative with eight other Dignity Health hospitals. At Sequoia Hospital, this effort was led by Karen Harrison, RN, MSN, CCRN. At the time, Dignity Health saw variations in insulin management and adverse events, which inspired the group to review its practices and find a better way to standardize them. The hope was that sharing information and standardizing practices would lead to better glycemic control.

Enrollment in the GC eQUIPS program helped Sequoia Hospital efficiently analyze data sets that would otherwise be too large to manage. In addition, tracking and trending these large data sets helped us not only see where the hospital’s greatest challenges in glycemic control lie but also observe the impact of the changes we make. We were part of a nine-site study that demonstrated the effectiveness of GC eQUIPS and highlighted the collective success across the health system.
 

What did you find most useful in the suite of resources included in eQUIPS?

The benchmarking webinars and informational webinars that have been provided by Greg Maynard, MD, over the years have been especially helpful. They have broadened my understanding of glycemic control. The glucometrics database is especially helpful for tracking and trending – we share these reports on a monthly basis with nursing and provider leadership. In addition, being able to benchmark ourselves with other hospitals pushes us to improve and keep an eye on glycemic control.

Are there any other highlights from your participation – and your institution’s – in the program that you feel would be beneficial to others who may be considering enrollment?

Having access to the tools available in the GC eQUIPS program is very powerful for data analysis and benchmarking. As a result, it allows the people at an institution to focus on the day-to-day tasks, clinical initiatives, and building a culture that can make a program successful instead of focusing on data collection.

For more information on SHM’s Glycemic Control resources or to enroll in eQUIPS, visit hospitalmedicine.org/gc.


Better time data from in-hospital resuscitations


Benefits of an undocumented defibrillator feature

Research and quality improvement (QI) related to in-hospital cardiopulmonary resuscitation attempts (“codes” from here forward) are hampered significantly by the poor quality of data on time intervals from arrest onset to clinical interventions.1

John A. Stewart, a hospital nurse in Seattle

In 2000, the American Heart Association’s (AHA) Emergency Cardiac Care Guidelines said that current data were inaccurate and that greater accuracy was “the key to future high-quality research”2 – but since then, the general situation has not improved: Times to first defibrillation reported by the national AHA-supported registry Get With the Guidelines–Resuscitation (GWTG-R, with more than 200 hospitals enrolled) show a median of 1 minute and a first quartile of 0 minutes across all hospitals.3 Such numbers are typical – when they are tracked at all – but they strain credulity, and prima facie evidence is available at most clinical simulation centers simply by timing simulated defibrillation attempts under realistic conditions, as in “mock codes.”4,5

Taking artificially short time-interval data from GWTG-R or other sources at face value can hide serious delays in response to in-hospital arrests. It can also lead to flawed studies and highly questionable conclusions.6

The key to accuracy of critical time intervals – the intervals from arrest to key interventions – is an accurate time of arrest.7 Codes are typically recorded in handwritten form, though they may later be transcribed or scanned into electronic records. The “start” of the code for unmonitored arrests and most monitored arrests is typically taken to be the time that a human bedside recorder, arriving at an unknown interval after the arrest, writes down the first intervention. Researchers acknowledged the problem of artificially short time intervals in 2005, but they did not propose a remedy.1 Since then, the problem of in-hospital resuscitation delays has received little to no attention in the professional literature.
 

Description of feature

To get better time data from unmonitored resuscitation attempts, it is necessary to use a “surrogate marker” – a stand-in or substitute event – for the time of arrest. This event should occur reliably for each code, and as near as possible to the actual time of arrest. The main early events in a code are starting basic CPR, paging the code, and moving the defibrillator (usually on a code cart) to the scene. Ideally these events occur almost simultaneously, but that is not consistently achieved.

There are significant problems with use of the first two events as surrogate markers: the time of starting CPR cannot be determined accurately, and paging the code is dependent on several intermediate steps that lead to inaccuracy. Furthermore, the times of both markers are recorded using clocks that are typically not synchronized with the clock used for recording the code (defibrillator clock or the human recorder’s timepiece). Reconciliation of these times with the code record, while not particularly difficult,8 is rarely if ever done.

Defibrillator Power On is recorded on the defibrillator timeline and thus does not need to be reconciled with the defibrillator clock, but it is not suitable as a surrogate marker because this time is highly variable: It often does not occur until the time that monitoring pads are placed. Moving the code cart to the scene, which must occur early in the code, is a much more valid surrogate marker, with the added benefit that it can be marked on the defibrillator timeline.

The undocumented feature described here provides that marker. This feature has been a part of the LIFEPAK 20/20e’s design since it was launched in 2002, but it has not been publicized until now and is not documented in the user manual.

Hospital defibrillators are connected to alternating-current (AC) power when not in use. When the defibrillator is moved to the scene of the code, it is obviously necessary to disconnect the defibrillator from the wall outlet, at which time “AC Power Loss” is recorded on the event record generated by the LIFEPAK 20/20e defibrillators. The defibrillator may be powered on up to 10 minutes later while retaining the AC Power Loss marker in the event record. This surrogate marker for the start time will be on the same timeline as other events recorded by the defibrillator, including times of first monitoring and shocks.

Once the event record is acquired, determining time intervals is accomplished by subtracting clock times (see example, Figure 1).

Figure 1. Sample event record

In the example, using AC Power Loss as the start time, time intervals from arrest to first monitoring (Initial Rhythm on the Event Record) and first shock were 3:12 (07:16:34 minus 07:13:22) and 8:42 (07:22:14 minus 07:13:22). Note that if Power On were used as the surrogate time of arrest in the example, the calculated intervals would be artificially shorter, by 2 min 12 sec.
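For illustration, the same subtraction can be scripted. Below is a minimal sketch in Python, assuming the clock times have been transcribed from an event record like the one in Figure 1; the variable names and record entries are illustrative, not the device’s exact output.

from datetime import datetime

def parse(clock: str) -> datetime:
    # Clock times as they appear on the event record, e.g. "07:13:22".
    return datetime.strptime(clock, "%H:%M:%S")

# Times transcribed from the sample event record (Figure 1).
ac_power_loss = parse("07:13:22")   # surrogate time of arrest
initial_rhythm = parse("07:16:34")  # first monitoring
first_shock = parse("07:22:14")     # first defibrillation

print("Arrest to first monitoring:", initial_rhythm - ac_power_loss)  # 0:03:12
print("Arrest to first shock:", first_shock - ac_power_loss)          # 0:08:42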

Using this undocumented feature, any facility using LIFEPAK 20/20e defibrillators can easily measure critical time intervals during resuscitation attempts with much greater accuracy, including times to first monitoring and first defibrillation. Each defibrillator stores code summaries sufficient for dozens of events, and accessing past data is simple. Analysis of the data can provide a much-improved measure of the facility’s speed of response as a baseline for QI.

If desired, the time-interval data thus obtained can also be integrated with the handwritten record. The usual handwritten code sheet records times only in whole minutes, but with one of the more accurate intervals from the defibrillator – to first monitoring or first defibrillation – an adjusted time of arrest can be added to any code record to get other intervals that better approximate real-world response times.9
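As a sketch of that adjustment, again in Python and with hypothetical code-sheet entries: the accurate arrest-to-first-shock interval from the defibrillator anchors an adjusted time of arrest on the handwritten whole-minute timeline, from which the other intervals follow.

from datetime import datetime, timedelta

def parse(clock: str) -> datetime:
    # Handwritten code sheets record times only in whole minutes.
    return datetime.strptime(clock, "%H:%M")

# Hypothetical whole-minute entries from a handwritten code sheet.
sheet = {
    "first shock": parse("07:22"),
    "first epinephrine": parse("07:25"),
    "airway placed": parse("07:30"),
}

# Accurate interval taken from the defibrillator event record.
arrest_to_first_shock = timedelta(minutes=8, seconds=42)

# Adjusted time of arrest anchored on the handwritten timeline.
adjusted_arrest = sheet["first shock"] - arrest_to_first_shock

for event, t in sheet.items():
    print(f"Arrest to {event}: {t - adjusted_arrest}")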

Figure 2. Approximate time intervals

Research prospects

The feature opens multiple avenues for future research. Acquiring data by this method should be simple for any facility using LIFEPAK 20/20e defibrillators as its standard devices. A retrospective study comparing time intervals from archived event records with those from the corresponding handwritten code records would show how inaccurate the commonly reported data are, providing a concrete example of the problem in the medical literature. The more accurate picture of time intervals can provide a much-needed yardstick for future research aimed at shortening response times.

The feature can facilitate aggregation of data across multiple facilities that use the LIFEPAK 20/20e as their standard defibrillator. Also, it is possible that other defibrillator manufacturers will duplicate this feature with their devices – it should produce valid data with any defibrillator – although there may be legal and technical obstacles to adopting it.

Combining data from multiple sites might lead to an important contribution to resuscitation research: a reasonably accurate overall survival curve for in-hospital tachyarrhythmic arrests. A commonly cited but crude guideline is that survival from tachyarrhythmic arrests decreases by 10%-15% per minute as defibrillation is delayed,10 but it seems unlikely that the relationship would be linear: Experience and the literature suggest that survival drops very quickly in the first few minutes, flattening out as elapsed time after arrest increases. Aggregating the much more accurate time-interval data from multiple facilities should produce a survival curve for in-hospital tachyarrhythmic arrests that comes much closer to reality.
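To make the contrast concrete, the two shapes can be compared with a few lines of Python; the starting survival and decay parameters below are invented for illustration, not fitted to any data.

import math

def survival_linear(minutes, s0=0.7, drop_per_min=0.10):
    # Crude guideline: survival falls by a fixed absolute amount per minute.
    return max(0.0, s0 - drop_per_min * minutes)

def survival_exponential(minutes, s0=0.7, rate=0.25):
    # Steep early decline that flattens as time after arrest elapses.
    return s0 * math.exp(-rate * minutes)

for t in range(0, 11, 2):
    print(f"{t:2d} min  linear: {survival_linear(t):.2f}  "
          f"exponential: {survival_exponential(t):.2f}")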
 

Conclusion

It is unknown whether this feature will be used to improve the accuracy of reported code response times. It greatly facilitates acquiring more accurate times, but the task has never been especially difficult – particularly when balanced with the importance of better time data for QI and research.8 One possible impediment may be institutional obstacles to publishing studies with accurate response times due to concerns about public relations or legal exposure: The more accurate times will almost certainly be longer than those generally reported.

As was stated almost 2 decades ago and remains true today, acquiring accurate time-interval data is “the key to future high-quality research.”2 It is also key to improving any hospital’s quality of code response. As described in this article, better time data can easily be acquired. It is time for this important problem to be recognized and remedied.
 

Mr. Stewart has worked as a hospital nurse in Seattle for many years, and has numerous publications to his credit related to resuscitation issues. You can contact him at jastewart325@gmail.com.

References

1. Kaye W et al. When minutes count – the fallacy of accurate time documentation during in-hospital resuscitation. Resuscitation. 2005;65(3):285-90.

2. The American Heart Association in collaboration with the International Liaison Committee on Resuscitation. Guidelines 2000 for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care, Part 4: the automated external defibrillator: key link in the chain of survival. Circulation. 2000;102(8 Suppl):I-60-76.

3. Chan PS et al. American Heart Association National Registry of Cardiopulmonary Resuscitation Investigators. Delayed time to defibrillation after in-hospital cardiac arrest. N Engl J Med. 2008 Jan 3;358(1):9-17. doi: 10.1056/NEJMoa0706467.

4. Hunt EA et al. Simulation of in-hospital pediatric medical emergencies and cardiopulmonary arrests: Highlighting the importance of the first 5 minutes. Pediatrics. 2008;121(1):e34-e43. doi: 10.1542/peds.2007-0029.

5. Reeson M et al. Defibrillator design and usability may be impeding timely defibrillation. Jt Comm J Qual Patient Saf. 2018 Sep;44(9):536-544. doi: 10.1016/j.jcjq.2018.01.005.

6. Hunt EA et al. American Heart Association’s Get With The Guidelines – Resuscitation Investigators. Association between time to defibrillation and survival in pediatric in-hospital cardiac arrest with a first documented shockable rhythm. JAMA Netw Open. 2018;1(5):e182643. doi: 10.1001/jamanetworkopen.2018.2643.

7. Cummins RO et al. Recommended guidelines for reviewing, reporting, and conducting research on in-hospital resuscitation: the in-hospital “Utstein” style. Circulation. 1997;95:2213-39.

8. Stewart JA. Determining accurate call-to-shock times is easy. Resuscitation. 2005 Oct;67(1):150-1.

9. In infrequent cases, the code cart and defibrillator may be moved to a deteriorating patient before a full arrest. Such occurrences should be analyzed separately or excluded from analysis.

10. Valenzuela TD et al. Estimating effectiveness of cardiac arrest interventions: a logistic regression survival model. Circulation. 1997;96(10):3308-13. doi: 10.1161/01.cir.96.10.3308.

Publications
Topics
Sections

Benefits of an undocumented defibrillator feature

Benefits of an undocumented defibrillator feature

Research and quality improvement (QI) related to in-hospital cardiopulmonary resuscitation attempts (“codes” from here forward) are hampered significantly by the poor quality of data on time intervals from arrest onset to clinical interventions.1

John A. Stewart, a hospital nurse in Seattle
John A. Stewart

In 2000, the American Heart Association’s (AHA) Emergency Cardiac Care Guidelines said that current data were inaccurate and that greater accuracy was “the key to future high-quality research”2 – but since then, the general situation has not improved: Time intervals reported by the national AHA-supported registry Get With the Guidelines–Resuscitation (GWTG-R, 200+ hospitals enrolled) include a figure from all hospitals for times to first defibrillation of 1 minute median and 0 minutes first interquartile.3 Such numbers are typical – when they are tracked at all – but they strain credulity, and prima facie evidence is available at most clinical simulation centers simply by timing simulated defibrillation attempts under realistic conditions, as in “mock codes.”4,5

Taking artificially short time-interval data from GWTG-R or other sources at face value can hide serious delays in response to in-hospital arrests. It can also lead to flawed studies and highly questionable conclusions.6

The key to accuracy of critical time intervals – the intervals from arrest to key interventions – is an accurate time of arrest.7 Codes are typically recorded in handwritten form, though they may later be transcribed or scanned into electronic records. The “start” of the code for unmonitored arrests and most monitored arrests is typically taken to be the time that a human bedside recorder, arriving at an unknown interval after the arrest, writes down the first intervention. Researchers acknowledged the problem of artificially short time intervals in 2005, but they did not propose a remedy.1 Since then, the problem of in-hospital resuscitation delays has received little to no attention in the professional literature.
 

Description of feature

To get better time data from unmonitored resuscitation attempts, it is necessary to use a “surrogate marker” – a stand-in or substitute event – for the time of arrest. This event should occur reliably for each code, and as near as possible to the actual time of arrest. The main early events in a code are starting basic CPR, paging the code, and moving the defibrillator (usually on a code cart) to the scene. Ideally these events occur almost simultaneously, but that is not consistently achieved.

There are significant problems with use of the first two events as surrogate markers: the time of starting CPR cannot be determined accurately, and paging the code is dependent on several intermediate steps that lead to inaccuracy. Furthermore, the times of both markers are recorded using clocks that are typically not synchronized with the clock used for recording the code (defibrillator clock or the human recorder’s timepiece). Reconciliation of these times with the code record, while not particularly difficult,8 is rarely if ever done.

Defibrillator Power On is recorded on the defibrillator timeline and thus does not need to be reconciled with the defibrillator clock, but it is not suitable as a surrogate marker because this time is highly variable: It often does not occur until the time that monitoring pads are placed. Moving the code cart to the scene, which must occur early in the code, is a much more valid surrogate marker, with the added benefit that it can be marked on the defibrillator timeline.

The undocumented feature described here provides that marker. This feature has been a part of the LIFEPAK 20/20e’s design since it was launched in 2002, but it has not been publicized until now and is not documented in the user manual.

Hospital defibrillators are connected to alternating-current (AC) power when not in use. When the defibrillator is moved to the scene of the code, it is obviously necessary to disconnect the defibrillator from the wall outlet, at which time “AC Power Loss” is recorded on the event record generated by the LIFEPAK 20/20e defibrillators. The defibrillator may be powered on up to 10 minutes later while retaining the AC Power Loss marker in the event record. This surrogate marker for the start time will be on the same timeline as other events recorded by the defibrillator, including times of first monitoring and shocks.

Once the event record is acquired, determining time intervals is accomplished by subtracting clock times (see example, Figure 1).

Figure 1. Sample event record

In the example, using AC Power Loss as the start time, time intervals from arrest to first monitoring (Initial Rhythm on the Event Record) and first shock were 3:12 (07:16:34 minus 07:13:22) and 8:42 (07:22:14 minus 07:13:22). Note that if Power On were used as the surrogate time of arrest in the example, the calculated intervals would be artificially shorter, by 2 min 12 sec.

Using this undocumented feature, any facility using LIFEPAK 20/20e defibrillators can easily measure critical time intervals during resuscitation attempts with much greater accuracy, including times to first monitoring and first defibrillation. Each defibrillator stores code summaries sufficient for dozens of events and accessing past data is simple. Analysis of the data can provide a much-improved measure of the facility’s speed of response as a baseline for QI.

If desired, the time-interval data thus obtained can also be integrated with the handwritten record. The usual handwritten code sheet records times only in whole minutes, but with one of the more accurate intervals from the defibrillator – to first monitoring or first defibrillation – an adjusted time of arrest can be added to any code record to get other intervals that better approximate real-world response times.9

Figure 2. Approximate time intervals

 

 

 

Research prospects

The feature opens multiple avenues for future research. Acquiring data by this method should be simple for any facility using LIFEPAK 20/20e defibrillators as its standard devices. Matching the existing handwritten code records with the time intervals obtained using this surrogate time marker will show how inaccurate the commonly reported data are. This can be done with a retrospective study comparing the time intervals from the archived event records with those from the handwritten records, to provide an example of the inaccuracy of data reported in the medical literature. The more accurate picture of time intervals can provide a much-needed yardstick for future research aimed at shortening response times.

The feature can facilitate aggregation of data across multiple facilities that use the LIFEPAK 20/20e as their standard defibrillator. Also, it is possible that other defibrillator manufacturers will duplicate this feature with their devices – it should produce valid data with any defibrillator – although there may be legal and technical obstacles to adopting it.

Combining data from multiple sites might lead to an important contribution to resuscitation research: a reasonably accurate overall survival curve for in-hospital tachyarrhythmic arrests. A commonly cited but crude guideline is that survival from tachyarrhythmic arrests decreases by 10%-15% per minute as defibrillation is delayed,10 but it seems unlikely that the relationship would be linear: Experience and the literature suggest that survival drops very quickly in the first few minutes, flattening out as elapsed time after arrest increases. Aggregating the much more accurate time-interval data from multiple facilities should produce a survival curve for in-hospital tachyarrhythmic arrests that comes much closer to reality.
 

Conclusion

It is unknown whether this feature will be used to improve the accuracy of reported code response times. It greatly facilitates acquiring more accurate times, but the task has never been especially difficult – particularly when balanced with the importance of better time data for QI and research.8 One possible impediment may be institutional obstacles to publishing studies with accurate response times due to concerns about public relations or legal exposure: The more accurate times will almost certainly be longer than those generally reported.

As was stated almost 2 decades ago and remains true today, acquiring accurate time-interval data is “the key to future high-quality research.”2 It is also key to improving any hospital’s quality of code response. As described in this article, better time data can easily be acquired. It is time for this important problem to be recognized and remedied.
 

Mr. Stewart has worked as a hospital nurse in Seattle for many years, and has numerous publications to his credit related to resuscitation issues. You can contact him at jastewart325@gmail.com.

References

1. Kaye W et al. When minutes count – the fallacy of accurate time documentation during in-hospital resuscitation. Resuscitation. 2005;65(3):285-90.

2. The American Heart Association in collaboration with the International Liaison Committee on Resuscitation. Guidelines 2000 for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care, Part 4: the automated external defibrillator: key link in the chain of survival. Circulation. 2000;102(8 Suppl):I-60-76.

3. Chan PS et al. American Heart Association National Registry of Cardiopulmonary Resuscitation Investigators. Delayed time to defibrillation after in-hospital cardiac arrest. N Engl J Med. 2008 Jan 3;358(1):9-17. doi: 10.1056/NEJMoa0706467.

4. Hunt EA et al. Simulation of in-hospital pediatric medical emergencies and cardiopulmonary arrests: Highlighting the importance of the first 5 minutes. Pediatrics. 2008;121(1):e34-e43. doi: 10.1542/peds.2007-0029.

5. Reeson M et al. Defibrillator design and usability may be impeding timely defibrillation. Comm J Qual Patient Saf. 2018 Sep;44(9):536-544. doi: 10.1016/j.jcjq.2018.01.005.

6. Hunt EA et al. American Heart Association’s Get With The Guidelines – Resuscitation Investigators. Association between time to defibrillation and survival in pediatric in-hospital cardiac arrest with a first documented shockable rhythm JAMA Netw Open. 2018;1(5):e182643. doi: 10.1001/jamanetworkopen.2018.2643.

7. Cummins RO et al. Recommended guidelines for reviewing, reporting, and conducting research on in-hospital resuscitation: the in-hospital “Utstein” style. Circulation. 1997;95:2213-39.

8. Stewart JA. Determining accurate call-to-shock times is easy. Resuscitation. 2005 Oct;67(1):150-1.

9. In infrequent cases, the code cart and defibrillator may be moved to a deteriorating patient before a full arrest. Such occurrences should be analyzed separately or excluded from analysis.

10. Valenzuela TD et al. Estimating effectiveness of cardiac arrest interventions: a logistic regression survival model. Circulation. 1997;96(10):3308-13. doi: 10.1161/01.cir.96.10.3308.

Research and quality improvement (QI) related to in-hospital cardiopulmonary resuscitation attempts (“codes” from here forward) are hampered significantly by the poor quality of data on time intervals from arrest onset to clinical interventions.1

John A. Stewart, a hospital nurse in Seattle
John A. Stewart

In 2000, the American Heart Association’s (AHA) Emergency Cardiac Care Guidelines said that current data were inaccurate and that greater accuracy was “the key to future high-quality research”2 – but since then, the general situation has not improved: Time intervals reported by the national AHA-supported registry Get With the Guidelines–Resuscitation (GWTG-R, 200+ hospitals enrolled) include a figure from all hospitals for times to first defibrillation of 1 minute median and 0 minutes first interquartile.3 Such numbers are typical – when they are tracked at all – but they strain credulity, and prima facie evidence is available at most clinical simulation centers simply by timing simulated defibrillation attempts under realistic conditions, as in “mock codes.”4,5

Taking artificially short time-interval data from GWTG-R or other sources at face value can hide serious delays in response to in-hospital arrests. It can also lead to flawed studies and highly questionable conclusions.6

The key to accuracy of critical time intervals – the intervals from arrest to key interventions – is an accurate time of arrest.7 Codes are typically recorded in handwritten form, though they may later be transcribed or scanned into electronic records. The “start” of the code for unmonitored arrests and most monitored arrests is typically taken to be the time that a human bedside recorder, arriving at an unknown interval after the arrest, writes down the first intervention. Researchers acknowledged the problem of artificially short time intervals in 2005, but they did not propose a remedy.1 Since then, the problem of in-hospital resuscitation delays has received little to no attention in the professional literature.
 

Description of feature

To get better time data from unmonitored resuscitation attempts, it is necessary to use a “surrogate marker” – a stand-in or substitute event – for the time of arrest. This event should occur reliably for each code, and as near as possible to the actual time of arrest. The main early events in a code are starting basic CPR, paging the code, and moving the defibrillator (usually on a code cart) to the scene. Ideally these events occur almost simultaneously, but that is not consistently achieved.

There are significant problems with use of the first two events as surrogate markers: the time of starting CPR cannot be determined accurately, and paging the code is dependent on several intermediate steps that lead to inaccuracy. Furthermore, the times of both markers are recorded using clocks that are typically not synchronized with the clock used for recording the code (defibrillator clock or the human recorder’s timepiece). Reconciliation of these times with the code record, while not particularly difficult,8 is rarely if ever done.

Defibrillator Power On is recorded on the defibrillator timeline and thus does not need to be reconciled with the defibrillator clock, but it is not suitable as a surrogate marker because this time is highly variable: It often does not occur until the time that monitoring pads are placed. Moving the code cart to the scene, which must occur early in the code, is a much more valid surrogate marker, with the added benefit that it can be marked on the defibrillator timeline.

The undocumented feature described here provides that marker. This feature has been a part of the LIFEPAK 20/20e’s design since it was launched in 2002, but it has not been publicized until now and is not documented in the user manual.

Hospital defibrillators are connected to alternating-current (AC) power when not in use. When the defibrillator is moved to the scene of the code, it is obviously necessary to disconnect the defibrillator from the wall outlet, at which time “AC Power Loss” is recorded on the event record generated by the LIFEPAK 20/20e defibrillators. The defibrillator may be powered on up to 10 minutes later while retaining the AC Power Loss marker in the event record. This surrogate marker for the start time will be on the same timeline as other events recorded by the defibrillator, including times of first monitoring and shocks.

Once the event record is acquired, determining time intervals is accomplished by subtracting clock times (see example, Figure 1).

Figure 1. Sample event record

In the example, using AC Power Loss as the start time, time intervals from arrest to first monitoring (Initial Rhythm on the Event Record) and first shock were 3:12 (07:16:34 minus 07:13:22) and 8:42 (07:22:14 minus 07:13:22). Note that if Power On were used as the surrogate time of arrest in the example, the calculated intervals would be artificially shorter, by 2 min 12 sec.

Using this undocumented feature, any facility using LIFEPAK 20/20e defibrillators can easily measure critical time intervals during resuscitation attempts with much greater accuracy, including times to first monitoring and first defibrillation. Each defibrillator stores code summaries sufficient for dozens of events and accessing past data is simple. Analysis of the data can provide a much-improved measure of the facility’s speed of response as a baseline for QI.

If desired, the time-interval data thus obtained can also be integrated with the handwritten record. The usual handwritten code sheet records times only in whole minutes, but with one of the more accurate intervals from the defibrillator – to first monitoring or first defibrillation – an adjusted time of arrest can be added to any code record to get other intervals that better approximate real-world response times.9

Figure 2. Approximate time intervals

 

 

 

Research prospects

The feature opens multiple avenues for future research. Acquiring data by this method should be simple for any facility using LIFEPAK 20/20e defibrillators as its standard devices. Matching the existing handwritten code records with the time intervals obtained using this surrogate time marker will show how inaccurate the commonly reported data are. This can be done with a retrospective study comparing the time intervals from the archived event records with those from the handwritten records, to provide an example of the inaccuracy of data reported in the medical literature. The more accurate picture of time intervals can provide a much-needed yardstick for future research aimed at shortening response times.

The feature can facilitate aggregation of data across multiple facilities that use the LIFEPAK 20/20e as their standard defibrillator. Also, it is possible that other defibrillator manufacturers will duplicate this feature with their devices – it should produce valid data with any defibrillator – although there may be legal and technical obstacles to adopting it.

Combining data from multiple sites might lead to an important contribution to resuscitation research: a reasonably accurate overall survival curve for in-hospital tachyarrhythmic arrests. A commonly cited but crude guideline is that survival from tachyarrhythmic arrests decreases by 10%-15% per minute as defibrillation is delayed,10 but it seems unlikely that the relationship would be linear: Experience and the literature suggest that survival drops very quickly in the first few minutes, flattening out as elapsed time after arrest increases. Aggregating the much more accurate time-interval data from multiple facilities should produce a survival curve for in-hospital tachyarrhythmic arrests that comes much closer to reality.
 

Conclusion

It is unknown whether this feature will be used to improve the accuracy of reported code response times. It greatly facilitates acquiring more accurate times, but the task has never been especially difficult – particularly when balanced with the importance of better time data for QI and research.8 One possible impediment may be institutional obstacles to publishing studies with accurate response times due to concerns about public relations or legal exposure: The more accurate times will almost certainly be longer than those generally reported.

As was stated almost 2 decades ago and remains true today, acquiring accurate time-interval data is “the key to future high-quality research.”2 It is also key to improving any hospital’s quality of code response. As described in this article, better time data can easily be acquired. It is time for this important problem to be recognized and remedied.
 

Mr. Stewart has worked as a hospital nurse in Seattle for many years, and has numerous publications to his credit related to resuscitation issues. You can contact him at jastewart325@gmail.com.

References

1. Kaye W et al. When minutes count – the fallacy of accurate time documentation during in-hospital resuscitation. Resuscitation. 2005;65(3):285-90.

2. The American Heart Association in collaboration with the International Liaison Committee on Resuscitation. Guidelines 2000 for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care, Part 4: the automated external defibrillator: key link in the chain of survival. Circulation. 2000;102(8 Suppl):I-60-76.

3. Chan PS et al. American Heart Association National Registry of Cardiopulmonary Resuscitation Investigators. Delayed time to defibrillation after in-hospital cardiac arrest. N Engl J Med. 2008 Jan 3;358(1):9-17. doi: 10.1056/NEJMoa0706467.

4. Hunt EA et al. Simulation of in-hospital pediatric medical emergencies and cardiopulmonary arrests: Highlighting the importance of the first 5 minutes. Pediatrics. 2008;121(1):e34-e43. doi: 10.1542/peds.2007-0029.

5. Reeson M et al. Defibrillator design and usability may be impeding timely defibrillation. Jt Comm J Qual Patient Saf. 2018 Sep;44(9):536-544. doi: 10.1016/j.jcjq.2018.01.005.

6. Hunt EA et al. American Heart Association’s Get With The Guidelines – Resuscitation Investigators. Association between time to defibrillation and survival in pediatric in-hospital cardiac arrest with a first documented shockable rhythm. JAMA Netw Open. 2018;1(5):e182643. doi: 10.1001/jamanetworkopen.2018.2643.

7. Cummins RO et al. Recommended guidelines for reviewing, reporting, and conducting research on in-hospital resuscitation: the in-hospital “Utstein” style. Circulation. 1997;95:2213-39.

8. Stewart JA. Determining accurate call-to-shock times is easy. Resuscitation. 2005 Oct;67(1):150-1.

9. In infrequent cases, the code cart and defibrillator may be moved to a deteriorating patient before a full arrest. Such occurrences should be analyzed separately or excluded from analysis.

10. Valenzuela TD et al. Estimating effectiveness of cardiac arrest interventions: a logistic regression survival model. Circulation. 1997;96(10):3308-13. doi: 10.1161/01.cir.96.10.3308.


Hospitalists finding their role in hospital quality ratings


CMS considers how to assess socioeconomic factors

Since 2005, the government website Hospital Compare has publicly reported quality data on hospitals, with periodic updates of their performance, including specific measures of quality. But how accurately do the ratings reflect a hospital’s actual quality of care, and what do the ratings mean for hospitalists?


Hospital Compare provides searchable, comparable information to consumers on reported quality of care data submitted by more than 4,000 Medicare-certified hospitals, along with Veterans Affairs and military health system hospitals. It is designed to allow consumers to select hospitals and directly compare their mortality, complication, infection, and other performance measures on conditions such as heart attacks, heart failure, pneumonia, and surgical outcomes.

The Overall Hospital Quality Star Ratings, which began in 2016, combine data from more than 50 quality measures publicly reported on Hospital Compare into an overall rating of one to five stars for each hospital. These ratings are designed to enhance and supplement existing quality measures with a more “customer-centric” measure that makes it easier for consumers to act on the information – particularly consumers who feel overwhelmed by the volume of data on the Hospital Compare website and by the complexity of some of the measures.

A call for public comment posted in spring 2019 by the Centers for Medicare & Medicaid Services (CMS) on possible methodological changes to the Overall Hospital Quality Star Ratings drew more than 800 comments from 150 different organizations. This past summer, CMS decided to delay posting the refreshed Star Ratings in its Hospital Compare data preview reports for July 2019; the agency says it intends to release the updated information in early 2020. Meanwhile, the reported data – particularly the overall star ratings – continue to generate controversy for the hospital field.
 

Hospitalists’ critical role

Hospitalists are not rated individually on Hospital Compare, but they play important roles in the quality of care their hospital provides – and thus ultimately the hospital’s publicly reported rankings. Hospitalists typically are not specifically incentivized or penalized for their hospital’s performance, but this does happen in some cases.

“Hospital administrators absolutely take note of their hospital’s star ratings. These are the people hospitalists work for, and this is definitely top of their minds,” said Kate Goodrich, MD, MHS, director of the Center for Clinical Standards and Quality at CMS. “I recently spoke at an SHM annual conference and every question I was asked was about hospital ratings and the star system,” noted Dr. Goodrich, herself a practicing hospitalist at George Washington University Medical Center in Washington.

The government’s aim for Hospital Compare is to give consumers easy-to-understand indicators of the quality of care provided by hospitals, especially where they might have a choice of hospitals, such as for an elective surgery. Making that information public is also viewed as a motivator to help drive improvements in hospital performance, Dr. Goodrich said.

“In terms of what we measure, we try to make sure it’s important to patients and to clinicians. We have frontline practicing physicians, patients, and families advising us, along with methodologists and PhD researchers. These stakeholders tell us what is important to measure and why,” she said. “Hospitals and all health providers need more actionable and timely data to improve their quality of care, especially if they want to participate in accountable care organizations. And we need to make the information easy to understand.”

Dr. Goodrich sees two main themes in the public response to CMS’s request for comment. “People say the methodology we use to calculate star ratings is frustrating for hospitals, which have found it difficult to model their performance, predict their star ratings, or explain the discrepancies.” Hospitals taking care of sicker patients with lower socioeconomic status also say the ratings unfairly penalize them. “I work in a large urban hospital, and I understand this. They say we don’t take that sufficiently into account in the ratings,” she said.

“While our modeling shows that current ratings highly correlate with performance on individual measures, we have asked for comment on if and how we could adjust for socioeconomic factors. We are actively considering how to make changes to address these concerns,” Dr. Goodrich said.

In August 2019, CMS acknowledged that it plans to change the methodology used to calculate hospital star ratings in early 2021, but has not yet revealed specific details about the nature of the changes. The agency intends to propose the changes through the public rule-making process sometime in 2020.
 

 

 

Continuing controversy

The American Hospital Association – which has had strong concerns about the methodology and the usefulness of hospital star ratings – is pushing back on some of the changes to the system being considered by CMS. In its submitted comments, AHA supported only three of the 14 potential star ratings methodology changes being considered. AHA and the Association of American Medical Colleges, among others, have urged taking down the star ratings until major changes can be made.

“When the star ratings were first implemented, a lot of challenges became apparent right away,” said Akin Demehin, MPH, AHA’s director of quality policy. “We began to see that those hospitals that treat more complicated patients and poorer patients tended to perform more poorly on the ratings. So there was something wrong with the methodology. Then, starting in 2018, hospitals began seeing real shifts in their performance ratings when the underlying data hadn’t really changed.”

CMS uses a statistical approach called latent variable modeling. Its underlying assumption is that you can say something about a hospital’s underlying quality based on the data you already have, Mr. Demehin said, but noted “that can be a questionable assumption.” He also emphasized the need for ratings that compare hospitals that are similar in size and model to each other.


Suparna Dutta, MD, division chief, hospital medicine, Rush University, Chicago, said analyses done at Rush showed that the statistical model CMS used in calculating the star ratings dynamically changed the weighting of certain measures in every release. “That meant one specific performance measure could play an outsized role in determining a final rating,” she said. In particular, she said, the methodology inadvertently penalized large hospitals, academic medical centers, and institutions that provide heroic care.
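
To make the dynamic-weighting concern concrete, here is a small, purely illustrative simulation: a one-factor latent variable model is refit on two hypothetical data releases, and the estimated loadings – which act as implicit measure weights – shift even though the methodology is unchanged. The data, loadings, and use of scikit-learn’s FactorAnalysis are assumptions for illustration; this does not reproduce CMS’s actual model.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis  # assumes scikit-learn is installed

rng = np.random.default_rng(0)

def simulate_release(n_hospitals=500, shift=0.0):
    """Simulate standardized scores on three quality measures driven by one
    latent 'quality' factor plus measure-specific noise; `shift` perturbs how
    strongly the third measure tracks the factor between data releases."""
    quality = rng.normal(size=n_hospitals)
    loadings = np.array([0.8, 0.6, 0.4 + shift])
    noise = rng.normal(scale=0.5, size=(n_hospitals, 3))
    return quality[:, None] * loadings + noise

for release, shift in [("release 1", 0.0), ("release 2", 0.3)]:
    fa = FactorAnalysis(n_components=1).fit(simulate_release(shift=shift))
    # Absolute values shown because the sign of a latent factor is arbitrary.
    print(release, "estimated loadings (implicit weights):",
          np.abs(fa.components_).ravel().round(2))
```

Because the loadings are re-estimated from the data in every release, a measure’s effective weight can move even when a hospital’s own performance has not changed – one way to understand the rating shifts hospitals reported in 2018.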

“We fundamentally believe that consumers should have meaningful information about hospital quality,” said Nancy Foster, AHA’s vice president for quality and patient safety policy. “We understand the complexities of Hospital Compare and the challenges of getting simple information for consumers. To its credit, CMS is thinking about how to do that, and we support them in that effort.”
 

Getting a handle on quality

Hospitalists are responsible for ensuring that their hospitals excel in the care of patients, said Julius Yang, MD, hospitalist and director of quality at Beth Israel Deaconess Medical Center in Boston. That also requires keeping up with the primary public ways these issues are addressed: quality data reporting and reimbursement policy. “That should be part of our core competencies as hospitalists.”

Some of the measures on Hospital Compare don’t overlap much with the work of hospitalists, he noted. But for others, such as for pneumonia, COPD, and care of patients with stroke, or for mortality and 30-day readmissions rates, “we are involved, even if not directly, and certainly responsible for contributing to the outcomes and the opportunity to add value,” he said.

“When it comes to 30-day readmission rates, do we really understand the risk factors for readmissions and the barriers to patients remaining in the community after their hospital stay? Are our patients stable enough to be discharged, and have we worked with the care coordination team to make sure they have the resources they need? And have we communicated adequately with the outpatient doctor? All of these things are within the wheelhouse of the hospitalist,” Dr. Yang said. “Let’s accept that the readmissions rate, for example, is not a perfect measure of quality. But as an imperfect measure, it can point us in the right direction.”


Jose Figueroa, MD, MPH, hospitalist and assistant professor at Harvard Medical School, has been studying the impact of hospital penalties, such as the Hospital Readmissions Reduction Program, on health equity for his health system. In general, hospitalists play an important role in shaping processes of care and serving on quality-oriented committees across multiple realms of the hospital, he said.

“What’s hard from the hospitalist’s perspective is that there don’t seem to be simple solutions to move the dial on many of these measures,” Dr. Figueroa said. “If the hospital is at three stars, can we say, okay, if we do X, Y, and Z, then our hospital will move from three to five stars? Some of these measures are so broad and not in our purview. Which ones apply to me as a hospitalist and my care processes?”

Dr. Dutta sits on the SHM Policy Committee, which has been working to bring these issues to the attention of frontline hospitalists. “Hospitalists are always going to be aligned with their hospital’s priorities. We’re in it to provide high-quality care, but there’s no magic way to do that,” she said.

Hospital Compare measures sometimes end up in hospitalist incentives plans – for example, the readmission penalty rates – even though that is a fairly arbitrary measure and hard to pin to one doctor, Dr. Dutta explained. “If you look at the evidence regarding these metrics, there are not a lot of data to show that the metrics lead to what we really want, which is better care for patients.”

A recent study in the British Medical Journal, for example, examined the association between penalties under the Hospital Acquired Condition Reduction Program and clinical outcomes.1 The researchers concluded that the penalties were not associated with significant changes in outcomes and did not appear to drive meaningful clinical improvement.
 

 

 

How can hospitalists engage with Compare?

Dr. Goodrich refers hospitalists seeking quality resources to their local quality improvement organizations (QIOs) and to Hospital Improvement Innovation Networks at the regional, state, national, or hospital system level.

One helpful thing that any group of hospitalists could do, added Dr. Figueroa, is to examine the measures closely and determine which ones they think they can influence. “Then look for the hospitals that resemble ours and care for similar patients, based on the demographics. We can then say: ‘Okay, that’s a fair comparison. This can be a benchmark with our peers,’” he said. Then it’s important to ask how your hospital is doing over time on these measures, and use that to prioritize.

“You also have to appreciate that these are broad quality measures, and to impact them you have to do broad quality improvement efforts. Another piece of this is getting good at collecting and analyzing data internally in a timely fashion. You don’t want to wait 2-3 years to find out in Hospital Compare that you’re not performing well. You care about the care you provided today, not 2 or 3 years ago. Without this internal check, it’s impossible to know what to invest in – and to see if things you do are having an impact,” Dr. Figueroa said.
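
As a concrete illustration of that internal check, the sketch below computes a trailing readmission rate from a hypothetical discharge log, so a group can track its trend well before the same period surfaces on Hospital Compare. The data and field layout are invented; a real implementation would draw on the hospital’s own EHR extracts.

```python
from datetime import date, timedelta

# Hypothetical internal discharge log: (discharge date, readmitted within 30 days?)
discharges = [
    (date(2019, 9, 1), False), (date(2019, 9, 3), True),
    (date(2019, 9, 10), False), (date(2019, 9, 18), True),
    (date(2019, 10, 2), False), (date(2019, 10, 9), False),
]

def trailing_readmission_rate(log, as_of, window_days=90):
    """30-day readmission rate over a trailing window, for internal tracking."""
    start = as_of - timedelta(days=window_days)
    recent = [readmitted for day, readmitted in log if start <= day <= as_of]
    return sum(recent) / len(recent) if recent else None

rate = trailing_readmission_rate(discharges, date(2019, 10, 15))
print(f"Trailing 90-day 30-day-readmission rate: {rate:.0%}")
```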

“As physician leaders, this is a real opportunity for us to trigger a conversation with our hospital’s administration around what we went into medicine for in the first place – to improve our patients’ care,” said Dr. Goodrich. She said Hospital Compare is one tool for sparking systemic quality improvement across the hospital – which is an important part of the hospitalist’s job. “If you want to be a bigger star within your hospital, show that level of commitment. It likely would be welcomed by your hospital.”
 

Reference

1. Sankaran R et al. Changes in hospital safety following penalties in the US Hospital Acquired Condition Reduction Program: retrospective cohort study. BMJ. 2019 Jul 3. doi: 10.1136/bmj.l4109.


Considering the value of productivity bonuses


Connect high-value care with reimbursement

 

Physician payment models that include productivity bonuses are widespread, says Reshma Gupta, MD, MSHPM.


“These payment models are thought to affect clinician behavior, with productivity bonuses incentivizing clinicians to do more. While new policies aim to reduce total costs of care, little is known about the association between physician payment models and the culture of delivering high-value care,” said Dr. Gupta, the medical director for quality improvement at UCLA Health in Los Angeles.

To find out whether hospitalist reimbursement models are associated with high-value culture in university, community, and safety-net hospitals, internal medicine hospitalists from 12 hospitals across California completed a cross-sectional survey assessing their perceptions of high-value care culture within their institutions. Dr. Gupta and colleagues analyzed the results.

The study found that nearly 30% of the hospitalists sampled reported payment with productivity bonuses, while only 5% reported quality- or value-based bonuses, Dr. Gupta said. “Hospitalists who reported payment with productivity bonuses were more likely to report lower high-value care culture within their programs.”

Hospitalist leaders interested in improving high-value care culture can use the High Value Care Culture Survey (http://www.highvaluecareculturesurvey.com) to quickly assess the culture within their programs, diagnose areas of opportunity, and target improvement efforts.

“They can test new physician payment models within their programs and evaluate their high-value care culture to identify areas of opportunity for improvement,” Dr. Gupta said.
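
As a rough illustration of how such survey results might be turned into improvement targets, the sketch below aggregates hypothetical 1-5 ratings into domain scores and ranks the weakest domains first. The domains, items, and scoring here are invented stand-ins, not the actual High Value Care Culture Survey instrument or its published scoring.

```python
from statistics import mean

# Hypothetical responses: each hospitalist rates items on a 1-5 agreement
# scale, grouped into illustrative domains (NOT the real HVCCS item set).
responses = {
    "leadership and health system messaging": [4, 3, 5, 4, 2],
    "data transparency and access":           [2, 3, 2, 1, 3],
    "comfort with cost conversations":        [4, 4, 3, 5, 4],
}

# Rescale each domain mean from the 1-5 scale to 0-100 and list the
# weakest domains first, suggesting where to target improvement efforts.
domain_scores = {name: (mean(vals) - 1) / 4 * 100 for name, vals in responses.items()}
for name, score in sorted(domain_scores.items(), key=lambda kv: kv[1]):
    print(f"{score:5.1f}  {name}")
```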

Reference

1. Gupta R et al. Association between hospitalist productivity payments and high-value care culture. J Hosp Med. 2019;14(1):16-21.


Clinician burnout can impact quality of care

Consider the limitations of burnout studies

 

Burnout among health care professionals has been associated with lower quality of care, but the effect may be smaller than it seems, based on data from a meta-analysis of more than 200,000 clinicians.

Previous studies have reported associations between burnout and lower quality of care, but a standardized approach to analyze bias in the studies is lacking, wrote Daniel S. Tawfik, MD, of Stanford (Calif.) University and colleagues.

In a study published in the Annals of Internal Medicine, the researchers identified 123 publications from 1994 to 2019 with 142 study populations that included 241,553 health care providers.

Emotional exhaustion was the primary predictor for lower quality of care in 75 study populations, and overall burnout and depersonalization were the primary predictors for 56 and 11 study populations, respectively.

In an analysis of 114 unique burnout-quality combinations, 58 showed effects of burnout related to poor-quality care, 6 showed burnout related to high-quality care, and 50 showed no significant effect. Approximately one-third (33%) of the burnout-quality combinations were reported at least three times. In a review of the 46 burnout-quality combinations with primary effect sizes, 24 showed a significant effect of burnout on poor quality of care, 1 showed a significant effect of burnout on high quality of care, and 21 showed no significant effect.

The researchers also tested study bias using the Ioannidis test and found “an excess of observed versus predicted statistically significant studies (73% observed vs. 62%).”
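
For readers curious about the arithmetic behind an excess-significance check, the sketch below runs a one-sided binomial test on invented counts that echo the percentages quoted above. The actual Ioannidis test aggregates expected statistical power across individual studies, which this simplified stand-in does not do.

```python
from scipy import stats  # assumes SciPy >= 1.7 for stats.binomtest

# Hypothetical counts echoing the quoted percentages: of 100 studies,
# 73 report significant results, while average power predicts about 62.
n_studies = 100
observed_significant = 73
expected_rate = 0.62

# One-sided binomial test: are significant findings more common than
# the studies' power would predict?
result = stats.binomtest(observed_significant, n_studies, expected_rate,
                         alternative="greater")
print(f"P value for excess significance: {result.pvalue:.3f}")
```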

The findings were limited by several factors, including the use of many cross-sectional, observational studies that could not show causality, the researchers noted. However, the results suggest several implications for future research, including the need to consider exaggerated effects and to reduce bias.

“Although the effect sizes in the published literature are modestly strong, our finding of excess significance implies that the true magnitude may be smaller than reported, and the studies that attempted to lower the risk of bias demonstrate fewer significant associations than the full evidence base,” the researchers noted.

“Whether curtailing burnout improves quality of care, or whether improving quality of care reduces burnout, is not yet known, and adequately powered and designed randomized trials will be indispensable in answering these questions,” they concluded.

The study was supported by the Stanford Maternal and Child Health Research Institute. Dr. Tawfik disclosed grants from Stanford Maternal and Child Health Research Institute during the study period.

SOURCE: Tawfik DS et al. Ann Intern Med. 2019 Oct 8. doi: 10.7326/M19-1152.


The current meta-analysis is consistent with previous research, but offers nothing new on the relationship between clinician burnout and quality of care, wrote Carolyn S. Dewa, MPH, PhD, Karen Nieuwenhuijsen, PhD, and Jeffrey S. Hoch, PhD, in an accompanying editorial.

Some of the concerns they expressed included variability in the methods used to measure provider burnout, as well as variability in measuring and defining medical error. They suggested that the researchers could have conducted a subgroup analysis based on error definition. “Such analyses might shed light on the types of errors associated with burnout and suggest directions for the design of robust psychometric studies about the error metrics,” they wrote.

The editorialists also expressed concerns about the heterogeneity of the studies included in the review and the potential for confounding. Finally, they noted that the use of observational studies in a meta-analysis can be challenging because “the assessment of observational studies is not straightforward.” They added that knowing the limitations of the studies is important in allowing readers to be confident in the estimates from any meta-analyses.

“Considering the limitations of the available literature, prior reviews, and Tawfik and colleagues’ current meta-analysis, we conclude that higher burnout is associated with lower quality, but we are left without clear answers about the magnitude or clinical significance of the relationship,” they wrote (Ann Intern Med. 2019 Oct 8. doi: 10.7326/M19-2760).

Dr. Dewa and Dr. Hoch are affiliated with the University of California, Davis. Dr. Nieuwenhuijsen is affiliated with the University of Amsterdam. The editorialists had no financial conflicts to disclose.

Publications
Topics
Sections
Body

 

The current meta-analysis is consistent with previous research, but offers nothing new on the relationship between clinician burnout and quality of care, wrote Carolyn S. Dewa, MPH, PhD, Karen Nieuwenhuijsen, PhD, and Jeffrey S. Hoch, PhD, in an accompanying editorial.

Some of the concerns they expressed included variability in the methods used to measure provider burnout, as well as variability in measuring and defining medical error. They suggested that the researchers could have conducted a subgroup analysis based on error definition. “Such analyses might shed light on the types of errors associated with burnout and suggest directions for the design of robust psychometric studies about the error metrics,” they wrote.

The editorialists also expressed concerns about the heterogeneity of the studies included in the review and the potential for confounding. Finally, they noted that the use of observational studies in a meta-analysis can be challenging because “the assessment of observational studies is not straightforward.” They added that knowing the limitations of the studies is important in allowing readers to be confident in the estimates from any meta-analyses.

“Considering the limitations of the available literature, prior reviews, and Tawfik and colleagues’ current meta-analysis, we conclude that higher burnout is associated with lower quality, but we are left without clear answers about the magnitude or clinical significance of the relationship,” they wrote (Ann Intern Med. 2019 Oct 8. doi: 10.7326/M19-2760).

Dr. Dewa and Dr. Hoch are affiliated with the University of California, Davis. Dr. Nieuwenhuijsen is affiliated with the University of Amsterdam. The editorialists had no financial conflicts to disclose.

Body

 

The current meta-analysis is consistent with previous research, but offers nothing new on the relationship between clinician burnout and quality of care, wrote Carolyn S. Dewa, MPH, PhD, Karen Nieuwenhuijsen, PhD, and Jeffrey S. Hoch, PhD, in an accompanying editorial.

Some of the concerns they expressed included variability in the methods used to measure provider burnout, as well as variability in measuring and defining medical error. They suggested that the researchers could have conducted a subgroup analysis based on error definition. “Such analyses might shed light on the types of errors associated with burnout and suggest directions for the design of robust psychometric studies about the error metrics,” they wrote.

The editorialists also expressed concerns about the heterogeneity of the studies included in the review and the potential for confounding. Finally, they noted that the use of observational studies in a meta-analysis can be challenging because “the assessment of observational studies is not straightforward.” They added that knowing the limitations of the studies is important in allowing readers to be confident in the estimates from any meta-analyses.

“Considering the limitations of the available literature, prior reviews, and Tawfik and colleagues’ current meta-analysis, we conclude that higher burnout is associated with lower quality, but we are left without clear answers about the magnitude or clinical significance of the relationship,” they wrote (Ann Intern Med. 2019 Oct 8. doi: 10.7326/M19-2760).

Dr. Dewa and Dr. Hoch are affiliated with the University of California, Davis. Dr. Nieuwenhuijsen is affiliated with the University of Amsterdam. The editorialists had no financial conflicts to disclose.

Title
Consider the limitations of burnout studies
Consider the limitations of burnout studies

 

Burnout among health care professionals has been associated with lower quality of care, but the effect may be smaller than it seems, based on data from a meta-analysis of more than 200,000 clinicians.

Previous studies have reported associations between burnout and lower quality of care, but a standardized approach to analyze bias in the studies is lacking, wrote Daniel S. Tawfik, MD, of Stanford (Calif.) University and colleagues.

In a study published in the Annals of Internal Medicine, the researchers identified 123 publications from 1994 to 2019 with 142 study populations that included 241,553 health care providers.

Emotional exhaustion was the primary predictor for lower quality of care in 75 study populations, and overall burnout and depersonalization were the primary predictors for 56 and 11 study populations, respectively.

In an analysis of 114 unique burnout-quality combinations, 58 showed effects of burnout related to poor-quality care, 6 showed burnout related to high-quality care, and 50 showed no significant effect. Approximately one-third (33%) of the burnout-quality combinations were reported at least three times. In a review of the 46 burnout-quality combinations with primary effect sizes, 24 showed a significant effect of burnout on poor quality of care, 1 showed a significant effect of burnout on high quality of care, and 21 showed no significant effect.

The researchers also tested study bias using the Ioannidis test and found “an excess of observed versus predicted statistically significant studies (73% observed vs. 62%).”

The findings were limited by several factors, including the use of many cross-sectional, observational studies that could not show causality, the researchers noted. However, the results suggest several implications for future research including the need to consider exaggerated effects and reduce bias.

“Although the effect sizes in the published literature are modestly strong, our finding of excess significance implies that the true magnitude may be smaller than reported, and the studies that attempted to lower the risk of bias demonstrate fewer significant associations than the full evidence base,” the researchers noted.

“Whether curtailing burnout improves quality of care, or whether improving quality of care reduces burnout, is not yet known, and adequately powered and designed randomized trials will be indispensable in answering these questions,” they concluded.

The study was supported by the Stanford Maternal and Child Health Research Institute. Dr. Tawfik disclosed grants from the Stanford Maternal and Child Health Research Institute during the study period.

SOURCE: Tawfik DS et al. Ann Intern Med. 2019 Oct 8. doi: 10.7326/M19-1152.

 


i-HOPE study engages patients, families to improve quality of hospital stays


Make patients ‘equal members of the team.’

 

Hospitalization can be a challenging and vulnerable time for patients and their families. While challenges associated with the quality and safety of hospital care are well documented, perspectives of patients, families, caregivers, and other stakeholders are not as easily understood and are important targets of improvement research.

Dr. Luci Leykum

This led to the initiation of the i-HOPE Patient Engagement Study, a collaboration including the Society for Hospital Medicine’s Center for Quality Improvement. The team completed a systematic and broad engagement process with patients, families, and caregivers, followed by an in-person prioritization meeting to generate a priority list of research topics that describe the most important gaps in the care of hospitalized patients.

The Hospitalist recently spoke with Luci Leykum, MD, MSc, MBA, SFHM, principal investigator for the i-HOPE Study, professor of medicine and investigator in the South Texas Veterans Health Care System and incoming associate chair for clinical innovation at the University of Texas at Austin.

Why is it so important to include the perspective of the patient during a hospital stay?

We cannot optimally improve outcomes of hospitalized patients if we don’t have patients’ perspectives on what needs to be improved. Hearing these perspectives also provides insights into how we can address gaps in hospital care.

How were patients and other stakeholders engaged during the i-HOPE program?

Patients, caregivers, and stakeholders were engaged throughout the entire project, from conceptualization to dissemination of results.

We worked with seven patient partners to develop the proposal that we submitted to the Patient-Centered Outcomes Research Institute. They were involved in all phases of the project, from developing the informational webinars and surveys to analyzing our results.

We engaged additional patients, caregivers, and stakeholders to submit their highest-priority unanswered research questions for improving hospital care. A total of 117 patients and 127 caregivers submitted questions. Our patient partners and more than 30 stakeholders were involved in prioritizing those research questions to develop our final agenda.

What is unique about the approach in the i-HOPE project, compared with other projects that may have had similar intended objectives?

Our project is unique in several respects. First, it was completely patient partnered. Having patients as equal members of the team changed our approach at every level – from how we communicated with patients and stakeholders to how we analyzed and presented our data. Second, we worked with a larger number of stakeholders representing a broad range of constituencies, from professional societies to health care delivery systems to payers.

How has SHM’s Center for Quality Improvement helped the i-HOPE program to realize its goals?

The Center for Quality Improvement helped considerably with the execution of the project. The researchers involved in i-HOPE were all members of the SHM Research Committee and were familiar with SHM’s capability as a partner in these larger-scale projects. The SHM Meetings team was instrumental in making our in-person patient and stakeholder prioritization meeting happen as well.

How can the findings of the i-HOPE program be applied?

We hope everyone can utilize our findings. Patients, families, and caregivers can use our results to improve their own care. Providers and delivery systems can target their improvement efforts using our findings to ensure that their work has the greatest impact on patients. Policy makers and funders can use our findings to direct work to the priority areas we identified. And finally, we hope the hospital research community uses our results to develop novel interventions to improve care.

For more information on the i-HOPE Patient Engagement Study, visit hospitalmedicine.org/ihope.


‘Bridging leaders’ link quality, medical education


A new community emerges

 

In June 2019, a 5-hour preconference seminar at the annual Integrating Quality Conference of the Association of American Medical Colleges (AAMC) in Minneapolis highlighted the emergence of a new concept, and a new community, within the larger field of hospital medicine.

Dr. Vineet Arora

“Bridging leaders” are clinician-educators with a foot in two worlds: they lead quality and safety initiatives within their teaching hospitals – including the hospitalist’s customary participation in a broad spectrum of quality improvement (QI) efforts – while helping to train future and current physicians. “Bridging” also extends to the third piece of the quality puzzle: the hospital or health system’s senior administrators.

“About 8 years ago, another hospitalist and I found ourselves in this role, bridging graduate medical education with hospital quality and safety,” said Jennifer S. Myers, MD, FHM, director of quality and safety education in the department of medicine at the University of Pennsylvania, Philadelphia. “The role has since begun to proliferate, in teaching settings large and small, and about 30-50 of us with somewhat similar job responsibilities have been trying to create a community.”

Following the lead of the Accreditation Council for Graduate Medical Education1 and its standards for clinical learning environments, which include the integration of patient safety and quality improvement, these domains have become graduate medical education (GME) priorities. Students need to learn the theory and practice of safety and quality improvement on the job as part of their professional development. Residency program directors and other trainers thus need to find opportunities for them to practice these techniques in the clinical environment.

At the same time, mobilizing those eager medical learners to plan and conduct quality improvement projects can enhance a hospital’s ability to advance its mission in the new health care environment of accountable care and population health.
 

New concept arises

Are bridging leaders a real thing? The short answer is yes, said Thomas Ciesielski, MD, GME medical director for patient safety, quality education, and clinical learning environment review program development at Washington University in St. Louis. “This is a new trend, but it’s still in the process of defining itself. Every bridging leader has their own identity based on their institution. Some play a bridging role for the entire institution; others play similar roles but only within a specific department or division. There’s a lot of learning going on in our community,” he said.

The first Bridging Leaders track was held at AAMC’s 2018 Integrating Quality Conference, an event that has been held annually for the past decade. The concept was also described in a 2017 article in the Journal of Graduate Medical Education2 by bridging leaders, including many of the faculty at the subsequent AAMC sessions, outlining their roles and programs at six academic medical centers.

One of those coauthors, hospitalist Vineet Arora, MD, MAPP, MHM, was recently appointed to a new position at University of Chicago Medicine: associate chief medical officer for the clinical learning environment – which pulls together many of the threads of the bridging leaders movement into a single job title. Dr. Arora said her job builds on her prior work in GME and improves the clinical learning environment for residents and fellows by integrating them into the health system’s institutional quality, safety, and value missions. It also expands on that work to include faculty and allied health professionals. “I just happen to come from the health system side,” she said.
 

 

 

Natural bridges: From clinical to educational

As with the early days of the hospitalist movement, bridging leaders are trying to build a community of peers with common interests.

“We’re just at the beginning,” Dr. Arora said. “Hospitalists have been the natural torch bearers for quality and safety in their hospitals, and also play roles in the education of residents and medical students, working alongside residency program directors. They are well-versed in quality and in education. So, they are the natural bridges between education and clinical care,” she said. “We also know this is a young group that comes to our meetings. One-third of them have been doing this for only the past 2 years or less – so they are early in their career paths.”

Front-line clinical providers, such as residents, often have good ideas, and bridging leaders can bring these ideas to the health system’s leaders, Dr. Arora said. “Bridging at the leadership level also involves thinking about the larger priorities of the system.” There are trust issues that these leaders can help to bridge, as well as internal communication barriers. “We also realize that health systems have to move quickly in response to a rapidly changing environment,” she noted.

“You don’t want a hundred quality improvement projects being done by students that are unaligned with the organization’s priorities. That leads to waste, and highlights the need for greater alignment,” Dr. Arora added. “Think about using front-line staff as agents of change, of engaging with learners as a win/win – as a way to actually solve the problems we are facing.”

Dr. Darlene Tad-y

A bridging leader occupies a role that influences both the quality and education missions of health care – someone whose leadership responsibilities sit at the intersection of the two, said Darlene Tad-y, MD, director of GME quality and safety programs at the University of Colorado, Aurora. “Once, these people were mostly in academic medical centers, but that’s not so true anymore. A director of quality for a hospital medicine group is responsible for developing the group’s quality strategy, but at the same time responsible for teaching members of the group – not only doing QI but teaching others how to do it,” she said.

“Hospitalists make terrific bridging leaders. We really are in that sweet spot, and we can and should step into these leadership roles,” Dr. Tad-y said. “Because of our role in the hospital, we know the ins and outs of how processes work or don’t work. We have an insider’s view of the system’s dysfunction, which puts us in a great place to lead these efforts.”

The bridging leaders movement has been a grass-roots development, Dr. Tad-y explained. “It’s not like people started with the job title. But because all of this work was needed, a few people started doing it – and they began seeking each other out. Then they found that there were more than a few of us. We just hadn’t known what it was called.”
 

 

 

What is being bridged?

There has long been a relationship between individuals who lead in the clinical environment and those who lead in education, such as the program directors of residency programs, said Janis Orlowski, MD, chief health care officer for AAMC, which represents 154 MD-granting medical schools and their associated teaching hospitals.

Dr. Janis Orlowski

“Our association’s three missions of research, education, and patient care really come together around the bridging leaders concept. So, this movement is well aligned. And as bridging leaders started to develop as a group, they found a home in AAMC and at our Integrating Quality Conference,” she said.

“Where we see this integration is in the teaching of residents and medical students in the clinical environment,” Dr. Orlowski said. “It’s not just their knowledge of disease or treatments or procedural skills that needs to be taught. They also need to understand the safe and effective clinical environment, and the role of learners in patient safety, quality improvement, and efficient and cost-effective hospital care. They need to understand value.” A new field of health systems science is emerging and quality improvement is evolving to incorporate population health. But traditional medical faculty may not be that comfortable teaching it.

Any physician who sees that they have a role in the clinical, administrative, and educational worlds can do the bridging, Dr. Orlowski said. “It could be any environment in which care is provided and learning takes place. I mentioned QI and patient safety, but among the other essential skills for the doctor of tomorrow are teamwork, inter-professional training in how to work with, for example, the pharmacist and dietitian, and understanding the value they bring.”

Whenever quality improvement projects are undertaken as part of post-graduate medical education, they should be aligned with the institution’s quality improvement plan and with the priorities of the health system, said Rob Dressler, MD, MBA, quality and safety officer at Christiana Care Health System in Newark, Del., and president of the Alliance of Independent Academic Medical Centers (AIAMC), which represents 80 hospitals and health systems active in the emerging bridging leaders movement.

“GME needs to keep the C-suite aware of its front-line efforts to improve quality and safety, so the institution’s return on investment can be recognized,” he said. “The AIAMC has consistently advocated for the building of bridges between GME leaders and their C-suites at our member hospitals. If you are doing process improvement, you need to be aligned with the organization and its priorities, or you’ll be less successful.”

AIAMC convenes the National Initiative – a multi-institutional collaborative in which residents lead multi-disciplinary teams in quality improvement projects. A total of 64 hospitals and health systems have participated since the program started in 2007. “We need to train our clinicians to solve the problems of tomorrow,” Dr. Dressler said.
 

Bridging leaders in action

The leaders contacted for this article offered some examples of bridging in action. Dr. Arora has used “crowd sourcing” – a technique employed extensively in her work with Costs of Care, a global nonprofit trying to drive better health care at lower cost – to implement a local program for front-line clinicians to generate ideas on how to improve value and reduce unnecessary treatment.

 

 

“We created our local ‘Choosing Wisely’ challenge for residents and staff at the University of Chicago – with the understanding that the winner would get analytic and time support to pursue their project,” she said. A resident winner was a finalist in the RIV (Research, Innovations and Clinical Vignettes) competition at a recent SHM Annual Conference.

At the University of Colorado, there is an associate program director who is responsible for the quality improvement curriculum for residents, Dr. Tad-y said. Because teaching QI means doing QI, the associate program director had to start implementing QI in the hospital, learning how to choose appropriate QI projects for the residents. That meant looking at quality priorities for the hospital – including VTE prophylaxis, fall prevention, and rates of central line–associated bloodstream infections and catheter-associated urinary tract infections. “A critical priority was to align the learners’ QI projects with what the hospital is already working on,” she explained.

“In our practice, all fellows need education and training in patient safety, how to recognize medical errors and close calls, and how to use our errors reporting system,” Dr. Myers said. “They also need to participate in errors analysis discussions. But we have struggled to get residents to attend those meetings. There’s not enough time in their schedules, and here at Penn, we have 1,500 residents and fellows, and maybe only 20 of these formal medical errors conferences per year,” she said.

Dr. Myers worked with the hospital’s patient safety officer and head of GME to design a simulated approach to fill the gap, a simulation of the root cause analysis process – how it works, the various roles played by different individuals, and what happens after it is done. “In my role, I trained one faculty member in each large residency program in how to identify a case and how to use the simulation,” she said. “They can now teach their own learners and make it more relevant to their specialty.”

Penn also has a blueprint for quality – a road map for how the organization socializes health care quality, safety, and value, Dr. Myers said. “Every 3 or 4 years our CEO looks at the road map and tries to get feedback on its direction from payers and insurers, quality leaders, academic department heads – and residents. I was in a good position to organize a session for a representative group of residents to get together and talk about where they see the quality and safety gaps in their everyday work.”

The role of the bridging leader is a viable career path or target for many hospitalists, Dr. Arora said. “But even if it’s not a career path for you, knowing that hospitalists are at the forefront of the bridging leaders movement could help you energize your health system. If you are seeing gaps in quality and safety, this is an issue you can bring before the system.”

These days doctors are wearing a lot of hats and filling roles that weren’t seen as much before, said Dr. Orlowski. “Bridging leaders are not an exclusive group but open to anyone who finds their passion in teaching quality and safety. Maybe you’re doing quality and safety, but not education, but you recognize its importance, or vice versa. First of all, look to see what this bridging leaders thing really is, and how it might apply to you. You might say: ‘That accurately describes what I’m doing now. I have the interest; I want to learn more.’”
 

References

1. Accreditation Council for Graduate Medical Education. CLER pathways to excellence.

2. Myers JS et al. Bridging leadership roles in quality and patient safety: Experience of 6 U.S. academic medical centers. J Grad Med Educ. 2017 Feb;9(1):9-13.
 

Publications
Topics
Sections

A new community emerges

A new community emerges

 

In June 2019, a 5-hour preconference seminar at the annual Integrating Quality Conference of the Association of American Medical Colleges (AAMC) in Minneapolis highlighted the emergence of a new concept, and a new community, within the larger field of hospital medicine.

Vineet Arora, MD, MAPP, MHM, University of Chicago Medicine: associate chief medical officer for the clinical learning environment
Dr. Vineet Arora

“Bridging leaders” are clinician-educators with a foot in two worlds: leading quality and safety initiatives within their teaching hospitals – with the hospitalist’s customary participation in a broad spectrum of quality improvement (QI) efforts in the hospital – while helping to train future and current physicians. “Bridging” also extends to the third piece of the quality puzzle, the hospital and/or health system’s senior administrators.

“About 8 years ago, another hospitalist and I found ourselves in this role, bridging graduate medical education with hospital quality and safety,” said Jennifer S. Myers, MD, FHM, director of quality and safety education in the department of medicine at the University of Pennsylvania, Philadelphia. “The role has since begun to proliferate, in teaching settings large and small, and about 30-50 of us with somewhat similar job responsibilities have been trying to create a community.”

Following the lead of the American College of Graduate Medical Education1 and its standards for clinical learning environments that include integration of patient safety and quality improvement, these have become graduate medical education (GME) priorities. Students need to learn the theory and practice of safety and quality improvement on the job as part of their professional development. Residency program directors and other trainers thus need to find opportunities for them to practice these techniques in the clinical practice environment.

At the same time, mobilizing those eager medical learners to plan and conduct quality improvement projects can enhance a hospital’s ability to advance its mission in the new health care environment of accountable care and population health.
 

New concept arises

Is bridging leaders a real thing? The short answer is yes, said Thomas Ciesielski, MD, GME medical director for patient safety, quality education, and clinical learning environment review program development at Washington University in St. Louis. “This is a new trend, but it’s still in the process of defining itself. Every bridging leader has their own identity based on their institution. Some play a bridging role for the entire institution; others play similar roles but only within a specific department or division. There’s a lot of learning going on in our community,” he said.

The first Bridging Leaders track was held last year at AAMC’s 2018 Integrating Quality Conference, an event which has been held annually for the past decade. The concept was also highlighted in a 2017 article in the Journal of Graduate Medical Education2 by bridging leaders, including many of the faculty at the subsequent AAMC sessions, highlighting their roles and programs at six academic medical centers.

One of those coauthors, hospitalist Vineet Arora, MD, MAPP, MHM, was recently appointed to a new position at University of Chicago Medicine: associate chief medical officer for the clinical learning environment – which pulls together many of the threads of the bridging leaders movement into a single job title. Dr. Arora said her job builds on her prior work in GME and improves the clinical learning environment for residents and fellows by integrating them into the health system’s institutional quality, safety, and value missions. It also expands on that work to include faculty and allied health professionals. “I just happen to come from the health system side,” she said.
 

 

 

Natural bridges: From clinical to educational

As with the early days of the hospitalist movement, bridging leaders are trying to build a community of peers with common interests.

“We’re just at the beginning,” Dr. Arora said. “Hospitalists have been the natural torch bearers for quality and safety in their hospitals, and also play roles in the education of residents and medical students, working alongside residency program directors. They are well-versed in quality and in education. So, they are the natural bridges between education and clinical care,” she said. “We also know this is a young group that comes to our meetings. One-third of them have been doing this for only the past 2 years or less – so they are early in their career paths.”

Front-line clinical providers, such as residents, often have good ideas, and bridging leaders can bring these ideas to the health system’s leaders, Dr. Arora said. “Bridging at the leadership level also involves thinking about the larger priorities of the system.” There are trust issues that these leaders can help to bridge, as well as internal communication barriers. “We also realize that health systems have to move quickly in response to a rapidly changing environment,” she noted.

“You don’t want a hundred quality improvement projects being done by students that are unaligned with the organization’s priorities. That leads to waste, and highlights the need for greater alignment,” Dr. Arora added. “Think about using front-line staff as agents of change, of engaging with learners as a win/win – as a way to actually solve the problems we are facing.”

Dr. Darlene Tad-y, associate professor and hospitalist at the University of Colorado Hospital, Denver
Dr. Darlene Tad-y

A bridging leader occupies a role in which they can influence and affect these two parts of the mission of health care, somebody whose leadership responsibilities sit at the intersection of these two areas, said Darlene Tad-y, MD, director of GME quality and safety programs at the University of Colorado, Aurora. “Once, these people were mostly in academic medical centers, but that’s not so true anymore. A director of quality for a hospital medicine group is responsible for developing the group’s quality strategy, but at the same time responsible for teaching members of the group – not only doing QI but teaching others how to do it,” she said.

“Hospitalists make terrific bridging leaders. We really are in that sweet spot, and we can and should step into these leadership roles,” Dr. Tad-y said. “Because of our role in the hospital, we know the ins and outs of how processes work or don’t work. We have an insider’s view of the system’s dysfunction, which puts us in a great place to lead these efforts.”

The bridging leaders movement has been a grass-roots development, Dr. Tad-y explained. “It’s not like people started with the job title. But because all of this work was needed, a few people started doing it – and they began seeking each other out. Then they found that there were more than a few of us. We just hadn’t known what it was called.”
 

 

 

What is being bridged?

There has long been a relationship between individuals who lead in the clinical environment and those who lead in education, such as the program directors of residency programs, said Janis Orlowski, MD, chief health care officer for AAMC, which represents 154 MD-granting medical schools and their associated teaching hospitals.

Dr. Janis Orlowski
Dr. Janis Orlowski

“Our association’s three missions of research, education, and patient care really come together around the bridging leaders concept. So, this movement is well aligned. And as bridging leaders started to develop as a group, they found a home in AAMC and at our Integrating Quality Conference,” she said.

“Where we see this integration is in the teaching of residents and medical students in the clinical environment,” Dr. Orlowski said. “It’s not just their knowledge of disease or treatments or procedural skills that needs to be taught. They also need to understand the safe and effective clinical environment, and the role of learners in patient safety, quality improvement, and efficient and cost-effective hospital care. They need to understand value.” A new field of health systems science is emerging and quality improvement is evolving to incorporate population health. But traditional medical faculty may not be that comfortable teaching it.

Any physician who sees that they have a role in the clinical, administrative, and educational worlds can do the bridging, Dr. Orlowski said. “It could be any environment in which care is provided and learning takes place. I mentioned QI and patient safety, but among the other essential skills for the doctor of tomorrow are teamwork, inter-professional training in how to work with, for example, the pharmacist and dietitian, and understanding the value they bring.”

Whenever quality improvement projects are undertaken as part of post-graduate medical education, they should be aligned with the institution’s quality improvement plan and with the priorities of the health system, said Rob Dressler, MD, MBA, quality and safety officer at Christiana Health Care System in Newark, Del., and president of the Alliance of Independent Academic Medical Centers (AIAMC), which represents 80 hospital and health systems active in the emerging movement for bridging leaders.

“GME needs to keep the C-suite aware of its front-line efforts to improve quality and safety, so the institution’s return on investment can be recognized,” he said. “The AIAMC has consistently advocated for the building of bridges between GME leaders and their C-suites at our member hospitals. If you are doing process improvement, you need to be aligned with the organization and its priorities, or you’ll be less successful.”

AIAMC convenes the National Initiative – a multi-institutional collaborative in which residents lead multi-disciplinary teams in quality improvement projects. A total of 64 hospitals and health systems have participated since the program started in 2007. “We need to train our clinicians to solve the problems of tomorrow,” Dr. Dressler said.
 

Bridging leaders in action

The leaders contacted for this article offered some examples of bridging in action. Dr. Arora has used “crowd sourcing” – a technique employed extensively in her work with Costs of Care, a global nonprofit trying to drive better health care at lower cost – to implement a local program for front-line clinicians to generate ideas on how to improve value and reduce unnecessary treatment.

 

 

“We created our local ‘Choosing Wisely’ challenge for residents and staff at the University of Chicago – with the understanding that the winner would get analytic and time support to pursue their project,” she said. A resident winner was a finalist in the RIV (Research, Innovations and Clinical Vignettes) competition at a recent SHM Annual Conference.

At the University of Colorado, there is an associate program director who is responsible for the quality improvement curriculum for residents, Dr. Tad-y said. Because teaching QI means doing QI, the associate program director had to start implementing QI in the hospital, learning how to choose appropriate QI projects for the residents. That meant looking at quality priorities for the hospital – including VTE prophylaxis, fall prevention, and rates of central line–associated bloodstream infections and catheter-associated urinary tract infections. “A critical priority was to align the learners’ QI projects with what the hospital is already working on,” she explained.

“In our practice, all fellows need education and training in patient safety, how to recognize medical errors and close calls, and how to use our errors reporting system,” Dr. Myers said. “They also need to participate in errors analysis discussions. But we have struggled to get residents to attend those meetings. There’s not enough time in their schedules, and here at Penn, we have 1,500 residents and fellows, and maybe only 20 of these formal medical errors conferences per year,” she said.

Dr. Myers worked with the hospital’s patient safety officer and head of GME to design a simulated approach to fill the gap, a simulation of the root cause analysis process – how it works, the various roles played by different individuals, and what happens after it is done. “In my role, I trained one faculty member in each large residency program in how to identify a case and how to use the simulation,” she said. “They can now teach their own learners and make it more relevant to their specialty.”

Penn also has a blueprint for quality – a road map for how the organization socializes health care quality, safety, and value, Dr. Myers said. “Every 3 or 4 years our CEO looks at the road map and tries to get feedback on its direction from payers and insurers, quality leaders, academic department heads – and residents. I was in a good position to organize a session for a representative group of residents to get together and talk about where they see the quality and safety gaps in their everyday work.”

The role of the bridging leader is a viable career path or target for many hospitalists, Dr. Arora said. “But even if it’s not a career path for you, knowing that hospitalists are at the forefront of the bridging leaders movement could help you energize your health system. If you are seeing gaps in quality and safety, this is an issue you can bring before the system.”

These days doctors are wearing a lot of hats and filling roles that weren’t seen as much before, said Dr. Orlowski. “Bridging leaders are not an exclusive group but open to anyone who finds their passion in teaching quality and safety. Maybe you’re doing quality and safety, but not education, but you recognize its importance, or vice versa. First of all, look to see what this bridging leaders thing really is, and how it might apply to you. You might say: ‘That accurately describes what I’m doing now. I have the interest; I want to learn more.’”
 

References

1. Accreditation Council for Graduate Medical Education. CLER pathways to excellence.

2. Myers JS et al. Bridging leadership roles in quality and patient safety: Experience of 6 U.S. Academic Medical Centers. J Grad Med Educ. 2017 Feb;9(1): 9-13.
 

 

In June 2019, a 5-hour preconference seminar at the annual Integrating Quality Conference of the Association of American Medical Colleges (AAMC) in Minneapolis highlighted the emergence of a new concept, and a new community, within the larger field of hospital medicine.

Vineet Arora, MD, MAPP, MHM, University of Chicago Medicine: associate chief medical officer for the clinical learning environment
Dr. Vineet Arora

“Bridging leaders” are clinician-educators with a foot in two worlds: leading quality and safety initiatives within their teaching hospitals – with the hospitalist’s customary participation in a broad spectrum of quality improvement (QI) efforts in the hospital – while helping to train future and current physicians. “Bridging” also extends to the third piece of the quality puzzle, the hospital and/or health system’s senior administrators.

“About 8 years ago, another hospitalist and I found ourselves in this role, bridging graduate medical education with hospital quality and safety,” said Jennifer S. Myers, MD, FHM, director of quality and safety education in the department of medicine at the University of Pennsylvania, Philadelphia. “The role has since begun to proliferate, in teaching settings large and small, and about 30-50 of us with somewhat similar job responsibilities have been trying to create a community.”

Following the lead of the American College of Graduate Medical Education1 and its standards for clinical learning environments that include integration of patient safety and quality improvement, these have become graduate medical education (GME) priorities. Students need to learn the theory and practice of safety and quality improvement on the job as part of their professional development. Residency program directors and other trainers thus need to find opportunities for them to practice these techniques in the clinical practice environment.

At the same time, mobilizing those eager medical learners to plan and conduct quality improvement projects can enhance a hospital’s ability to advance its mission in the new health care environment of accountable care and population health.
 

New concept arises

Is bridging leaders a real thing? The short answer is yes, said Thomas Ciesielski, MD, GME medical director for patient safety, quality education, and clinical learning environment review program development at Washington University in St. Louis. “This is a new trend, but it’s still in the process of defining itself. Every bridging leader has their own identity based on their institution. Some play a bridging role for the entire institution; others play similar roles but only within a specific department or division. There’s a lot of learning going on in our community,” he said.

The first Bridging Leaders track was held last year at AAMC’s 2018 Integrating Quality Conference, an event which has been held annually for the past decade. The concept was also highlighted in a 2017 article in the Journal of Graduate Medical Education2 by bridging leaders, including many of the faculty at the subsequent AAMC sessions, highlighting their roles and programs at six academic medical centers.

One of those coauthors, hospitalist Vineet Arora, MD, MAPP, MHM, was recently appointed to a new position at University of Chicago Medicine: associate chief medical officer for the clinical learning environment – which pulls together many of the threads of the bridging leaders movement into a single job title. Dr. Arora said her job builds on her prior work in GME and improves the clinical learning environment for residents and fellows by integrating them into the health system’s institutional quality, safety, and value missions. It also expands on that work to include faculty and allied health professionals. “I just happen to come from the health system side,” she said.
 

 

 

Natural bridges: From clinical to educational

As with the early days of the hospitalist movement, bridging leaders are trying to build a community of peers with common interests.

“We’re just at the beginning,” Dr. Arora said. “Hospitalists have been the natural torch bearers for quality and safety in their hospitals, and also play roles in the education of residents and medical students, working alongside residency program directors. They are well-versed in quality and in education. So, they are the natural bridges between education and clinical care,” she said. “We also know this is a young group that comes to our meetings. One-third of them have been doing this for only the past 2 years or less – so they are early in their career paths.”

Front-line clinical providers, such as residents, often have good ideas, and bridging leaders can bring these ideas to the health system’s leaders, Dr. Arora said. “Bridging at the leadership level also involves thinking about the larger priorities of the system.” There are trust issues that these leaders can help to bridge, as well as internal communication barriers. “We also realize that health systems have to move quickly in response to a rapidly changing environment,” she noted.

“You don’t want a hundred quality improvement projects being done by students that are unaligned with the organization’s priorities. That leads to waste, and highlights the need for greater alignment,” Dr. Arora added. “Think about using front-line staff as agents of change, of engaging with learners as a win/win – as a way to actually solve the problems we are facing.”

Dr. Darlene Tad-y, associate professor and hospitalist at the University of Colorado Hospital, Denver
Dr. Darlene Tad-y

A bridging leader occupies a role in which they can influence and affect these two parts of the mission of health care, somebody whose leadership responsibilities sit at the intersection of these two areas, said Darlene Tad-y, MD, director of GME quality and safety programs at the University of Colorado, Aurora. “Once, these people were mostly in academic medical centers, but that’s not so true anymore. A director of quality for a hospital medicine group is responsible for developing the group’s quality strategy, but at the same time responsible for teaching members of the group – not only doing QI but teaching others how to do it,” she said.

“Hospitalists make terrific bridging leaders. We really are in that sweet spot, and we can and should step into these leadership roles,” Dr. Tad-y said. “Because of our role in the hospital, we know the ins and outs of how processes work or don’t work. We have an insider’s view of the system’s dysfunction, which puts us in a great place to lead these efforts.”

The bridging leaders movement has been a grass-roots development, Dr. Tad-y explained. “It’s not like people started with the job title. But because all of this work was needed, a few people started doing it – and they began seeking each other out. Then they found that there were more than a few of us. We just hadn’t known what it was called.”
 

 

 

What is being bridged?

There has long been a relationship between individuals who lead in the clinical environment and those who lead in education, such as the program directors of residency programs, said Janis Orlowski, MD, chief health care officer for AAMC, which represents 154 MD-granting medical schools and their associated teaching hospitals.

Dr. Janis Orlowski
Dr. Janis Orlowski

“Our association’s three missions of research, education, and patient care really come together around the bridging leaders concept. So, this movement is well aligned. And as bridging leaders started to develop as a group, they found a home in AAMC and at our Integrating Quality Conference,” she said.

“Where we see this integration is in the teaching of residents and medical students in the clinical environment,” Dr. Orlowski said. “It’s not just their knowledge of disease or treatments or procedural skills that needs to be taught. They also need to understand the safe and effective clinical environment, and the role of learners in patient safety, quality improvement, and efficient and cost-effective hospital care. They need to understand value.” A new field of health systems science is emerging and quality improvement is evolving to incorporate population health. But traditional medical faculty may not be that comfortable teaching it.

Any physician who sees that they have a role in the clinical, administrative, and educational worlds can do the bridging, Dr. Orlowski said. “It could be any environment in which care is provided and learning takes place. I mentioned QI and patient safety, but among the other essential skills for the doctor of tomorrow are teamwork, inter-professional training in how to work with, for example, the pharmacist and dietitian, and understanding the value they bring.”

Whenever quality improvement projects are undertaken as part of post-graduate medical education, they should be aligned with the institution’s quality improvement plan and with the priorities of the health system, said Rob Dressler, MD, MBA, quality and safety officer at Christiana Health Care System in Newark, Del., and president of the Alliance of Independent Academic Medical Centers (AIAMC), which represents 80 hospital and health systems active in the emerging movement for bridging leaders.

“GME needs to keep the C-suite aware of its front-line efforts to improve quality and safety, so the institution’s return on investment can be recognized,” he said. “The AIAMC has consistently advocated for the building of bridges between GME leaders and their C-suites at our member hospitals. If you are doing process improvement, you need to be aligned with the organization and its priorities, or you’ll be less successful.”

AIAMC convenes the National Initiative – a multi-institutional collaborative in which residents lead multi-disciplinary teams in quality improvement projects. A total of 64 hospitals and health systems have participated since the program started in 2007. “We need to train our clinicians to solve the problems of tomorrow,” Dr. Dressler said.
 

Bridging leaders in action

The leaders contacted for this article offered some examples of bridging in action. Dr. Arora has used “crowd sourcing” – a technique employed extensively in her work with Costs of Care, a global nonprofit trying to drive better health care at lower cost – to implement a local program for front-line clinicians to generate ideas on how to improve value and reduce unnecessary treatment.

 

 

“We created our local ‘Choosing Wisely’ challenge for residents and staff at the University of Chicago – with the understanding that the winner would get analytic and time support to pursue their project,” she said. A resident winner was a finalist in the RIV (Research, Innovations and Clinical Vignettes) competition at a recent SHM Annual Conference.

At the University of Colorado, there is an associate program director who is responsible for the quality improvement curriculum for residents, Dr. Tad-y said. Because teaching QI means doing QI, the associate program director had to start implementing QI in the hospital, learning how to choose appropriate QI projects for the residents. That meant looking at quality priorities for the hospital – including VTE prophylaxis, fall prevention, and rates of central line–associated bloodstream infections and catheter-associated urinary tract infections. “A critical priority was to align the learners’ QI projects with what the hospital is already working on,” she explained.

“In our practice, all fellows need education and training in patient safety, how to recognize medical errors and close calls, and how to use our errors reporting system,” Dr. Myers said. “They also need to participate in errors analysis discussions. But we have struggled to get residents to attend those meetings. There’s not enough time in their schedules, and here at Penn, we have 1,500 residents and fellows, and maybe only 20 of these formal medical errors conferences per year,” she said.

Dr. Myers worked with the hospital’s patient safety officer and head of GME to design a simulated approach to fill the gap, a simulation of the root cause analysis process – how it works, the various roles played by different individuals, and what happens after it is done. “In my role, I trained one faculty member in each large residency program in how to identify a case and how to use the simulation,” she said. “They can now teach their own learners and make it more relevant to their specialty.”

Penn also has a blueprint for quality – a road map for how the organization socializes health care quality, safety, and value, Dr. Myers said. “Every 3 or 4 years our CEO looks at the road map and tries to get feedback on its direction from payers and insurers, quality leaders, academic department heads – and residents. I was in a good position to organize a session for a representative group of residents to get together and talk about where they see the quality and safety gaps in their everyday work.”

The role of the bridging leader is a viable career path or target for many hospitalists, Dr. Arora said. “But even if it’s not a career path for you, knowing that hospitalists are at the forefront of the bridging leaders movement could help you energize your health system. If you are seeing gaps in quality and safety, this is an issue you can bring before the system.”

These days doctors are wearing a lot of hats and filling roles that weren’t seen as much before, said Dr. Orlowski. “Bridging leaders are not an exclusive group but open to anyone who finds their passion in teaching quality and safety. Maybe you’re doing quality and safety, but not education, but you recognize its importance, or vice versa. First of all, look to see what this bridging leaders thing really is, and how it might apply to you. You might say: ‘That accurately describes what I’m doing now. I have the interest; I want to learn more.’”

References

1. Accreditation Council for Graduate Medical Education. CLER pathways to excellence.

2. Myers JS et al. Bridging leadership roles in quality and patient safety: Experience of 6 U.S. academic medical centers. J Grad Med Educ. 2017 Feb;9(1):9-13.

Productivity-based salary structure not associated with value-based culture

Article Type
Changed
Mon, 09/02/2019 - 21:17

Background: Although new payment models have been implemented by the Centers for Medicare & Medicaid Services (CMS) for hospital reimbursement, little is known about the effects of reimbursement models on the culture of providing value-based care among individual hospitalists. The concern is that productivity-based models increase pressure on hospitalists to maximize volume and billing, as opposed to focusing on value.

Study design: Observational, cross-sectional, survey-based study.

Setting: A total of 12 hospitals in California, which represented university, community, and safety-net settings.

Synopsis: Hospitalists were asked to complete the High-Value Care Culture Survey (HVCCS), a validated tool that assesses value-based decision making. Components of the survey assessed leadership and health system messaging, data transparency and access, comfort with cost conversations, and blame-free environments. Hospitalists were also asked to self-report their reimbursement structure: salary alone, salary plus productivity, or salary plus value-based adjustments.

A total of 255 hospitalists completed the survey. The mean HVCCS score was 50.2 on a 0-100 scale. Hospitalists who reported reimbursement with salary plus productivity adjustments had a lower mean HVCCS score (beta = –6.2; 95% confidence interval, –9.9 to –2.5) than hospitalists paid with salary alone. No association was found between HVCCS score and reimbursement with salary plus value-based adjustments, compared with salary alone, though this finding may have been limited by sample size.
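For readers curious what the reported beta represents mechanically, the short Python sketch below fits an ordinary least squares regression of survey score on indicator variables for reimbursement model, with salary alone as the reference group. The data here are entirely synthetic and invented for illustration – they are not the study’s data, and the authors’ actual model likely adjusted for additional covariates.

# Minimal sketch, not the study's analysis: OLS regression of survey
# score on reimbursement-model indicators. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 255  # number of survey respondents, as reported

# Hypothetical group assignment: 0 = salary alone (reference),
# 1 = salary plus productivity, 2 = salary plus value-based adjustment
group = rng.integers(0, 3, size=n)
x_prod = (group == 1).astype(float)   # indicator for productivity pay
x_value = (group == 2).astype(float)  # indicator for value-based pay

# Synthetic HVCCS scores (0-100) centered near the reported mean of 50.2,
# with the productivity group set ~6 points lower to mirror the finding
score = 50.2 - 6.2 * x_prod + rng.normal(0, 15, size=n)

# Ordinary least squares: score ~ intercept + x_prod + x_value
X = np.column_stack([np.ones(n), x_prod, x_value])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

print(f"mean score, salary alone (intercept): {beta[0]:.1f}")
print(f"difference for salary + productivity: {beta[1]:+.1f}")
print(f"difference for salary + value-based:  {beta[2]:+.1f}")

In this simple setup, the coefficient on the productivity indicator is just the difference in mean scores between the salary-plus-productivity group and the salary-alone group – which is how a beta of –6.2 translates into scoring about 6 points lower on the 0-100 scale.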

Bottom line: A hospitalist reimbursement model of salary plus productivity was associated with lower measures of value-based care culture.

Citation: Gupta R et al. Association between hospitalist productivity payments and high-value care culture. J Hosp Med. 2019;14(1):16-21.

Dr. Huang is a physician adviser and associate clinical professor in the division of hospital medicine at the University of California, San Diego.
