Why Aren’t Doctors Following Guidelines?

Take a quick glance through the medical literature, and chances are good that you’ll find a study citing low or variable adherence to clinical guidelines.

One recent paper in Clinical Pediatrics, for example, chronicled low adherence to the 2011 National Heart, Lung, and Blood Institute lipid screening guidelines in primary-care settings.1 Another cautioned providers to “mind the (implementation) gap” in venous thromboembolism prevention guidelines for medical inpatients.2 A third found that lower adherence to guidelines issued by the American College of Cardiology/American Heart Association for acute coronary syndrome patients was significantly associated with higher bleeding and mortality rates.3

William Lewis, MD

Both clinical trials and real-world studies have demonstrated that when guidelines are applied, patients do better, says William Lewis, MD, professor of medicine at Case Western Reserve University and director of the Heart & Vascular Center at MetroHealth in Cleveland. So why aren’t they followed more consistently?

Experts in both HM and other disciplines cite multiple obstacles. Lack of evidence, conflicting evidence, or lack of awareness about evidence can all conspire against the main goal of helping providers deliver consistent high-value care, says Christopher Moriates, MD, assistant clinical professor in the Division of Hospital Medicine at the University of California, San Francisco.

Christopher Moriates, MD

“In our day-to-day lives as hospitalists, for the vast majority probably of what we do there’s no clear guideline or there’s a guideline that doesn’t necessarily apply to the patient standing in front of me,” he says.

Even when a guideline is clear and relevant, other doctors say inadequate dissemination and implementation can still derail quality improvement efforts.

“A lot of what we do as physicians is what we learned in residency, and to incorporate the new data is difficult,” says Leonard Feldman, MD, SFHM, a hospitalist and associate professor of internal medicine and pediatrics at Johns Hopkins School of Medicine in Baltimore.

Leonard Feldman, MD, SFHM

Dr. Feldman believes many doctors have yet to integrate recently revised hypertension and cholesterol guidelines into their practice, for example. Some guidelines have proven more complex or controversial, limiting their adoption.

“I know I struggle to keep up with all of the guidelines, and I’m in a big academic center where people are talking about them all the time, and I’m working with residents who are talking about them all the time,” Dr. Feldman says.

Despite the remaining gaps, however, many researchers agree that momentum has built steadily over the past two decades toward a more systematic approach to creating solid evidence-based guidelines and integrating them into real-world decision making.

Emphasis on Evidence and Transparency

Gordon Guyatt, MD, MSc, FRCPC

The term “evidence-based medicine” was coined in 1990 by Gordon Guyatt, MD, MSc, FRCPC, distinguished professor of medicine and clinical epidemiology at McMaster University in Hamilton, Ontario, who has played an active role in formulating guidelines for multiple organizations. The guideline-writing process, Dr. Guyatt says, once consisted of little more than self-selected clinicians sitting around a table.

“It used to be that a bunch of experts got together and decided and made the recommendations with very little in the way of a systematic process and certainly not evidence based,” he says.

Cincinnati Children’s Hospital Medical Center was among the pioneers pushing for a more systematic approach; the hospital began working on its own guidelines in 1995 and published the first of many the following year.

Wendy Gerhardt, MSN

“We started evidence-based guidelines when the docs were still saying, ‘This is cookbook medicine. I don’t know if I want to do this or not,’” says Wendy Gerhardt, MSN, director of evidence-based decision making in the James M. Anderson Center for Health Systems Excellence at Cincinnati Children’s.


Some doctors also argued that clinical guidelines would stifle innovation, cramp their individual style, or intrude on their relationships with patients. Despite some lingering misgivings among clinicians, however, the process has gained considerable support. In 2000, an organization called the GRADE Working Group (Grading of Recommendations, Assessment, Development and Evaluation) began developing a new approach to rating the quality of evidence and the strength of recommendations.

The group’s work led to a 2004 article in BMJ, and the journal subsequently published a six-part series about GRADE for clinicians.4 More recently, the Journal of Clinical Epidemiology also delved into the issue with a 15-part series detailing the GRADE methodology.5 Together, Dr. Guyatt says, the articles have become a go-to resource for guideline developers and have helped solidify the focus on evidence.

Cincinnati Children’s and other institutions also have developed tools, and the Institute of Medicine has published guideline-writing standards.

“So it’s easier than it’s ever been to know whether or not you have a decent guideline in your hand,” Gerhardt says.

Likewise, medical organizations are more clearly explaining how they came up with different kinds of guidelines. Evidence-based and consensus guidelines aren’t necessarily mutually exclusive, though consensus building is often used in the absence of high-quality evidence. Some organizations have limited the pool of evidence for guidelines to randomized controlled trial data.

“Unfortunately, for us in the real world, we actually have to make decisions even when there’s not enough data,” Dr. Feldman says.

Sometimes, the best available evidence may be observational studies, and some committees still try to reach a consensus based on that evidence and on the panelists’ professional opinions.

Dr. Guyatt agrees that it’s “absolutely not” true that evidence-based guidelines require randomized controlled trials. “What you need for any recommendation is a thorough review and summary of the best available evidence,” he says.

As part of each final document, Cincinnati Children’s details how it created the guideline, when the literature searches occurred, how the committee reached a consensus, and which panelists participated in the deliberations. The information, Gerhardt says, allows anyone else to “make some sensible decisions about whether or not it’s a guideline you want to use.”

Guideline-crafting institutions are also focusing more on the proper makeup of their panels. In general, Dr. Guyatt says, a panel with more than 10 people can be unwieldy. Guidelines that include many specific recommendations, however, may require multiple subsections, each with its own committee.

Dr. Guyatt is careful to note that, like many other experts, he has multiple potential conflicts of interest, such as working on the antithrombotic guidelines issued by the American College of Chest Physicians. Committees, he says, have become increasingly aware of how properly handling conflicts (financial or otherwise) can be critical in building and maintaining trust among clinicians and patients. One technique is to ensure that a diversity of opinions is reflected among a committee whose experts have various conflicts. If one expert’s company makes drug A, for example, then the committee also includes experts involved with drugs B or C. As an alternative, some committees have explicitly barred anyone with a conflict of interest from participating at all.

But experts often provide crucial input, Dr. Guyatt says, and several committees have adopted variations of a middle-ground approach. In an approach that he favors, all guideline-formulating panelists are conflict-free but begin their work by meeting with a separate group of experts who may have some conflicts but can help point out the main issues. The panelists then deliberate and write a draft of the recommendations, after which they meet again with the experts to receive feedback before finalizing the draft.


In a related approach, experts sit on the panel and discuss the evidence, but those with conflicts recuse themselves before the group votes on any recommendations. Delineating between discussions of the evidence and discussions of recommendations can be tricky, though, increasing the risk that a conflict of interest may influence the outcome. Even so, Dr. Guyatt says the model is still preferable to other alternatives.

Getting the Word Out

Once guidelines have been crafted and vetted, how can hospitalists get up to speed on them? Dr. Feldman’s favorite go-to source is Guideline.gov, a national guideline clearinghouse that he calls one of the best compendiums of available information. Especially helpful, he adds, are details such as how the guidelines were created.

To help maximize his time, he also uses tools like NEJM Journal Watch, which sends daily emails on noteworthy articles and weekend roundups of the most important studies.

“It is a way of at least trying to keep up with what’s going on,” he says. Similarly, he adds, ACP Journal Club provides summaries of important new articles, The Hospitalist can help highlight important guidelines that affect HM, and CME meetings or online modules like SHMconsults.com can help doctors keep pace.

For the past decade, Dr. Guyatt has worked with another popular tool, a guideline-disseminating service called UpToDate. Many alternatives exist, such as DynaMed Plus.

“I think you just need to pick away,” Dr. Feldman says. “You need to decide that as a physician, as a lifelong learner, that you are going to do something that is going to keep you up-to-date. There are many ways of doing it. You just have to decide what you’re going to do and commit to it.”

Lisa Shieh, MD, PhD, FHM

Researchers are helping out by studying how to present new guidelines in ways that engage doctors and improve patient outcomes. Another trend is to make guidelines routinely accessible not only in electronic medical records but also on tablets and smartphones. Lisa Shieh, MD, PhD, FHM, a hospitalist and clinical professor of medicine at Stanford University Medical Center, has studied how best-practice alerts, or BPAs, impact adherence to guidelines covering the appropriate use of blood products. Dr. Shieh, who splits her time between quality improvement and hospital medicine, says getting new information and guidelines into clinicians’ hands can be a logistical challenge.

“At Stanford, we had a huge official campaign around the guidelines, and that did make some impact, but it wasn’t huge in improving appropriate blood use,” she says. When the medical center set up a BPA through the electronic medical record system, however, both overall and inappropriate blood use declined significantly. In fact, the percentage of providers ordering blood products for patients with a hemoglobin level above 8 g/dL dropped from 60% to 25%.6

One difference maker, Dr. Shieh says, was providing education at the moment a doctor actually ordered blood. To avoid alert fatigue, the “smart BPA” fires only if a doctor tries to order blood and the patient’s hemoglobin is greater than 7 or 8 g/dL, depending on the diagnosis. If the doctor still wants to transfuse, the system requests a clinical indication for the exception.
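
In essence, the rule Dr. Shieh describes is a diagnosis-dependent threshold check with a documented-indication override. The short sketch below illustrates one way such logic might be expressed; the field names, diagnoses, and exact thresholds are illustrative assumptions, not Stanford’s actual implementation.

```python
# Minimal sketch of a diagnosis-dependent transfusion alert, as described above.
# All names, diagnoses, and thresholds are illustrative assumptions, not the
# actual Stanford rule.

from dataclasses import dataclass
from typing import Optional

DEFAULT_THRESHOLD_G_DL = 7.0
# Hypothetical example of a diagnosis assigned the higher 8 g/dL threshold.
HIGHER_THRESHOLD_DIAGNOSES = {"acute coronary syndrome": 8.0}

@dataclass
class TransfusionOrder:
    hemoglobin_g_dl: float                     # most recent hemoglobin, g/dL
    diagnosis: str                             # active diagnosis on the order
    clinical_indication: Optional[str] = None  # documented reason to override

def best_practice_alert(order: TransfusionOrder) -> Optional[str]:
    """Return an alert message if the order should be questioned, else None."""
    threshold = HIGHER_THRESHOLD_DIAGNOSES.get(order.diagnosis, DEFAULT_THRESHOLD_G_DL)
    if order.hemoglobin_g_dl <= threshold:
        return None  # hemoglobin at or below threshold: no alert fires
    if order.clinical_indication:
        return None  # clinician has documented an indication; record it and proceed
    return (f"Hemoglobin {order.hemoglobin_g_dl:.1f} g/dL exceeds the "
            f"{threshold:.1f} g/dL threshold. Document a clinical indication to proceed.")

# Example: ordering at a hemoglobin of 8.4 g/dL without an indication fires the alert.
print(best_practice_alert(TransfusionOrder(hemoglobin_g_dl=8.4, diagnosis="pneumonia")))
```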

Despite the clear improvement in appropriate use, the team wanted to understand why 25% of providers still ordered blood products for patients with a hemoglobin level greater than 8 g/dL even after the BPA fired, and whether additional interventions could yield further improvements. Through their study, the researchers documented several reasons for the continued ordering. In some cases, the system failed to properly capture actual or potential bleeding as an indication. In other cases, the ordering reflected a lack of consensus on the guidelines in fields like hematology and oncology.


One of the most intriguing reasons, though, was that residents often did the ordering at the behest of an attending who might have never seen the BPA.

“It’s not actually reaching the audience making the decision; it might be reaching the audience that’s just carrying out the order,” Dr. Shieh says.

The insight, she says, may provide an opportunity to talk with attending physicians who may not have completely bought into the guidelines and to involve the entire team in the decision-making process.

Hospitalists, she says, can play a vital role in guideline development and implementation, especially for strategies that include BPAs.

“I think they’re the perfect group to help use this technology wisely because they are at the front lines taking care of patients so they’ll know the best workflow of when these alerts fire and maybe which ones happen the most often,” Dr. Shieh says. “I think this is a fantastic opportunity to get more hospitalists involved in designing these alerts and collaborating with the IT folks.”

Even with widespread buy-in from providers, guidelines may not reach their full potential without careful consideration of patients’ values and concerns. Experts say joint deliberations and discussions are especially important for guidelines that are complicated or controversial or that carry potential risks that must be weighed against the benefits.

Some of the conversations are easy, with well-defined risks and benefits and clear patient preferences, but others must traverse vast tracts of gray area. Fortunately, Dr. Feldman says, more tools also are becoming available for this kind of shared decision making. Some use pictorial representations to help patients understand the potential outcomes of alternative courses of action or inaction.

“Sometimes, that pictorial representation is worth the 1,000 words that we wouldn’t be able to adequately describe otherwise,” he says.

Similarly, Cincinnati Children’s has developed tools to help ease the shared decision-making process.

“We look where there’s equivocal evidence or no evidence and have developed tools that help the clinician have that conversation with the family and then have them informed enough that they can actually weigh in on what they want,” Gerhardt says. One end product is a card or trifold pamphlet that might help parents understand the benefits and side effects of alternate strategies.

“Typically, in medicine, we’re used to telling people what needs to be done,” she says. “So shared decision making is kind of a different thing for clinicians to engage in.” TH


Bryn Nelson, PhD, is a freelance writer in Seattle.

References

  1. Valle CW, Binns HJ, Quadri-Sheriff M, Benuck I, Patel A. Physicians’ lack of adherence to National Heart, Lung, and Blood Institute guidelines for pediatric lipid screening. Clin Pediatr. 2015;54(12):1200-1205.
  2. Maynard G, Jenkins IH, Merli GJ. Venous thromboembolism prevention guidelines for medical inpatients: mind the (implementation) gap. J Hosp Med. 2013;8(10):582-588.
  3. Mehta RH, Chen AY, Alexander KP, Ohman EM, Roe MT, Peterson ED. Doing the right things and doing them the right way: association between hospital guideline adherence, dosing safety, and outcomes among patients with acute coronary syndrome. Circulation. 2015;131(11):980-987.
  4. GRADE Working Group. Grading quality of evidence and strength of recommendations. BMJ. 2004;328:1490.
  5. Andrews JC, Schünemann HJ, Oxman AD, et al. GRADE guidelines: 15. Going from evidence to recommendation—determinants of a recommendation’s direction and strength. J Clin Epidemiol. 2013;66(7):726-735.
  6. Chen JH, Fang DZ, Tim Goodnough L, Evans KH, Lee Porter M, Shieh L. Why providers transfuse blood products outside recommended guidelines in spite of integrated electronic best practice alerts. J Hosp Med. 2015;10(1):1-7.

How to Gauge Guidelines

For clinical guidelines to be truly trustworthy, Gordon Guyatt, MD, MSc, FRCPC, distinguished professor of medicine and clinical epidemiology at McMaster University in Hamilton, Ontario, says that they should meet several criteria:

  • They should adhere to an evidence-based process for gathering and summarizing the evidence and present that evidence in ways doctors can understand.
  • They should rate the overall evidence used in their deliberations and distinguish between strong and weak recommendations.
  • They should recognize that recommendations are value- and preference-sensitive, make their own judgments explicit, and seek out available evidence about patients’ own values and preferences.
  • They should be clear about how they’re dealing with conflicts of interest.

—Bryn Nelson, PhD


New Tools of the Trade for Crafting Clinical Guidelines

The well-known GRADE system and similar tools such as Levels of Evidence and Grades of Recommendation have helped guideline writers for years, particularly in evaluating bodies of medical literature and the strength of the studies’ conclusions. Cincinnati Children’s Hospital Medical Center uses a similar strength-of-evidence pyramid to gauge the relative reliability of data: physician expertise and practice at the base, a retrospective or cohort study at a higher level, and a systematic review of numerous randomized controlled trials at the pinnacle.

Not every clinician has been taught how to appraise articles, however. Accordingly, Cincinnati Children’s James M. Anderson Center for Health Systems Excellence has developed another system called LEGEND (Let Evidence Guide Every New Decision) to help guideline developers know what to look for when reading a study. The system’s analysis boils down to three main questions: Is it valid? What are the results? And are they applicable to my population?

“If you want to know whether the study that you’re reading is something that should prompt you to change practice, you want to know if the study is a good one,” says Wendy Gerhardt, MSN, the hospital center’s director of evidence-based decision making.

In fact, the hospital has developed tools to assist in nearly every step of the guideline-crafting process. The tools help clinicians learn how to read studies, develop an evidence-based guideline, understand whether a guideline is solid, know where separate recommendations agree and differ, and implement new guidelines into regular practice.

One tool called REACH (Rapid Evidence Adoption to improve Child Health) uses quality improvement consultants and multidisciplinary groups to “translate evidence into point-of-care decision making by clinicians, families and patients,” according to its website. The process takes about 120 days and can result in decision aids such as prepopulated electronic order sets that default to evidence-based suggestions for, say, bronchiolitis inhalation therapies.

“It’s really helpful when you’re working in an academic center and the residents are the ones writing the orders,” says Gerhardt. “So it defaults to the right thing, and they have to actually think about not doing it that way.”

Often, it’s not enough merely to give doctors the link to a new guideline.

“If you can pull up an order set that already has the evidence embedded in it, that’s a little more compelling,” she says. “You kind of have to put the evidence at their point of care instead of in a document. And that’s what, in my mind, makes it real.”

At Cincinnati Children’s, she and her colleagues also have taught doctors how to use PubMed to seek out systematic reviews if they have a question. They have rolling computers, too: Medical librarians sometimes go on rounds with clinicians to help with on-the-spot literature searches.

“It’s however you can make it easier for them to use,” Gerhardt says. “By and large, most people just want to practice, so you have to put that evidence in their way.”

Bryn Nelson, PhD
