Growing evidence that certain interventions can significantly lower morbidity and mortality (eg, mammography, immunizations, β-blockers for acute myocardial infarction) has focused attention on the challenges of implementation: translating this evidence into practice. Clinicians do not always offer recommended services to patients, and patients do not always readily accept them. Approximately 50% of smokers report that their physician has never advised them to quit.1 One out of 4 patients with an acute myocardial infarction is discharged without a prescription for a β-blocker.2 Only 40% of patients with atrial fibrillation receive warfarin.3 Conversely, tests that are known to be ineffective, such as routine chest radiographs, urinalyses, and preoperative blood work, are ordered routinely.4
Tools for behavior change
Various programs have been developed to bridge the gap between what should be practiced and what is actually done, but few have been uniformly successful. Passive education, such as conferences or the publication of clinical practice guidelines, has been shown consistently to be ineffective.5 More active strategies to implement guidelines, such as educational outreach, feedback, reminder systems, and continuous quality improvement, offer greater promise and have captured the interest of physicians, health systems, hospitals, managed care plans, and quality improvement organizations.6 To date, however, research on whether these methods produce meaningful change in practice patterns or patient outcomes has yielded mixed results.
One such study appears in this issue of the Journal. McBride and colleagues7 compared 4 strategies for improving preventive cardiology services at 45 Midwestern primary care practices. The control practices attended an educational conference and received a kit of materials. The other 3 groups attended a similar conference but also received a practice consultation, an on-site prevention coordinator, or both. The study used surrogate measures—provider behaviors rather than health outcomes such as lipid levels and blood pressure—to gauge effectiveness, and the results were positive. Patient history questionnaires, problem lists, and flow sheets were used more often by the combined intervention group than by the conference-only control group. Other behaviors, such as documentation of risk factor screening and management in the medical record, improved across all intervention groups. The authors apparently did not examine whether patients in the intervention groups had improved outcomes, such as better control of risk factors or a lower incidence of heart disease. With only 10 or 11 practices per group, the study probably lacked the statistical power and duration of follow-up needed to make such comparisons.
The need for discretion in quality improvement
How should we apply these results? When a study provides evidence that a particular strategy works—in this case, practice consultations and on-site coordinators—should physicians immediately adopt that approach in their own practices? Although McBride and colleagues achieved full participation in their project, it is doubtful that practices nationwide would have the necessary resources. The practice consultation included 3 meetings and 2 follow-up visits, and the on-site coordinator devoted 4.5 hours per week per physician. Moreover, this is only one of many studies reporting a promising success in quality improvement. No clinician could adopt the full range of strategies that have been advocated by researchers and health systems. Even if that were possible for one disease, it would be impossible for all of the conditions encountered in primary care for which quality improvement is needed, such as preventive care, heart disease, diabetes, asthma, and depression.
Practices face trade-offs when considering quality improvement.8 Although there are exceptions—improved systems of care for one disease can have spillover benefits for other conditions—quality initiatives in one area tend to draw time, resources, and motivation away from others. Before reconfiguring practice operations, the astute clinician must judge not only whether available resources can support the effort, but also whether the strategy offers the best use of those resources. In the case of the study by McBride and colleagues, physicians might ask whether the proven benefit—improved chart records—justifies a change when data on patient outcomes are lacking. Even if improved health outcomes are likely, they should judge whether applying the same effort to another aspect of care, perhaps for another disease, would help patients even more.
In weighing these choices, physicians should not rely on the results of a single study. It is best to step back and examine the evidence as a whole, reviewing the results of multiple studies of the same strategy. For example, a Cochrane group analyzed 18 systematic reviews of various methods for disseminating and implementing evidence in practice. Although some interventions were consistently effective (eg, educational outreach, reminders, multifaceted interventions, interactive education), others were rarely or never effective (eg, educational materials, didactic teaching) or inconsistently effective (eg, audit, feedback, local opinion leaders, local adaptation, patient-mediated interventions).9 Similarly, a recent review of 58 studies of strategies for improving preventive care found that most interventions were effective in some studies but not others.10
An analytic framework for behavior change
It makes sense that a particular strategy would not succeed in all cases. The reason that clinicians do not adopt new behaviors, or abandon old ones, is often specific to the disease or procedure in question, local practice conditions, and the personal barriers that each physician faces.11 A single, common solution cannot be expected to work everywhere. Most physicians undergo stages of change in adopting new behaviors:
- They must have knowledge (information). They must know about the new data or new practice guidelines that advocate a change in practice behavior. Keeping abreast of this knowledge, with its exponential expansion, is medicine’s great challenge. However, as so many studies have shown, information by itself is not enough.12
- Knowledge must foster a change in attitudes. Clinicians must accept the validity of the evidence and its applicability to their practices and their patients. There must be “buy in” for new practice guidelines and acceptance that the recommendations represent good medical care; have been embraced by peers, local consultants, opinion leaders, or one’s specialty; and are acceptable to patients.
- Even if physicians know about and accept the behavior, they must have the ability to implement it. Enthusiasm by itself is insufficient if there is a lack of time, resources, staff, training, or equipment. Physicians must have access to eligible patients, and those patients must be able and willing to do their part. (Like clinicians, patients may not comply because of barriers to knowledge, attitudes, ability, and reinforcement.) Finally, constraints imposed by office or clinic operations, practice leadership, information systems, regulations, and insurance coverage can impede change.
- Like all people, physicians need reinforcement to maintain behaviors. It is human nature to forget, overlook, or lose interest over time. That 36% of physicians do not notify patients of abnormal test results is not because they doubt the importance of that type of communication.13 Even the most committed physicians need reminder systems to remember when to implement guidelines, tracking systems to identify patients who need follow-up, and encouragement from practice leaders, systems of care, and patients that their efforts are appreciated.
Putting the framework to use
This 4-part framework helps to organize the menu of implementation tools that are available to physicians. Some tools focus on providing knowledge, such as conferences, journal articles, practice guidelines, the Cochrane database,14 and information mastery programs to help clinicians access useful data.15 Some focus on attitudes, such as local adaptation of guidelines,16 academic detailing,17 endorsements by opinion leaders and specialty societies,18 and feedback from colleagues and patients.19 Some address ability, such as scheduling and staff changes,20 revised delivery systems,21 skill building, teamwork,22 information technology,23 comprehensive disease24 or total quality25 management, and community support. Some provide reinforcement, such as computerized or manual reminder systems, flow sheets, standing orders, provider incentives, and feedback reports.26
Knowing that, in general, the 4 steps occur in sequence helps clarify why so many methods of changing behavior appear successful in some settings but not in others. An intervention that delivers information is not helpful if clinicians already know the facts but lack ability. If a family practice does poorly in administering polio vaccine, the problem is less likely to be solved by circulating a photocopy of the Advisory Committee on Immunization Practices' immunization schedule—the physicians already know the guidelines and the importance of vaccination—than by implementing a tracking and reminder system to flag eligible patients, the most effective way to boost immunization rates.27 Conversely, reinforcement tools, such as reminder systems and standing orders, are unlikely to succeed if clinicians are at an earlier stage of change (eg, they are unaware of or question the data). Adding a space for exercise counseling on flow sheets or preventive care forms accomplishes little if physicians are fundamentally resistant because they doubt this type of counseling helps patients. A knowledge intervention is precisely what is needed when physicians withhold warfarin for atrial fibrillation because of the mistaken belief that bleeding complications outweigh benefits.28

Citing an earlier description of this model,29 Cabana and colleagues30 expanded its structure to better organize published evidence on barriers to physician adherence to practice guidelines. In their model, barriers to knowledge include lack of awareness and familiarity with guidelines. Barriers to attitudes include lack of agreement with guidelines, lack of outcome expectancy, lack of self-efficacy, and lack of motivation. Barriers to behavior include factors related to patients (eg, patient expectations), the practice environment (eg, lack of time, resources), and the guidelines themselves (eg, conflicting recommendations). Of the 120 studies on barriers covered in their review, 58% examined only one type of barrier.
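The tracking and reminder systems invoked in the vaccine example are, at bottom, simple registry queries: list the eligible patients, check who is overdue, and generate a prompt. The sketch below is purely illustrative; the record fields, dates, and function name are hypothetical and are not drawn from the McBride study or from any immunization guideline.

```python
from datetime import date

# Hypothetical registry: one entry per patient eligible for a given service.
# Field names are invented for illustration; a real system would pull them
# from the practice's medical record or billing data.
registry = [
    {"name": "Patient A", "service": "polio vaccine, dose 4", "due": date(1999, 11, 1), "done": False},
    {"name": "Patient B", "service": "polio vaccine, dose 4", "due": date(2000, 6, 15), "done": False},
    {"name": "Patient C", "service": "polio vaccine, dose 4", "due": date(1999, 8, 20), "done": True},
]

def overdue_reminders(registry, today):
    """Return one reminder line for each patient whose service is due and not yet done."""
    reminders = []
    for patient in registry:
        if not patient["done"] and patient["due"] <= today:
            reminders.append(f"Flag {patient['name']}: {patient['service']} was due {patient['due']}")
    return reminders

for line in overdue_reminders(registry, today=date(2000, 3, 1)):
    print(line)
```

The code itself is trivial; the point is the category of tool it represents. It is a reinforcement aid, and it helps only when knowledge, attitudes, and ability are already in place.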
A diagnostic approach
Too many advocates of quality improvement champion their method as the only way to improve care. Hospitals, practices, and health systems often seize on a particular approach for improving quality, perhaps because it is easier to organize programs around a single theme. But there are no “magic bullets.”31 Seasoned clinicians know this; they understand that proper treatment begins with a good diagnosis. The first step is to “find the lesion,” to determine precisely why the guideline is not followed. Knowing whether the barriers involve knowledge, attitude, ability, or reinforcement is the starting point for designing a targeted solution. The alternative is quality improvement by reflex. No physician can improve everything at once, and this is especially true for primary care physicians because of the spectrum of diseases for which they care. Their special need to set priorities makes a rational diagnostic approach to quality improvement essential.
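The "find the lesion" step can be pictured as a lookup from the diagnosed barrier to the menu of tools organized earlier under the 4-part framework. The sketch below simply restates that mapping in code; the category names and tool lists come from this editorial, while the dictionary and function names are illustrative only.

```python
# Map each diagnosed barrier to the class of implementation tools most likely to help,
# following the 4-part framework: knowledge, attitudes, ability, reinforcement.
TOOLS_BY_BARRIER = {
    "knowledge": ["conferences", "journal articles", "practice guidelines",
                  "Cochrane database", "information mastery programs"],
    "attitudes": ["local adaptation of guidelines", "academic detailing",
                  "endorsement by opinion leaders and specialty societies",
                  "feedback from colleagues and patients"],
    "ability": ["scheduling and staff changes", "revised delivery systems",
                "skill building and teamwork", "information technology",
                "disease or total quality management", "community support"],
    "reinforcement": ["reminder systems", "flow sheets", "standing orders",
                      "provider incentives", "feedback reports"],
}

def suggest_tools(diagnosed_barrier):
    """Return candidate tools for the diagnosed barrier, not a one-size-fits-all program."""
    return TOOLS_BY_BARRIER.get(diagnosed_barrier, [])

# Example: a practice that knows and accepts a guideline but keeps forgetting to apply it
# has a reinforcement problem, so reminder-type tools are the targeted response.
print(suggest_tools("reinforcement"))
```

Writing it down this way adds nothing clinically; it only makes the sequence explicit: the tool is chosen after the barrier is identified, not before.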
Seeking outcomes that matter
The study by McBride and colleagues reminds us that the utility of outcomes research often depends on the outcome measures. An effect on surrogate or intermediate end points, such as better use of medical records, is not proof of benefit to patients unless data suggest that a change in such measures improves health.32 Smoking illustrates the ideal surrogate measure because of the strong evidence linking it to disease. Too many researchers rely on less-validated surrogate measures, either because more distal health outcomes are hard to quantify or because statistical power concerns would demand too large or lengthy a study. Using surrogates is easier, but it has yielded a profusion of outcome studies that fail to tell us whether patients benefit in ways that matter. It would be better to do fewer studies and conserve the resources for definitive investigations that give patient-centered outcomes the attention they deserve.
References
1. Frank E, Winkleby MA, Altman DG, Rockhill B, Fortmann SP. Predictors of physician’s smoking cessation advice. JAMA 1991;266:3139-44.
2. Krumholz HM, Radford MJ, Wang Y, et al. National use and effectiveness of β-blockers for the treatment of elderly patients after acute myocardial infarction: National Cooperative Cardiovascular Project. JAMA 1998;280:623-9.
3. Stafford RS, Singer DE. Recent national patterns of warfarin use in atrial fibrillation. Circulation 1998;97:1231-3.
4. Allison JG, Bromley HR. Unnecessary preoperative investigations: evaluation and cost analysis. Am Surg 1996;62:686-9.
5. Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance: a systematic review of the effect of continuing medical education strategies. JAMA 1995;274:700-5.
6. Chassin MR, Galvin RW, and the National Roundtable on Health Care Quality. The urgent need to improve health care quality. Institute of Medicine National Roundtable on Health Care Quality. JAMA 1998;280:1000-5.
7. McBride P, Underbakke G, Plane MB, et al. Improving practice prevention systems in primary care: the Health Education and Research Trial (HEART). J Fam Pract 2000;49:115-125.
8. Casalino LP. The unintended consequences of measuring quality on the quality of medical care. N Engl J Med 1999;341:1147-50.
9. Bero LA, Grilli R, Grimshaw JM, et al. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. BMJ 1998;317:465-8.
10. Hulscher MEJL, Wensing M, Grol RPTM, van der Weijden T, van Weel C. Interventions to improve the delivery of preventive services in primary care. Am J Public Health 1999;89:737-46.
11. Grol R. Beliefs and evidence in changing clinical practice. BMJ 1997;315:418-21.
12. Poses RM, Cebul RD, Wigton RS. You can lead a horse to water: improving physicians’ knowledge of probabilities may not affect their decisions. Med Decis Making 1995;15:65-75.
13. Boohaker EA, Ward RE, Uman JE, McCarthy BD. Patient notification and follow-up of abnormal test results: a physician survey. Arch Intern Med 1996;156:327-31.
14. Bero L, Rennie D. The Cochrane Collaboration: preparing, maintaining, and disseminating systematic reviews of the effects of health care. JAMA 1995;274:1935-8.
15. Slawson DC, Shaughnessy AF. Teaching information mastery: creating informed consumers of medical information. J Am Board Fam Pract 1999;12:444-9.
16. Brown JB, Shye D, McFarland B. The paradox of guideline implementation: how AHCPR’s depression guideline was adapted at Kaiser Permanente Northwest Region. J Qual Improv 1995;21:5-21.
17. Soumerai SB, Avorn J. Principles of educational outreach (‘academic detailing’) to improve clinical decision making. JAMA 1990;263:549-56.
18. Soumerai SB, McLaughlin TJ, Gurwitz JH, et al. Effect of local medical opinion leaders on quality of care for acute myocardial infarction: a randomized controlled trial. JAMA 1998;279:1358-63.
19. Mittman BS, Tonesk X, Jacobson PD. Implementing clinical practice guidelines: social influence strategies and behavior change. Qual Rev Bull 1992;18:413-22.
20. Dietrich AJ, Woodruff CB, Carney PA. Changing office routines to enhance preventive care: the preventive GAPS approach. Arch Fam Med 1994;3:176-83.
21. Berwick DM. A primer on leading the improvement of systems. BMJ 1996;312:619-22.
22. Dickey LL, Gemson DH, Carney P. Office system intervention supporting primary care-based health behavior change counseling. Am J Prev Med 1999;17:299-308.
23. Glasgow RE, McKay G, Boles SM, Vogt TM. Interactive computer technology, behavioral science, and family practice. J Fam Pract 1999;48:464-70.
24. Ellrodt G, Cook DJ, Lee J, Cho M, Hunt D, Weingarten S. Evidence-based disease management. JAMA 1997;278:1687-92.
25. Solberg LI, Kottke TE, Brekke ML. Will primary care clinics organize themselves to improve the delivery of preventive services? A randomized controlled trial. Prev Med 1998;27:623-31.
26. Corporation Interventions that increase the utilization of Medicare-funded preventive services for persons age 65 and older. Pub. No. HCFA-02151. Baltimore, Md: Health Care Financing Administration; 1999.
27. Task Force on Community Preventive Services. Vaccine-preventable diseases: improving vaccination coverage in children, adolescents, and adults. A report on recommendations from the Task Force on Community Preventive Services. MMWR 1999;48(RR-8):1-15.
28. Monette J, Gurwitz JH, Rochon PA, Avorn J. Physician attitudes concerning warfarin for stroke prevention in atrial fibrillation: results of a survey of long-term care practitioners. J Am Geriatr Soc 1997;45:1060-5.
29. Woolf SH. Practice guidelines: a new reality in medicine. III: impact on patient care. Arch Intern Med 1993;153:2646-55.
30. Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999;282:1458-65.
31. Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. Can Med Assoc J 1995;153:1423-31.
32. Deyo RA. Using outcomes to improve quality of research and quality of care. J Am Board Fam Pract 1998;11:465-72.