Take Research Initiative

“Hospitalists should consider the hospital their research laboratory,” says Mark V. Williams, MD, FACP, professor of medicine, director of the Emory Hospital Medicine Unit, and editor-in-chief of the Journal of Hospital Medicine. “Just as research scientists consider beakers, pipettes, and spectrometers as some of their research tools, we can consider computerized information databases, chart review, and QI projects the tools we use to figure out how we can best deliver care to patients.”

But what are the best ways for hospitalists to conduct research in their institutions? The challenge, says Jeffrey L. Schnipper, MD, MPH, director of clinical research, Brigham and Women’s/Faulkner Hospitalist Service (Boston), and associate physician, Division of General Medicine at Brigham and Women’s Hospital, is that hospitalists are tied to processes—not single interventions.

“The goal of my research,” says Dr. Schnipper, “is to move beyond ‘I got it to work at my hospital’ to ‘this works, in general, at any hospital.’ The vast majority of my projects are related to inpatient quality improvement. Unfortunately, that is not a ‘blue pill.’ If I prove that my quality improvement method improves diabetes control, you still have a lot of work to do to implement it at your hospital.”

How can hospitalists parlay their natural inclination for improving systems into research that is publishable and generalizable? Healthcare researchers interviewed for this article maintain that savvy use of tools from quality improvement research, combined with traditional scientific methods, can help busy hospitalists streamline their approach to identifying, designing, and conducting valid research projects with publishable results.

Missions Interlaced

Those interviewed for this article agree that the push for quality improvement dovetails with hospitalists’ mission and approach to patient care. “Hospitalists are very systems-oriented,” says Dr. Schnipper. “They are trying to improve not just the care of their individual patients, but the way the whole system works and runs. Frankly, in any environment in which we work, we have a vested interest in making it run better.”

Hospitalists provide a valuable link in the quality improvement chain, agrees Brent James, MD, a leading QI researcher and executive director of the Institute for Healthcare Delivery Research at Intermountain Healthcare, an integrated delivery system serving 1.2 million patients in Utah. “Any time you have a group of people who are trying to deliver coordinated care together, and who rely heavily on being able to support each other as a team, this is just an absolute natural model [for conducting QI studies],” he says.

Dr. James was a member of the Institute of Medicine’s National Roundtable on Quality and its subsequent Committee on Quality of Health Care in America that published Crossing the Quality Chasm in 2001. He also just finished a three-year project with the Hastings Center, Garrison, N.Y., funded by an Agency for Healthcare Research and Quality (AHRQ) grant, to examine the ethics of quality improvement.

“Given the quality chasm,” he says, “there is an ethical obligation for physicians, nurses and health professionals to do quality improvement. It surely shouldn’t be a choice—it’s a way of rigorously learning from your own practice.”

Dr. Williams explains that hospitals will increasingly undertake quality improvement initiatives, not just to improve care delivery in their facilities, but in response to pay-for-performance requirements being set up by the federal government and insurers.

“I strongly believe that hospitalists are going to be seen by many hospital administrators as not only collaborators but the leaders of these initiatives,” says Dr. Williams. “And those initiatives can be a form of research if conducted properly. It does require having sufficient resources from the hospital. I don’t think it’s something you can do on Saturday and Sunday nights.”

Steps to the Research

Dr. James has collaborated with Theodore Speroff, PhD, of the Veterans Affairs Health Services Research Center in Nashville, Tenn., and others on many articles delineating the use of PDSA (plan, do, study, act) methodology—also known as rapid cycle of change methodology—to improve the rigor of quality improvement initiatives.1,2

In a nutshell, says Dr. James, the PDSA model consists of several important steps encompassing a study cycle:

  • Establish the key clinical processes at your institution that warrant study, and build an evidence-based best-practice guideline for each. At Intermountain Healthcare, for instance, 10% of the system’s processes accounted for 90% of clinicians’ work; hospitalists pick the most prominent care process (DVT prophylaxis, for example) as the starting point (see the sketch after this list);
  • Build the best-practice guidelines into a workflow format (standing order sets, data, and decision support) so that they directly support care at the workflow level;
  • Build outcomes data comprising three major sub-categories: medical outcomes, service outcomes, and cost outcomes. Each category is further divided into smaller units; medical outcomes, for instance, include indications for appropriateness, condition-specific complication rates, and achievement of therapeutic goals;
  • Use electronic medical records to develop a system of decision support that ties together best practices, workflow, and outcomes tracking; and
  • Build educational materials for patients and for the team of professionals delivering the care.
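
To make the first step concrete, here is a minimal sketch, in Python, of the Pareto-style cut Dr. James describes. The process names, case volumes, and the key_processes function are invented for illustration; they are not drawn from Intermountain’s actual systems:

    # Hypothetical illustration: find the smallest set of care processes
    # that accounts for ~90% of clinical work. All data here is invented.
    def key_processes(volumes, coverage=0.90):
        """Return the highest-volume processes until `coverage` of total work is reached."""
        total = sum(volumes.values())
        selected, covered = [], 0.0
        for name, volume in sorted(volumes.items(), key=lambda kv: kv[1], reverse=True):
            selected.append(name)
            covered += volume / total
            if covered >= coverage:
                break
        return selected

    annual_case_volumes = {
        "DVT prophylaxis": 4200,
        "community-acquired pneumonia": 3100,
        "heart failure": 2900,
        "glycemic management": 2600,
        "COPD exacerbation": 900,
        "rare metabolic disorders": 40,
    }
    # Prints the handful of processes worth standardizing first.
    print(key_processes(annual_case_volumes))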

The beauty of rapid cycle of change methodology, says Dr. James, is that it quickly allows teams to correctly identify worthwhile research projects. The team asks: What is our aim or target? How will we know if the target changes or improves (implying a parallel qualitative or quantitative measurement system)? And finally, what might we change to make things work better? “Rapid cycle” connotes a series of PDSA cycles performed one after another in the context of a measurement system.
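
The cycle itself is simple enough to express in code. Below is a minimal, hypothetical sketch of a rapid-cycle PDSA loop run against a measurement system; every name and number is invented for illustration and is not part of any published QI toolkit:

    # Hypothetical sketch of rapid-cycle PDSA: repeated plan-do-study-act
    # iterations, each evaluated against the same measurement system.
    def pdsa_cycles(measure, propose_change, apply_change, target, max_cycles=10):
        """Run PDSA cycles until the measured rate meets the target."""
        rate = measure()
        for cycle in range(1, max_cycles + 1):
            change = propose_change()          # Plan: what might make things better?
            apply_change(change)               # Do: implement on a small scale
            rate = measure()                   # Study: did the target measure move?
            print(f"cycle {cycle}: tried {change!r}, measure now {rate:.0%}")
            if rate >= target:                 # Act: adopt and stop, or keep iterating
                break
        return rate

    # Toy demo: each tried change nudges a DVT-prophylaxis compliance rate upward.
    state = {"rate": 0.60}
    changes = iter(["standing order set", "pharmacist review", "admission checklist"])
    pdsa_cycles(
        measure=lambda: state["rate"],
        propose_change=lambda: next(changes),
        apply_change=lambda c: state.update(rate=state["rate"] + 0.12),
        target=0.90,
    )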

Increase the Rigor of Studies

Dr. Schnipper believes that continuous quality improvement methods give researchers a toolkit for conducting successful interventional studies. But using quality improvement methods (e.g., rapid-cycle PDSA methodology) alone may yield less externally valid results. Using QI methods alone, he explains, a researcher might continuously change an intervention (for glucose management, say) while watching results improve over time. That may be the most effective way to improve glucose control in one institution, but it renders the results “less generalizable to any other institution. It’s never really a before/after study, much less a concurrent randomized controlled trial,” he says. “Many people believe that if you want to conduct research, you have to ‘hold the intervention still’ for at least a certain amount of time so that it’s describable to other people. You may also decide, in the name of generalizability, not to maximally customize your QI intervention to your institution.”

Dr. Schnipper cites a recent Annals of Internal Medicine study that used a combined methodology. The study, by Fine and colleagues at the University of Pittsburgh in conjunction with the Veterans Affairs Center for Health Equity Research and Promotion, compared three intervention strategies of increasing intensity (low, moderate, and high) for improving pneumonia care in emergency departments and assessed performance in the institutions assigned to each strategy.3 The high-intensity arm used a continuous quality improvement method, allowing each institution to design the intervention that worked best for it. “It was encouraging to see Annals publish an article of this type,” says Dr. Schnipper. “But the question remains: Is this the best way to publish research, such that it’s most useful for other hospitalists who want to improve care at their institutions? Do you include a 20-page online appendix so other people can see exactly what you did?”

QI methodology, Dr. James concedes, is “inherently an observational study design in the hierarchy of evidence because of the way data is collected.” He maintains that researchers can increase the reliability of quality improvement initiatives “by incorporating prospective non-randomized controlled trial designs, or quasi-experiments, the pinnacle of observational study designs. Staggered implementation, risk adjustment, and case matching can bring a quasi-experimental study design within a hair’s breadth of the evidence reliability of a full randomized controlled trial.”
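
Staggered implementation is often realized as a stepped-wedge design: units cross over from control to intervention at different times, so every period after the first has concurrent controls. Here is a minimal sketch with hypothetical ward names; it illustrates the general design, not Dr. James’s own tooling:

    # Hypothetical stepped-wedge schedule: each unit crosses from control (0)
    # to intervention (1) in a different period, giving concurrent controls.
    def stepped_wedge(units, periods):
        """Map each unit to a 0/1 exposure vector; unit i crosses over in period i+1."""
        schedule = {}
        for i, unit in enumerate(units):
            crossover = i + 1  # period 0 is an all-control baseline
            schedule[unit] = [1 if p >= crossover else 0 for p in range(periods)]
        return schedule

    for unit, exposures in stepped_wedge(["ward A", "ward B", "ward C"], 4).items():
        print(unit, exposures)
    # ward A [0, 1, 1, 1]
    # ward B [0, 0, 1, 1]
    # ward C [0, 0, 0, 1]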

Once routine care processes are standardized at an institution, other opportunities for controlled studies appear. Dr. James cites work done at LDS Hospital, Salt Lake City, Utah, by Allen Morris to produce a best-practice guideline for treating acute respiratory distress syndrome (ARDS). Disseminated via ARDSNet (www.ardsnet.org/clinicalnetwork/; Dublin, Calif.), a national research collaborative, these guidelines are now routine care for ARDS at 16 major academic centers. Because routine care is standardized, says Dr. James, that care—with Institutional Review Board (IRB) approval and oversight—can serve as the control arm for comparing new interventions in additional clinical studies. And because researchers do not have to assemble a new control arm, the operational overhead of new scientific trials is reduced.

“There are definitely ways in which the two fields [quality improvement and rigorous scientific research] make each other better,” says Dr. Schnipper. For his study of glycemic management of diabetic patients in a non-ICU setting (at press time scheduled to appear in a forthcoming issue of the Journal of Hospital Medicine), Dr. Schnipper and his team conducted rigorous prospective data collection, identifying every diabetic patient on the general medicine service at the time of admission. Using the APACHE III score, the team then assessed each patient’s severity of disease, a known confounder of glucose control in hospitalized diabetics. They conducted a detailed chart review to assess the quality of insulin orders for those patients. Finally, they used a novel statistical technique, marginal structural models, to remove the confounding by indication that occurs when hyperglycemia prompts more intensive insulin therapy. The analysis showed that better-quality insulin orders resulted in better glucose control.
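
Marginal structural models are typically fit with inverse-probability-of-treatment weighting: first model each patient’s probability of receiving intensive treatment given the confounders, then weight the outcome analysis by the inverse of that probability so that, in the weighted sample, treatment is no longer confounded by indication. The sketch below uses simulated data and standard statsmodels calls; it illustrates the general technique, not the study team’s actual code or variables:

    # Hypothetical sketch of inverse-probability weighting, the usual way to
    # fit a marginal structural model. All data is simulated.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    severity = rng.normal(size=n)                      # confounder (e.g., a severity score)
    glucose = 150 + 20 * severity + rng.normal(scale=10, size=n)
    # Confounding by indication: sicker, more hyperglycemic patients
    # are more likely to receive intensive insulin therapy.
    p_treat = 1 / (1 + np.exp(-(0.02 * (glucose - 150) + 0.5 * severity)))
    treated = rng.binomial(1, p_treat)
    glucose_next = glucose - 25 * treated + 5 * severity + rng.normal(scale=10, size=n)

    # Step 1: model treatment given confounders; form stabilized weights.
    X = sm.add_constant(np.column_stack([severity, glucose]))
    ps = sm.Logit(treated, X).fit(disp=0).predict(X)
    w = np.where(treated == 1, treated.mean() / ps, (1 - treated.mean()) / (1 - ps))

    # Step 2: weighted outcome model; the treatment coefficient estimates
    # the effect of intensive therapy on subsequent glucose.
    msm = sm.WLS(glucose_next, sm.add_constant(treated.astype(float)), weights=w).fit()
    print(msm.params)  # treatment coefficient should recover roughly -25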

“There’s a lot to be said for designing this research so that it is maximally useful for its consumers—hospitalists and others—who want to improve care in their own hospitals. I think we need to move toward multi-center quality improvement studies. If you can get [an intervention] to work at 10 hospitals, then you’ve gone a long way to say this works, in general. As long as you can answer that question—is there knowledge to be gained—then it’s worth doing a study well, with good methods, and it’s worth publishing.”

In Academia Alone?

Dr. Williams hopes the new emphasis on quality improvement, evidenced in such publications as the AHRQ’s August 2004 technical review, “Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies,” can become a springboard for new areas of research.4 He reports that the Journal of Hospital Medicine has already received article submissions detailing quality improvement initiatives. “We would love to see more,” he says.

It may not always be possible to clear time for the additional duties of conducting research. Community-based hospitalists do not usually enjoy the degree of funding and research support infrastructure found in the academic setting. SHM’s “Authoritative Source on the State of the Hospital Medicine Movement” reveals that the majority of hospitalists involved in research are affiliated with universities and medical schools.5 Dr. Williams admits that fitting in research projects can be a challenge for busy hospitalists.

“I don’t think it’s something you can just do on nights and weekends. The only way, honestly, that hospitalists can fit research into what they’re doing is if [research] becomes part of their job description,” says Dr. Williams. “And I think the appropriate avenue is through quality improvement initiatives.”

His advice to young hospitalists who want to undertake research projects? Identify existing resources at their institution and find out how they can collaborate with other members of the healthcare team, including nurses and pharmacists.

Community-based hospitalists will most likely benefit, surmises Dr. Schnipper, from emerging initiatives for public reporting and pay for performance, one offshoot of which will be more useful data sets. “I think they will probably do more collaborative research. Community hospitalists may not have fellowship training in research and protected time to become independent investigators with federal funding,” he says. “What they do have is incredible clinical insight and exact knowledge of the problems in their hospital. I would love to see more academic-community partnerships, where we could do studies in real-world hospitals, not just my ivory tower. Then we could get some really good, generalizable, multi-center research, which would make everybody happy.” TH

Gretchen Henkel writes regularly for The Hospitalist.

References

  1. Speroff T, James BC, Nelson EC, et al. Guidelines for appraisal and publication of PDSA quality improvement. Qual Manag Health Care. 2004;13(1):33-39.
  2. Speroff T, O’Connor GT. Study designs for PDSA quality improvement research. Qual Manag Health Care. 2004;13(1):17-32.
  3. Yealy DM, Auble TE, Stone RA, et al. Effect of increasing the intensity of implementing pneumonia guidelines: a randomized, controlled trial. Ann Intern Med. 2005;143(12):881-894.
  4. Shojania KG, McDonald KM, Wachter RM, et al. Closing the quality gap: a critical analysis of quality improvement strategies. Vol. 1. Agency for Healthcare Research and Quality Technical Review; August 2004. AHRQ Publication No. 04-0051-1. Available at: www.ahrq.gov/downloads/pub/evidence/pdf/qualgap2/qualgap2.pdf. Last accessed May 30, 2006.
  5. The Society of Hospital Medicine 2005-2006 Survey: The Authoritative Source on the State of the Hospital Medicine Movement. Philadelphia: Society of Hospital Medicine; May 2006.