The plan-do-study-act cycle and data display


This month’s column is the second in a series of three articles written by a group from Toronto and Houston. The series imagined that a community of gastroenterologists set out to improve the adenoma detection rates of physicians in their practice. The first article described the design and launch of the project. This month, Dr. Bollegala and her colleagues explain the plan-do-study-act (PDSA) cycle of improvement within a small practice. The PDSA cycle is a fundamental component of successful quality improvement initiatives; it allows a group to systematically analyze what works and what doesn’t. The focus of this article is squarely on small community practices (still the majority of gastrointestinal practices nationally), so its relevance is high. PDSA cycles are small, narrowly focused projects that can be accomplished by all as we strive to improve our care of the patients we serve. Next month, we will learn how to embed a quality initiative within our practices so sustained improvement can be seen.



John I. Allen, MD, MBA, AGAF

Editor in Chief

 

Article 1 of our series focused on the emergence of the adenoma detection rate (ADR) as a quality indicator for colonoscopy-based colorectal cancer screening programs.1 A target ADR of 25% has been established by several national gastroenterology societies and serves as a focus area for those seeking to develop quality improvement (QI) initiatives aimed at reducing the interval incidence of colorectal cancer.2 In this series, you are a community-based urban general gastroenterologist interested in improving your current group ADR of 19% to the established target of 25% for each individual endoscopist within the group over a 12-month period.

This article focuses on a clinician-friendly description of the plan-do-study-act (PDSA) cycle, a key construct within the Model for Improvement framework for QI initiatives. It also describes the importance and key elements of QI data reporting, including the run chart. All core concepts will be framed within the series example of the development of an institutional QI initiative for ADR improvement.
 

Plan-Do-Study-Act cycle

Conventional scientific research in health care generally is based on large-scale projects, performed over long periods of time, that produce aggregate data analyzed with summary statistics. In contrast, QI research built around the PDSA cycle is characterized by smaller-scale projects performed over shorter periods of time, with iterative protocols that accommodate local context and thereby optimize an intervention's chance of success. As such, the development, implementation, and continual modification of these projects require a conceptual and methodologic shift.

The PDSA cycle is characterized by four key steps. The first step is to plan. This step involves addressing the following questions: 1) what are we trying to accomplish? (aim); 2) how will we know that a change is an improvement? (measure); and 3) what changes can we make that will lead to improvement? (change). Additional considerations include ensuring that the stated goal is attainable and relevant and that the timeline is feasible. An important aspect of the plan stage is gaining an understanding of the current local context, the key participants and their roles, and the areas in which performance excels or is challenged. This understanding is critical to conceptually linking the identified problem with its proposed solution. Formulating a prediction of the intervention's impact enables subsequent learning and adaptation.

The second step is to do. This step involves execution of the identified plan over a specified period of time. It also involves rigorous qualitative and quantitative data collection, allowing the research team to assess change and document unexpected events. Identifying an implementation leader or champion to ensure protocol adherence, facilitate effective communication among team members, and coordinate accurate data collection can be critical for overall success.

The third step is to study. This step requires evaluating whether a change in the outcome measure has occurred, which intervention was successful, and whether an identified change is sustained over time. It also requires interpretation of change within the local context, specifically with respect to unintended consequences, unanticipated events, and the sustainability of any gains. To interpret findings appropriately, feedback from involved process members, endoscopists, and/or other stakeholder groups may be necessary. This feedback can be important for explaining the results of each cycle, identifying protocol modifications for future cycles, and optimizing the opportunity for success. Studying the data generated by a QI initiative requires clear and accurate data display and explicit rules for interpretation.

The fourth step is to act. This final step allows team members to reflect on the results generated and decide whether the same intervention should be continued, modified, or changed, thereby incorporating lessons learned from previous PDSA cycles (Figure 1).3

Figure 1 (AGA Institute)
Documentation of each PDSA cycle is an important component of the QI research process, allowing for reflection, knowledge capture, and learning that informs future cycles or initiatives.4 However, a systematic review by Taylor et al.4 reported an “inconsistent approach to the application and reporting of PDSA cycles and a lack of adherence to key principles of the method.” Fewer than 20% (14 of 73) of articles reported each PDSA cycle, and only 14% reported data continuously. Only 9% explicitly documented a theory-based prediction of the result for each cycle of change. As such, caution was advised in interpreting and implementing studies with inadequate PDSA conduct and/or reporting. The Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines propose a QI-specific publication framework,5,6 but no standardized criteria for the conduct or reporting of the PDSA framework currently exist. In addition, the PDSA cycle is limited by its reactive nature. It also may inadequately account for system and process complexity, which can lead to varying results for the same change over time.4 Finally, it does not clearly identify which intervention was most effective in achieving the target, which prevents simplification of the overall intervention strategy.

Despite these challenges, the PDSA framework allows for small-scale and fast-paced initiative testing that reduces patient and institutional risk while minimizing the commitment of resources.4,7 Successful cycles improve stakeholder confidence in the probability for success with larger-scale implementation.

In our series example, step 1 of the PDSA cycle, plan, can be described as follows.

Aim: increase the ADR of every endoscopist in the group to 25% over a 12-month period.

Measure: outcome – the proportion of endoscopists at your institution with an ADR greater than 25%; process – withdrawal time; balancing – staff satisfaction, patient satisfaction, and procedure time.

Change: successive cycles will introduce audible timers to ensure adequate withdrawal time, publication of an endoscopist-specific composite score, and training to improve inspection technique.8

In step 2 of the PDSA cycle, do, a physician member of the gastroenterology division incorporates QI into their job description and leads a change team charged with PDSA cycle 1. An administrative assistant calculates the endoscopist-specific ADRs for that month. Related events for this cycle, such as unexpected physician absences or delays in polyp histology reporting, are also documented.
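
To make the measurement in this step concrete, the sketch below shows one way the endoscopist-specific ADRs might be tallied from a month of procedure records. This is an illustration only: the record layout and the field names (`endoscopist`, `adenoma_found`) are assumptions for the example, not part of any published protocol.

```python
from collections import defaultdict

def monthly_adr(records):
    """Compute each endoscopist's ADR: the proportion of screening
    colonoscopies in which at least one adenoma was detected.

    `records` is an iterable of (endoscopist, adenoma_found) pairs,
    one pair per screening colonoscopy; the field names are illustrative.
    """
    performed = defaultdict(int)     # screening colonoscopies per endoscopist
    with_adenoma = defaultdict(int)  # procedures with >= 1 adenoma detected
    for endoscopist, adenoma_found in records:
        performed[endoscopist] += 1
        if adenoma_found:
            with_adenoma[endoscopist] += 1
    return {e: with_adenoma[e] / performed[e] for e in performed}

# Hypothetical month: two endoscopists, five screening procedures each.
records = [
    ("Dr. A", True), ("Dr. A", False), ("Dr. A", True), ("Dr. A", False), ("Dr. A", False),
    ("Dr. B", True), ("Dr. B", True), ("Dr. B", False), ("Dr. B", True), ("Dr. B", False),
]
print(monthly_adr(records))  # {'Dr. A': 0.4, 'Dr. B': 0.6}
```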

In step 3 of the PDSA cycle, study, the data generated will be represented on a run chart plotting the proportion of endoscopists with an ADR greater than 25% on the y-axis, and time (in monthly intervals) on the x-axis. This will be described in further detail in a later section.

In the final step of the PDSA cycle, act, the team decides, for each change tested, whether it should be continued, modified, or abandoned in subsequent cycles.

Displaying data

The data generated by multiple PDSA cycles must be documented, analyzed, and displayed accurately and succinctly. The run chart was developed as a simple technique for identifying nonrandom patterns (that is, signals), allowing QI researchers to determine the impact of each cycle of change and the stability of that change over a given time period.9 This approach often is contrasted with conventional statistical approaches that aggregate data and perform summary statistical comparisons at static time points. Instead, the run chart conveys the dynamic nature of PDSA-driven process change and the resulting outcome changes.

Correct interpretation of the presented data requires an understanding of common cause variation (CCV) and special cause variation (SCV). CCV occurs randomly and is present in all health care processes; it can never be eliminated completely. SCV, in contrast, is the result of external factors imposed on normal processes. For example, the introduction of audible timers within endoscopy rooms to ensure adequate withdrawal time may result in an increase in the ADR. The relatively stable ADRs measured in the preintervention and postintervention periods are each subject to CCV, whereas the postintervention increase in the ADR is the result of SCV.10
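
The distinction can be made concrete with a small simulation, a sketch under assumed numbers (a true group ADR of 19% before the timers and 26% after, both invented for illustration): the month-to-month scatter produced by binomial sampling is CCV, whereas the step change between the two levels is SCV.

```python
import random

random.seed(1)  # reproducible illustration

def observed_monthly_adr(true_rate, n_procedures=200):
    """Simulate one month's observed group ADR. The binomial scatter
    around the true rate is common cause variation (CCV)."""
    detected = sum(random.random() < true_rate for _ in range(n_procedures))
    return detected / n_procedures

# Assumed true rates: 19% before the audible timers, 26% after (invented).
pre = [observed_monthly_adr(0.19) for _ in range(6)]
post = [observed_monthly_adr(0.26) for _ in range(6)]

# The month-to-month wiggle within each list is CCV; the jump between
# the two levels reflects SCV (the timer intervention).
print([round(x, 2) for x in pre])
print([round(x, 2) for x in post])
```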

As shown in Figure 2, the x-axis shows the time scale and spans the entire duration of the intervention period, and the y-axis shows the outcome measure of interest. A horizontal line representing the median is shown,9 and a goal line also may be depicted. Annotations indicating the implementation of a change or other important events (such as unintended consequences or unexpected occurrences) may be added to facilitate interpretation.
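
As a sketch of how such a chart might be drawn in practice (the monthly values below are invented for illustration), a few lines of Python with matplotlib reproduce the elements just described: time on the x-axis, the outcome measure on the y-axis, a median line, a goal line, and an annotation marking a change.

```python
import matplotlib.pyplot as plt
from statistics import median

# Invented monthly values: proportion of endoscopists with an ADR > 25%.
months = list(range(1, 13))
values = [0.2, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.0]

fig, ax = plt.subplots()
ax.plot(months, values, marker="o")                      # the run chart itself
ax.axhline(median(values), linestyle="--", label="Median")
ax.axhline(1.0, linestyle=":", label="Goal (all endoscopists > 25%)")
ax.annotate("Audible timers introduced",                 # annotate a change
            xy=(3, values[2]), xytext=(3.5, 0.8),
            arrowprops={"arrowstyle": "->"})
ax.set_xlabel("Month")
ax.set_ylabel("Proportion of endoscopists with ADR > 25%")
ax.legend()
plt.show()
```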

Figure 2 (AGA Institute)

Specific rules, based on standard statistics, govern the objective interpretation of a run chart and allow differentiation between random and nonrandom (special cause) patterns of change. The sketch after the list below shows how the first three rules can be checked programmatically.

Shift: at least six consecutive data points all above or all below the median line are needed (points falling on the median line are skipped).9 To assess a shift appropriately, at least 10 data points are required.

Trend: at least five consecutive data points all increasing in value or all decreasing in value are needed (numerically equivalent points are skipped).9

Runs: a run is a series of consecutive data points on one side of the median.9 If the data points follow a random pattern, the number of runs should fall within an expected range; too few or too many runs indicate a higher probability of a nonrandom pattern.9,11

Astronomic point: a data point that is judged to be obviously different from the rest, prompting consideration of the events that led to it.9
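
As referenced above, the following is a minimal sketch of how the shift, trend, and runs rules might be checked in code, using the thresholds stated in the text (six consecutive points for a shift, five for a trend, with median-line and numerically equal points skipped). It is an illustration, not a validated statistical process control library.

```python
from statistics import median

def sides_of_median(values):
    """True/False for points above/below the median; points exactly on
    the median line are skipped, per the run chart rules."""
    med = median(values)
    return [v > med for v in values if v != med]

def count_runs(values):
    """A run is a series of consecutive points on one side of the median."""
    sides = sides_of_median(values)
    return (1 + sum(a != b for a, b in zip(sides, sides[1:]))) if sides else 0

def has_shift(values, min_points=6):
    """Shift: at least `min_points` consecutive points on one side of the median."""
    sides = sides_of_median(values)
    run, best, last = 0, 0, None
    for s in sides:
        run = run + 1 if s == last else 1
        last = s
        best = max(best, run)
    return best >= min_points

def has_trend(values, min_points=5):
    """Trend: at least `min_points` consecutive points all increasing or all
    decreasing; numerically equal neighbors are skipped."""
    deduped = [values[0]] + [v for prev, v in zip(values, values[1:]) if v != prev]
    for direction in (1, -1):
        run = 1
        for prev, cur in zip(deduped, deduped[1:]):
            run = run + 1 if (cur - prev) * direction > 0 else 1
            if run >= min_points:
                return True
    return False

# Same invented series as the chart sketch above: 2 runs, a shift, and a trend.
data = [0.2, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.0]
print(count_runs(data), has_shift(data), has_trend(data))  # 2 True True
```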

Although straightforward for clinicians without statistical training to construct and interpret, the run chart has specific limitations. It is well suited to displaying early data but cannot, by itself, establish the durability of an observed change.9 In addition, a run chart is not appropriate for discrete data with no meaningful median.

The example run chart in Figure 2 shows a shift in data points from below the median to above the median, ultimately achieving 100% group adherence to the ADR target of greater than 25%. There are only two runs across the 12 data points of the 12-month study period, indicating a 5% or lower probability that this pattern is random.11 It appears that our interventions have produced incremental, nonrandom improvement in the ADR beyond the target level. Although the cumulative effect of these interventions has been successful, it is difficult to predict the durability of this change moving forward. In addition, it would be difficult to select a single intervention, of the many trialed, that by itself would sustain an ADR of 25% or greater.

Summary and next steps

This article selectively reviews the process of change framed by the PDSA cycle. We also discuss the role of data display and interpretation using a run chart. The final article in this series will cover how to sustain change and support a culture of continuous improvement.

References

1. Corley, D.A., Jensen, C.D., Marks, A.R., et al. Adenoma detection rate and risk of colorectal cancer and death. N Engl J Med. 2014;370:1298-306.

2. Cohen, J., Schoenfeld, P., Park, W., et al. Quality indicators for colonoscopy. Gastrointest Endosc. 2015;81:31-53.

3. Module 5: Improvement Cycle. (2013). Available at: http://implementation.fpg.unc.edu/book/export/html/326. Accessed Feb. 1, 2016.

4. Taylor, M.J., McNicholas, C., Nicolay, C., et al. Systematic review of the application of the plan-do-study-act method to improve quality in healthcare. BMJ Qual Saf. 2014;23(4):290-8.

5. Davidoff, F., Batalden, P., Stevens, D., et al. Publication guidelines for quality improvement in health care: evolution of the SQUIRE project. Qual Saf Health Care. 2008;17:i3-9.

6. Ogrinc, G., Mooney, S., Estrada, C., et al. The SQUIRE (standards for Quality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration. Qual Saf Health Care. 2008;17:i13-32.

7. Nelson, E.C., Batalden, P.B., Godfrey, M.M. Quality by design: a clinical microsystems approach. Jossey-Bass, San Francisco; 2007.

8. Coe, S.G., Crook, J.E., Diehl, N.N., Wallace, M.B. An endoscopic quality improvement program improves detection of colorectal adenomas. Am J Gastroenterol. 2013;108(2):219-26.

9. Perla, R.J., Provost, L.P., Murray, S.K. The run chart: a simple analytical tool for learning from variation in healthcare processes. BMJ Qual Saf. 2011;20:46-51.

10. Neuhauser, D., Provost, L., Bergman, B. The meaning of variation to healthcare managers, clinical and health-services researchers, and individual patients. BMJ Qual Saf. 2011;20:i36-40.

11. Swed, F.S., Eisenhart, C. Tables for testing randomness of grouping in a sequence of alternatives. Ann Math Statist. 1943;14:66-87.

Dr. Bollegala is in the division of gastroenterology, department of medicine, Women’s College Hospital; Dr. Mosko is in the division of gastroenterology, department of medicine, St. Michael’s Hospital, and the Institute of Health Policy, Management, and Evaluation; Dr. Bernstein is in the division of gastroenterology, department of medicine, Sunnybrook Health Sciences Centre; Dr. Brahmania is in the Toronto Center for Liver Diseases, division of gastroenterology, department of medicine, University Health Network; Dr. Liu is in the division of gastroenterology, department of medicine, University Health Network; Dr. Steinhart is at Mount Sinai Hospital Centre for Inflammatory Bowel Disease, department of medicine and Institute of Health Policy, Management, and Evaluation; Dr. Silver is in the division of nephrology, St. Michael’s Hospital; Dr. Bell is in the division of internal medicine, department of medicine, Mount Sinai Hospital; Dr. Nguyen is at Mount Sinai Hospital Centre for Inflammatory Bowel Disease, department of medicine; Dr. Weizman is at the Mount Sinai Hospital Centre for Inflammatory Bowel Disease, department of medicine, and Institute of Health Policy, Management and Evaluation. All are at the University of Toronto. Dr. Patel is in the division of gastroenterology and hepatology, department of medicine, Baylor College of Medicine, Houston. The authors disclose no conflicts.
