A Prescription for Note Bloat: An Effective Progress Note Template

Hilary Mosher, MD
Department of Internal Medicine, University of Iowa Carver College of Medicine

The widespread adoption of electronic health records (EHRs) has led to significant progress in the modernization of healthcare delivery. Ease of access has improved clinical efficiency, and digital data have allowed for point-of-care decision support tools ranging from predicting the 30-day risk of readmission to providing up-to-date guidelines for the care of various diseases.1,2 Documentation tools such as copy-forward and autopopulation increase the speed of documentation, and typed notes improve legibility and ease of note transmission.3,4

However, all of these benefits come with a potential for harm, particularly with respect to accurate and concise documentation. Many experts have described the perpetuation of false information leading to errors, copying-forward of inconsistent and outdated information, and the phenomenon of “note bloat” — physician notes that contain multiple pages of nonessential information, often leaving key aspects buried or lost.5-7 Providers seem to recognize the hazards of copy-and-paste functionality yet persist in utilizing it. In 1 survey, more than 70% of attendings and residents felt that copy and paste led to inaccurate and outdated information, yet 80% stated they would still use it.8

There is little evidence to guide institutions on ways to improve EHR documentation practices. Recent studies have shown that operative note templates improved documentation and decreased the number of missing components.9,10 In the nonoperative setting, 1 small pilot study of pediatric interns demonstrated that a bundled intervention composed of a note template and classroom teaching resulted in improvement in overall note quality and a decrease in “note clutter.”11 In a larger study of pediatric residents, a standardized and simplified note template resulted in a shorter note, although notes were completed later in the day.12 The present study seeks to build upon these efforts by investigating the effect of didactic teaching and an electronic progress note template on note quality, length, and timeliness across 4 academic internal medicine residency programs.

METHODS

Study Design

This prospective quality improvement study took place across 4 academic institutions: University of California Los Angeles (UCLA), University of California San Francisco (UCSF), University of California San Diego (UCSD), and University of Iowa, all of which use the Epic EHR (Epic Systems Corporation, Verona, WI). The intervention combined brief educational conferences directed at housestaff and attendings with the implementation of an electronic progress note template. Guided by resident input, a note-writing task force at UCSF and UCLA developed a set of best practice guidelines and an aligned progress note template (supplementary Appendix 1), which UCSD and the University of Iowa then adopted at their respective institutions. The template’s design minimized autopopulation while encouraging providers to enter relevant data via free text fields (eg, physical exam), prompts (eg, “I have reviewed all the labs from today. Pertinent labs include…”), and drop-down menus (eg, deep vein thrombosis [DVT] prophylaxis: enoxaparin, heparin subcutaneously, etc; supplementary Appendix 2). Additionally, an inpatient checklist at the end of the note served as a reminder of key inpatient concerns and quality measures, such as Foley catheter days, discharge planning, and code status. Housestaff attended lectures covering common problems with EHR documentation, the best practice guidelines, and a review of the note template, including instructions on how to access it; each institution tailored the lecture to its own culture. Housestaff were encouraged, but not required, to use the note template.

Selection and Grading of Progress Notes

Progress notes were eligible for the study if they were written by an intern on an internal medicine teaching service, for a patient hospitalized at least 3 days, and while the patient was on the general medicine wards, with one note selected from hospital day 2 or 3. The preintervention notes were authored from September 2013 to December 2013 and the postintervention notes from April 2014 to June 2014. One note was selected per patient, and no more than 3 notes were selected per intern. Each institution selected the first 50 notes chronologically that met these criteria in each of the preintervention and postintervention periods, for a total of 400 notes. The note-grading tool consisted of 3 sections: (1) a general impression of the note (below average, average, or above average); (2) the validated Physician Documentation Quality Instrument, 9-item version (PDQI-9), which rates notes on 9 domains (up to date, accurate, thorough, useful, organized, comprehensible, succinct, synthesized, internally consistent) on a Likert scale from 1 (not at all) to 5 (extremely); and (3) a note competency questionnaire, based on the Accreditation Council for Graduate Medical Education competency note checklist, that asked yes or no questions about best practice elements (eg, is there a relevant and focused physical exam?).12
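The selection rules above amount to a straightforward filter-and-cap procedure. The sketch below is illustrative only, with hypothetical field names; it is not the study's actual selection code:

```python
from collections import Counter

def select_notes(notes, per_site_cap=50, per_intern_cap=3):
    """Apply the study's eligibility rules: intern-authored, general
    medicine wards, hospital day 2 or 3, stay of at least 3 days; then
    take the first eligible notes chronologically, capped per site,
    with at most `per_intern_cap` notes per intern and one per patient.
    Field names (author_role, hospital_day, etc.) are hypothetical."""
    selected, per_intern, seen_patients = [], Counter(), set()
    for note in sorted(notes, key=lambda n: n["date"]):
        eligible = (
            note["author_role"] == "intern"
            and note["service"] == "general medicine"
            and note["hospital_day"] in (2, 3)
            and note["length_of_stay"] >= 3
        )
        if not eligible:
            continue
        if note["patient_id"] in seen_patients:
            continue  # one note per patient
        if per_intern[note["intern_id"]] >= per_intern_cap:
            continue  # no more than 3 notes per intern
        selected.append(note)
        seen_patients.add(note["patient_id"])
        per_intern[note["intern_id"]] += 1
        if len(selected) == per_site_cap:
            break  # first 50 qualifying notes chronologically
    return selected
```

Taking notes chronologically rather than at random is a pragmatic choice for a quality improvement study, though it can favor patients admitted early in each period.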


Graders were internal medicine teaching faculty involved in the study and were assigned to review notes from their respective sites directly in the EHR. Although this introduced potential for bias, many of the grading elements required the grader to know details of the patient that would be lost if the note were removed from the context of the EHR. Graders also documented note length (number of lines of text), the time the note was signed by the housestaff, and whether the template was used. Three graders independently evaluated each note and submitted ratings using Research Electronic Data Capture (REDCap).13

Statistical Analysis

Means for each item on the grading tool were computed across raters for each progress note and summarized by institution and by pre- and postintervention period. Cumulative logit mixed effects models were used to compare item responses between study conditions. The number of lines per note before and after the template intervention was compared using a mixed effects negative binomial regression model, and the timestamp on each note, representing the time of day the note was signed, was compared using a linear mixed effects model. All models included random note and rater effects and fixed institution and intervention period effects, as well as their interaction. Inter-rater reliability of the grading tool was assessed by calculating the intraclass correlation coefficient (ICC) from the estimated variance components. PDQI-9 data were analyzed by individual domain and by a sum score combining all domains; the sum score was used to generate odds ratios assessing whether postintervention notes that used the template were more likely to have higher PDQI-9 sum scores than those that did not. Both cumulative and site-specific data were analyzed. P values < .05 were considered statistically significant. All analyses were performed using SAS version 9.4 (SAS Institute Inc, Cary, NC).
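Concretely, an ICC computed from variance components is the note-level variance as a share of total variance. The sketch below is a minimal illustration; the variance values are hypothetical (chosen so the result lands near the 0.245 reported later for the general impression score), not the study's actual estimates:

```python
def icc(var_note, var_rater, var_residual):
    """Intraclass correlation coefficient from variance components:
    the share of total score variance attributable to true differences
    between notes, rather than rater leniency or residual noise."""
    return var_note / (var_note + var_rater + var_residual)

# Hypothetical variance components from a mixed effects fit
print(round(icc(0.12, 0.10, 0.27), 3))  # 0.245
```

A low ICC on this definition means most of the score variance reflects rater disagreement and noise rather than real differences between notes, which is why rater calibration matters for tools like this.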

RESULTS

A total of 200 preintervention and 199 postintervention notes were graded (1 note was erroneously selected twice, leading to 49 postintervention notes from that institution). Seventy percent of postintervention notes used the best practice note template.

The mean general impression score significantly improved from 2.0 to 2.3 (on a 1-3 scale in which 2 is average) after the intervention (P < .001). Additionally, note quality significantly improved across each domain of the PDQI-9 (P < .001 for all domains, Table 1). The ICC was 0.245 for the general impression score and 0.143 for the PDQI-9 sum score.

On the competency questionnaire, the most profound improvement was in documenting only “relevant lab values and studies and removal of older data rather than importing all information” (29% preintervention, 63% postintervention, P < .001; Table 2). Significant improvements were also seen in notes being “concise yet adequately complete,” and in documenting a “relevant and focused physical exam,” an “updated problem list,” and “mention of a discharge plan” (Table 2). Copying and pasting a note from another physician did not decrease significantly (P = .36).

Three of 4 institutions documented the number of lines per note and the time the note was signed by the intern. Mean number of lines per note decreased by 25% (361 lines preintervention, 265 lines postintervention, P < .001). Mean time signed was approximately 1 hour and 15 minutes earlier in the day (3:27 pm preintervention and 2:10 pm postintervention, P < .001).

Site-specific data revealed variation between sites. Template use was 92% at UCSF, 90% at UCLA, 79% at Iowa, and 21% at UCSD. The mean general impression score significantly improved at UCSF, UCLA, and UCSD, but not at Iowa. The PDQI-9 score improved across all domains at UCSF and UCLA, 2 domains at UCSD, and 0 domains at Iowa. Documentation of pertinent labs and studies significantly improved at UCSF, UCLA, and Iowa, but not UCSD. Note length decreased at UCSF and UCLA, but not at UCSD. Notes were signed earlier at UCLA and UCSD, but not at UCSF.

When comparing postintervention notes by template use, notes that used the template were significantly more likely to receive a higher mean impression score (odds ratio [OR] 11.95, P < .001) and a higher PDQI-9 sum score (OR 3.05, P < .001) than nontemplated notes from the same period; they were also approximately 25% shorter (239 vs 326 lines, P < .001) and completed approximately 1 hour and 20 minutes earlier (1:45 pm vs 3:07 pm, P < .001). Additionally, at each institution, templated notes were more likely than nontemplated notes to receive a higher PDQI-9 sum score (OR at UCSF 6.81, P < .05; OR at UCLA 17.95, P < .001; OR at UCSD 10.99, P < .001; OR at Iowa 4.01, P < .05).


DISCUSSION

A bundled intervention consisting of educational lectures and a best practice progress note template significantly improved the quality, decreased the length, and resulted in earlier completion of inpatient progress notes. These findings are consistent with a prior study that demonstrated that a bundled note template intervention improved total note score and reduced note clutter.11 We saw a broad improvement in progress notes across all 9 domains of the PDQI-9, which corresponded with an improved general impression score. We also found statistically significant improvements in 7 of the 13 categories of the competency questionnaire.

Arguably the greatest impact of the intervention was shortening the documentation of labs and studies. Autopopulation can lead to the appearance of a comprehensive note; however, key data are often lost in a sea of numbers and imaging reports.6,14 Using simple prompts followed by free text such as, “I have reviewed all the labs from today. Pertinent labs include…” reduced autopopulation and reminded housestaff to identify only the key information that affected patient care for that day, resulting in a more streamlined, clear, and high-yield note.

The time spent documenting care is an important consideration for physician workflow and for uptake of any note intervention.14-18 One study from 2016 revealed that internal medicine housestaff spend more than half of an average shift using the computer, with 52% of that time spent on documentation.17 Although functions such as autopopulation and copy-forward were created as efficiency tools, we hypothesize that they may actually prolong note writing time by leading to disorganized, distended notes that are difficult to use the following day. There was concern that limiting these “efficiency functions” might discourage housestaff from using the progress note template. It was encouraging to find that postintervention notes were signed 1.3 hours earlier in the day. This study did not measure the impact of shorter notes and earlier completion time, but in theory, this could allow interns to spend more time in direct patient care and to be at lower risk of duty hour violations.19 Furthermore, while the clinical impact of this is unknown, it is possible that timely note completion may improve patient care by making notes available earlier for consultants and other members of the care team.

We found that adding an “inpatient checklist” to the progress note template facilitated a review of key inpatient concerns and quality measures. Although we did not specifically compare before-and-after documentation of all of the components of the checklist, there appeared to be improvement in the domains measured. Notably, there was a 31% increase (P < .001) in the percentage of notes documenting the “discharge plan, goals of hospitalization, or estimated length of stay.” In the surgical literature, studies have demonstrated that incorporating checklists improves patient safety, the delivery of care, and potentially shortens the length of stay.20-22 Future studies should explore the impact of adding a checklist to the daily progress note, as there may be potential to improve both process and outcome measures.

Institution-specific data provided insightful results. UCSD encountered low template use among its interns yet still showed improvement in note quality, although not to the same degree as UCLA and UCSF. Identified barriers to uptake included the following: (1) interns were accustomed to importing labs and studies into their notes to use as their rounding report, and (2) the intervention took place late in the year, when interns had developed a functional writing system that they were reluctant to change. The University of Iowa did not show significant improvement in note quality despite relatively high template uptake. Both of these outcomes raise the possibility that factors beyond the template were at play. Because UCSF and UCLA created the best practice guidelines and template, these may have fit their cultures better and enjoyed more institutional buy-in; moreover, because the educational lectures were similar but not standardized across institutions, some lectures may have been more effective than others. However, among postintervention notes at UCSD and Iowa, templated notes were much more likely to score higher on the PDQI-9 than nontemplated notes, which serves as evidence of the efficacy of the note template.

Strengths of this study include the relatively large sample size spanning 4 institutions and the use of 3 different assessment tools for grading progress note quality (general impression score, PDQI-9, and competency note questionnaire). An additional strength is our unique finding suggesting that note writing may become more efficient by removing, rather than adding, “efficiency functions.” This study had several limitations. Pre- and postintervention notes were examined at different points in the same academic year, so certain domains may have improved as interns progressed in clinical skill and comfort with documentation, independent of our intervention.21 However, our analysis of postintervention notes across the same time period revealed that use of the template was strongly associated with higher quality, shorter notes, and earlier completion time, arguing that the effect seen was not merely intern experience. The poor inter-rater reliability is also a limitation; although the PDQI-9 was previously validated, future use of the grading tool may require more rater training for calibration or more objective wording.23 The study was not blinded, so bias may have falsely elevated postintervention scores; we attempted to minimize this by incorporating a more objective yes/no competency questionnaire and by having each note scored by 3 graders. Other studies have addressed this form of bias by printing out notes and blinding the graders; that design, however, isolates the note from all other data in the medical record, making it difficult to assess domains such as accuracy and completeness. Our inclusion of objective outcomes such as note length and time of note completion helps to mitigate some of the bias.

Future research can expand on these results by introducing similar progress note interventions at other institutions and/or in nonacademic environments to validate the results and expand generalizability. Longer term follow-up would help determine whether these effects are transient or lasting; similarly, it would be interesting to determine whether the results are sustained even after new interns start, which would suggest that institutional culture can be changed. Investigators could pursue similar projects to improve other note types at particularly high risk of propagating false information, such as the History and Physical or Discharge Summary. Future research should also focus on outcomes data, including whether a more efficient note allows housestaff to spend more time with patients, decreases patient length of stay, reduces clinical errors, and improves educational time for trainees. Lastly, we should determine whether interventions such as this one can mitigate the widespread frustrations with electronic documentation that are associated with physician and provider burnout.15,24 One would hope that the technology could be harnessed to improve provider productivity and be effectively integrated into comprehensive patient care.

Our research makes progress toward recommendations made by the American College of Physicians “to improve accuracy of information recorded and the value of information,” and develop automated tools that “enhance documentation quality without facilitating improper behaviors.”19 Institutions should consider developing internal best practices for clinical documentation and building structured note templates.19 Our research would suggest that, combined with a small educational intervention, such templates can make progress notes more accurate and succinct, make note writing more efficient, and be harnessed to improve quality metrics.


ACKNOWLEDGMENTS

The authors thank Michael Pfeffer, MD, and Sitaram Vangala, MS, for their contributions to and support of this research study and manuscript.

Disclosure: The authors declare no conflicts of interest.

REFERENCES

1. Herzig SJ, Guess JR, Feinbloom DB, et al. Improving appropriateness of acid-suppressive medication use via computerized clinical decision support. J Hosp Med. 2015;10(1):41-45.
2. Nguyen OK, Makam AN, Clark C, et al. Predicting all-cause readmissions using electronic health record data from the entire hospitalization: model development and comparison. J Hosp Med. 2016;11(7):473-480.
3. Donati A, Gabbanelli V, Pantanetti S, et al. The impact of a clinical information system in an intensive care unit. J Clin Monit Comput. 2008;22(1):31-36.
4. Schiff GD, Bates DW. Can electronic clinical documentation help prevent diagnostic errors? N Engl J Med. 2010;362(12):1066-1069.
5. Hartzband P, Groopman J. Off the record--avoiding the pitfalls of going electronic. N Engl J Med. 2008;358(16):1656-1658.
6. Hirschtick RE. A piece of my mind. Copy-and-paste. JAMA. 2006;295(20):2335-2336.
7. Hirschtick RE. A piece of my mind. John Lennon’s elbow. JAMA. 2012;308(5):463-464.
8. O’Donnell HC, Kaushal R, Barrón Y, Callahan MA, Adelman RD, Siegler EL. Physicians’ attitudes towards copy and pasting in electronic note writing. J Gen Intern Med. 2009;24(1):63-68.
9. Mahapatra P, Ieong E. Improving documentation and communication using operative note proformas. BMJ Qual Improv Rep. 2016;5(1):u209122.w3712.
10. Thomson DR, Baldwin MJ, Bellini MI, Silva MA. Improving the quality of operative notes for laparoscopic cholecystectomy: assessing the impact of a standardized operation note proforma. Int J Surg. 2016;27:17-20.
11. Dean SM, Eickhoff JC, Bakel LA. The effectiveness of a bundled intervention to improve resident progress notes in an electronic health record. J Hosp Med. 2015;10(2):104-107.
12. Aylor M, Campbell EM, Winter C, Phillipi CA. Resident notes in an electronic health record: a mixed-methods study using a standardized intervention with qualitative analysis. Clin Pediatr (Phila). 2017;56(3):257-262.
13. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381.
14. Chi J, Kugler J, Chu IM, et al. Medical students and the electronic health record: ‘an epic use of time’. Am J Med. 2014;127(9):891-895.
15. Martin SA, Sinsky CA. The map is not the territory: medical records and 21st century practice. Lancet. 2016;388(10055):2053-2056.
16. Oxentenko AS, Manohar CU, McCoy CP, et al. Internal medicine residents’ computer use in the inpatient setting. J Grad Med Educ. 2012;4(4):529-532.
17. Mamykina L, Vawdrey DK, Hripcsak G. How do residents spend their shift time? A time and motion study with a particular focus on the use of computers. Acad Med. 2016;91(6):827-832.
18. Chen L, Guo U, Illipparambil LC, et al. Racing against the clock: internal medicine residents’ time spent on electronic health records. J Grad Med Educ. 2016;8(1):39-44.
19. Kuhn T, Basch P, Barr M, Yackel T; Medical Informatics Committee of the American College of Physicians. Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med. 2015;162(4):301-303.
20. Treadwell JR, Lucas S, Tsou AY. Surgical checklists: a systematic review of impacts and implementation. BMJ Qual Saf. 2014;23(4):299-318.
21. Ko HC, Turner TJ, Finnigan MA. Systematic review of safety checklists for use by medical care teams in acute hospital settings--limited evidence of effectiveness. BMC Health Serv Res. 2011;11:211.
22. Diaz-Montes TP, Cobb L, Ibeanu OA, Njoku P, Gerardi MA. Introduction of checklists at daily progress notes improves patient care among the gynecological oncology service. J Patient Saf. 2012;8(4):189-193.
23. Stetson PD, Bakken S, Wrenn JO, Siegler EL. Assessing electronic note quality using the Physician Documentation Quality Instrument (PDQI-9). Appl Clin Inform. 2012;3(2):164-174.
24. Friedberg MW, Chen PG, Van Busum KR, et al. Factors affecting physician professional satisfaction and their implications for patient care, health systems, and health policy. Santa Monica, CA: RAND Corporation; 2013.

Journal of Hospital Medicine. 2018;13(6):378-382. Published online first January 19, 2018.


The time spent documenting care is an important consideration for physician workflow and for uptake of any note intervention.14-18 One study from 2016 revealed that internal medicine housestaff spend more than half of an average shift using the computer, with 52% of that time spent on documentation.17 Although functions such as autopopulation and copy-forward were created as efficiency tools, we hypothesize that they may actually prolong note writing time by leading to disorganized, distended notes that are difficult to use the following day. There was concern that limiting these “efficiency functions” might discourage housestaff from using the progress note template. It was encouraging to find that postintervention notes were signed 1.3 hours earlier in the day. This study did not measure the impact of shorter notes and earlier completion time, but in theory, this could allow interns to spend more time in direct patient care and to be at lower risk of duty hour violations.19 Furthermore, while the clinical impact of this is unknown, it is possible that timely note completion may improve patient care by making notes available earlier for consultants and other members of the care team.

We found that adding an “inpatient checklist” to the progress note template facilitated a review of key inpatient concerns and quality measures. Although we did not specifically compare before-and-after documentation of all of the components of the checklist, there appeared to be improvement in the domains measured. Notably, there was a 31% increase (P < .001) in the percentage of notes documenting the “discharge plan, goals of hospitalization, or estimated length of stay.” In the surgical literature, studies have demonstrated that incorporating checklists improves patient safety, the delivery of care, and potentially shortens the length of stay.20-22 Future studies should explore the impact of adding a checklist to the daily progress note, as there may be potential to improve both process and outcome measures.

Institution-specific data provided insightful results. UCSD encountered low template use among their interns; however, they still had evidence of improvement in note quality, though not at the same level of UCLA and UCSF. Some barriers to uptake identified were as follows: (1) interns were accustomed to import labs and studies into their note to use as their rounding report, and (2) the intervention took place late in the year when interns had developed a functional writing system that they were reluctant to change. The University of Iowa did not show significant improvement in their note quality despite a relatively high template uptake. Both of these outcomes raise the possibility that in addition to the template, there were other factors at play. Perhaps because UCSF and UCLA created the best practice guidelines and template, it was a better fit for their culture and they had more institutional buy-in. Or because the educational lectures were similar, but not standardized across institutions, some lectures may have been more effective than others. However, when evaluating the postintervention notes at UCSD and Iowa, templated notes were found to be much more likely to score higher on the PDQI-9 than nontemplated notes, which serves as evidence of the efficacy of the note template.

Some of the strengths of this study include the relatively large sample size spanning 4 institutions and the use of 3 different assessment tools for grading progress note quality (general impression score, PDQI-9, and competency note questionnaire). An additional strength is our unique finding suggesting that note writing may be more efficient by removing, rather than adding, “efficiency functions.” There were several limitations of this study. Pre- and postintervention notes were examined at different points in the same academic year, thus certain domains may have improved as interns progressed in clinical skill and comfort with documentation, independent of our intervention.21 However, our analysis of postintervention notes across the same time period revealed that use of the template was strongly associated with higher quality, shorter notes and earlier completion time arguing that the effect seen was not merely intern experience. The poor interrater reliability is also a limitation. Although the PDQI-9 was previously validated, future use of the grading tool may require more rater training for calibration or more objective wording.23 The study was not blinded, and thus, bias may have falsely elevated postintervention scores; however, we attempted to minimize bias by incorporating a more objective yes/no competency questionnaire and by having each note scored by 3 graders. Other studies have attempted to address this form of bias by printing out notes and blinding the graders. This design, however, isolates the note from all other data in the medical record, making it difficult to assess domains such as accuracy and completeness. Our inclusion of objective outcomes such as note length and time of note completion help to mitigate some of the bias.

Future research can expand on the results of this study by introducing similar progress note interventions at other institutions and/or in nonacademic environments to validate the results and expand generalizability. Longer term follow-up would be useful to determine if these effects are transient or long lasting. Similarly, it would be interesting to determine if such results are sustained even after new interns start suggesting that institutional culture can be changed. Investigators could focus on similar projects to improve other notes that are particularly at a high risk for propagating false information, such as the History and Physical or Discharge Summary. Future research should also focus on outcomes data, including whether a more efficient note can allow housestaff to spend more time with patients, decrease patient length of stay, reduce clinical errors, and improve educational time for trainees. Lastly, we should determine if interventions such as this can mitigate the widespread frustrations with electronic documentation that are associated with physician and provider burnout.15,24 One would hope that the technology could be harnessed to improve provider productivity and be effectively integrated into comprehensive patient care.

Our research makes progress toward recommendations made by the American College of Physicians “to improve accuracy of information recorded and the value of information,” and develop automated tools that “enhance documentation quality without facilitating improper behaviors.”19 Institutions should consider developing internal best practices for clinical documentation and building structured note templates.19 Our research would suggest that, combined with a small educational intervention, such templates can make progress notes more accurate and succinct, make note writing more efficient, and be harnessed to improve quality metrics.

 

 

ACKNOWLEDGMENTS

The authors thank Michael Pfeffer, MD, and Sitaram Vangala, MS, for their contributions to and support of this research study and manuscript.

Disclosure: The authors declare no conflicts of interest.

The widespread adoption of electronic health records (EHRs) has led to significant progress in the modernization of healthcare delivery. Ease of access has improved clinical efficiency, and digital data have allowed for point-of-care decision support tools ranging from predicting the 30-day risk of readmission to providing up-to-date guidelines for the care of various diseases.1,2 Documentation tools such as copy-forward and autopopulation increase the speed of documentation, and typed notes improve legibility and ease of note transmission.3,4

However, all of these benefits come with a potential for harm, particularly with respect to accurate and concise documentation. Many experts have described the perpetuation of false information leading to errors, the copying forward of inconsistent and outdated information, and the phenomenon of “note bloat”: physician notes that run multiple pages of nonessential information, often leaving key aspects buried or lost.5-7 Providers seem to recognize the hazards of copy-and-paste functionality yet persist in using it. In 1 survey, more than 70% of attendings and residents felt that copy and paste led to inaccurate and outdated information, yet 80% stated they would still use it.8

There is little evidence to guide institutions on ways to improve EHR documentation practices. Recent studies have shown that operative note templates improved documentation and decreased the number of missing components.9,10 In the nonoperative setting, 1 small pilot study of pediatric interns demonstrated that a bundled intervention composed of a note template and classroom teaching resulted in improvement in overall note quality and a decrease in “note clutter.”11 In a larger study of pediatric residents, a standardized and simplified note template resulted in a shorter note, although notes were completed later in the day.12 The present study seeks to build upon these efforts by investigating the effect of didactic teaching and an electronic progress note template on note quality, length, and timeliness across 4 academic internal medicine residency programs.

METHODS

Study Design

This prospective quality improvement study took place across 4 academic institutions: University of California Los Angeles (UCLA), University of California San Francisco (UCSF), University of California San Diego (UCSD), and University of Iowa, all of which use Epic EHR (Epic Corp., Madison, WI). The intervention combined brief educational conferences directed at housestaff and attendings with the implementation of an electronic progress note template. Guided by resident input, a note-writing task force at UCSF and UCLA developed a set of best practice guidelines and an aligned note template for progress notes (supplementary Appendix 1). UCSD and the University of Iowa adopted them at their respective institutions. The template’s design minimized autopopulation while encouraging providers to enter relevant data via free text fields (eg, physical exam), prompts (eg, “I have reviewed all the labs from today. Pertinent labs include…”), and drop-down menus (eg, deep vein thrombosis [DVT] prophylaxis: enoxaparin, heparin subcutaneously, etc; supplementary Appendix 2). Additionally, an inpatient checklist was included at the end of the note to serve as a reminder for key inpatient concerns and quality measures, such as Foley catheter days, discharge planning, and code status. Lectures that focused on issues with documentation in the EHR, the best practice guidelines, and a review of the note template with instructions on how to access it were presented to the housestaff. Each institution tailored the lecture to suit their culture. Housestaff were encouraged but not required to use the note template.

Selection and Grading of Progress Notes

Progress notes were eligible for the study if they were written by an intern on an internal medicine teaching service, from a patient with a hospitalization length of at least 3 days with a progress note selected from hospital day 2 or 3, and written while the patient was on the general medicine wards. The preintervention notes were authored from September 2013 to December 2013 and the postintervention notes from April 2014 to June 2014. One note was selected per patient and no more than 3 notes were selected per intern. Each institution selected the first 50 notes chronologically that met these criteria for both the preintervention and the postintervention periods, for a total of 400 notes. The note-grading tool consisted of the following 3 sections to analyze note quality: (1) a general impression of the note (eg, below average, average, above average); (2) the validated Physician Documentation Quality Instrument, 9-item version (PDQI-9) that evaluates notes on 9 domains (up to date, accurate, thorough, useful, organized, comprehensible, succinct, synthesized, internally consistent) on a Likert scale from 1 (not at all) to 5 (extremely); and (3) a note competency questionnaire based on the Accreditation Council for Graduate Medical Education competency note checklist that asked yes or no questions about best practice elements (eg, is there a relevant and focused physical exam).12
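The three-part grading tool described above can be summarized as a simple data structure. The sketch below is our own illustration, not part of the published instrument; the function name, example ratings, and competency item shown are hypothetical.

```python
# Sketch of the three-section note-grading tool described above.
# Field names and the example ratings are illustrative only.
PDQI9_DOMAINS = ["up to date", "accurate", "thorough", "useful",
                 "organized", "comprehensible", "succinct",
                 "synthesized", "internally consistent"]

def score_note(general_impression: int,    # 1-3, where 2 = average
               pdqi9: dict,                # domain -> 1-5 Likert rating
               competency: dict) -> dict:  # best-practice item -> yes/no
    assert general_impression in (1, 2, 3)
    assert set(pdqi9) == set(PDQI9_DOMAINS)
    assert all(1 <= v <= 5 for v in pdqi9.values())
    return {"impression": general_impression,
            "pdqi9_sum": sum(pdqi9.values()),  # ranges from 9 to 45
            "competency": competency}

# A hypothetical all-average note
example = score_note(2, {d: 3 for d in PDQI9_DOMAINS},
                     {"relevant and focused physical exam": True})
print(example["pdqi9_sum"])  # 27
```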


Graders were internal medicine teaching faculty involved in the study and were assigned to review notes from their respective sites by directly utilizing the EHR. Although this introduces potential for bias, it was felt that many of the grading elements required the grader to know details of the patient that would not be captured if the note was removed from the context of the EHR. Additionally, graders documented note length (number of lines of text), the time signed by the housestaff, and whether the template was used. Three different graders independently evaluated each note and submitted ratings by using Research Electronic Data Capture.13

Statistical Analysis

Means for each item on the grading tool were computed across raters for each progress note and summarized by institution and by pre- and postintervention period. Cumulative logit mixed effects models were used to compare item responses between study conditions. The number of lines per note before and after the note template intervention was compared by using a mixed effects negative binomial regression model. The timestamp on each note, representing the time of day the note was signed, was compared pre- and postintervention by using a linear mixed effects model. All models included random note and rater effects and fixed institution and intervention period effects, as well as their interaction. Inter-rater reliability of the grading tool was assessed by calculating the intraclass correlation coefficient (ICC) from the estimated variance components. Data from the PDQI-9 portion were analyzed both by individual component and by a sum score combining all components. The sum score was used to generate odds ratios assessing whether postintervention notes that used the template were more likely to have higher PDQI-9 sum scores than those that did not. Both cumulative and site-specific data were analyzed. P values < .05 were considered statistically significant. All analyses were performed using SAS version 9.4 (SAS Institute Inc, Cary, NC).
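The ICC calculation from variance components can be illustrated with a short sketch. The analyses were performed in SAS; the Python below is only a conceptual illustration, and the numeric variance components are hypothetical placeholders, not the study's estimates.

```python
# Intraclass correlation from estimated variance components, as used
# to assess inter-rater reliability of the grading tool. With random
# note and rater effects, the ICC is the share of total variance
# attributable to differences between notes.
# All numeric values below are hypothetical placeholders.

def icc_from_variance_components(var_note: float,
                                 var_rater: float,
                                 var_residual: float) -> float:
    """ICC = between-note variance / total variance."""
    return var_note / (var_note + var_rater + var_residual)

icc = icc_from_variance_components(var_note=0.20,
                                   var_rater=0.25,
                                   var_residual=0.55)
print(round(icc, 3))  # 0.2
```

A low ICC (as reported in the Results) means that most of the score variance reflects rater disagreement and noise rather than true differences between notes.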

RESULTS

A total of 200 preintervention and 199 postintervention notes were graded (1 note was erroneously selected twice, leading to 49 postintervention notes from that institution). Seventy percent of postintervention notes used the best practice note template.

The mean general impression score significantly improved from 2.0 to 2.3 (on a 1-3 scale in which 2 is average) after the intervention (P < .001). Additionally, note quality significantly improved across each domain of the PDQI-9 (P < .001 for all domains, Table 1). The ICC was 0.245 for the general impression score and 0.143 for the PDQI-9 sum score.

On the competency questionnaire, the greatest improvement was in documentation of only “relevant lab values and studies and removal of older data rather than importing all information” (29% preintervention, 63% postintervention, P < .001; Table 2). Significant improvements were also seen in notes being “concise yet adequately complete,” and in documenting a “relevant and focused physical exam,” an “updated problem list,” and “mention of a discharge plan” (Table 2). Copying and pasting a note from another physician did not decrease significantly (P = .36).

Three of 4 institutions documented the number of lines per note and the time the note was signed by the intern. Mean number of lines per note decreased by 25% (361 lines preintervention, 265 lines postintervention, P < .001). Mean time signed was approximately 1 hour and 15 minutes earlier in the day (3:27 pm preintervention and 2:10 pm postintervention, P < .001).
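The reported reductions can be checked with simple arithmetic on the figures quoted above:

```python
# Arithmetic check on the reported note-length and timing changes,
# using the values stated in the text.
pre_lines, post_lines = 361, 265
pct_reduction = (pre_lines - post_lines) / pre_lines * 100
print(f"{pct_reduction:.0f}%")  # 27%, reported as roughly 25%

from datetime import datetime
pre_signed = datetime.strptime("15:27", "%H:%M")   # 3:27 pm
post_signed = datetime.strptime("14:10", "%H:%M")  # 2:10 pm
delta = pre_signed - post_signed
print(delta)  # 1:17:00, i.e., about 1 hour and 15 minutes earlier
```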

Site-specific data revealed variation between sites. Template use was 92% at UCSF, 90% at UCLA, 79% at Iowa, and 21% at UCSD. The mean general impression score significantly improved at UCSF, UCLA, and UCSD, but not at Iowa. The PDQI-9 score improved across all domains at UCSF and UCLA, 2 domains at UCSD, and 0 domains at Iowa. Documentation of pertinent labs and studies significantly improved at UCSF, UCLA, and Iowa, but not UCSD. Note length decreased at UCSF and UCLA, but not at UCSD. Notes were signed earlier at UCLA and UCSD, but not at UCSF.

When comparing postintervention notes based on template use, notes that used the template were significantly more likely to receive a higher mean impression score (odds ratio [OR] 11.95, P < .001), higher PDQI-9 sum score (OR 3.05, P < .001), be approximately 25% shorter (326 lines vs 239 lines, P < .001), and be completed approximately 1 hour and 20 minutes earlier (3:07 pm vs 1:45 pm, P < .001) than nontemplated notes from that same period. Additionally, at each institution, templated notes were more likely than nontemplated notes to receive a higher PDQI-9 sum score (OR at UCSF 6.81, P < .05; OR at UCLA 17.95, P < .001; OR at UCSD 10.99, P < .001; OR at Iowa 4.01, P < .05).


DISCUSSION

A bundled intervention consisting of educational lectures and a best practice progress note template significantly improved the quality, decreased the length, and resulted in earlier completion of inpatient progress notes. These findings are consistent with a prior study that demonstrated that a bundled note template intervention improved total note score and reduced note clutter.11 We saw a broad improvement in progress notes across all 9 domains of the PDQI-9, which corresponded with an improved general impression score. We also found statistically significant improvements in 7 of the 13 categories of the competency questionnaire.

Arguably the greatest impact of the intervention was shortening the documentation of labs and studies. Autopopulation can lead to the appearance of a comprehensive note; however, key data are often lost in a sea of numbers and imaging reports.6,14 Using simple prompts followed by free text such as, “I have reviewed all the labs from today. Pertinent labs include…” reduced autopopulation and reminded housestaff to identify only the key information that affected patient care for that day, resulting in a more streamlined, clear, and high-yield note.

The time spent documenting care is an important consideration for physician workflow and for uptake of any note intervention.14-18 One study from 2016 revealed that internal medicine housestaff spend more than half of an average shift using the computer, with 52% of that time spent on documentation.17 Although functions such as autopopulation and copy-forward were created as efficiency tools, we hypothesize that they may actually prolong note writing time by leading to disorganized, distended notes that are difficult to use the following day. There was concern that limiting these “efficiency functions” might discourage housestaff from using the progress note template. It was encouraging to find that postintervention notes were signed 1.3 hours earlier in the day. This study did not measure the impact of shorter notes and earlier completion time, but in theory, this could allow interns to spend more time in direct patient care and to be at lower risk of duty hour violations.19 Furthermore, while the clinical impact of this is unknown, it is possible that timely note completion may improve patient care by making notes available earlier for consultants and other members of the care team.

We found that adding an “inpatient checklist” to the progress note template facilitated a review of key inpatient concerns and quality measures. Although we did not specifically compare before-and-after documentation of all of the components of the checklist, there appeared to be improvement in the domains measured. Notably, there was a 31% increase (P < .001) in the percentage of notes documenting the “discharge plan, goals of hospitalization, or estimated length of stay.” In the surgical literature, studies have demonstrated that incorporating checklists improves patient safety, the delivery of care, and potentially shortens the length of stay.20-22 Future studies should explore the impact of adding a checklist to the daily progress note, as there may be potential to improve both process and outcome measures.

Institution-specific data provided insightful results. UCSD had low template use among its interns yet still showed improvement in note quality, though not at the same level as UCLA and UCSF. Identified barriers to uptake included the following: (1) interns were accustomed to importing labs and studies into their notes to use as their rounding report, and (2) the intervention took place late in the year, when interns had developed a functional writing system that they were reluctant to change. The University of Iowa did not show significant improvement in note quality despite relatively high template uptake. Both of these outcomes raise the possibility that factors beyond the template were at play. Perhaps because UCSF and UCLA created the best practice guidelines and template, these were a better fit for their culture and enjoyed more institutional buy-in. Alternatively, because the educational lectures were similar but not standardized across institutions, some lectures may have been more effective than others. However, when evaluating the postintervention notes at UCSD and Iowa, templated notes were much more likely to score higher on the PDQI-9 than nontemplated notes, which serves as evidence of the efficacy of the note template.

Some of the strengths of this study include the relatively large sample size spanning 4 institutions and the use of 3 different assessment tools for grading progress note quality (general impression score, PDQI-9, and competency note questionnaire). An additional strength is our unique finding suggesting that note writing may be made more efficient by removing, rather than adding, “efficiency functions.” There were several limitations of this study. Pre- and postintervention notes were examined at different points in the same academic year; thus, certain domains may have improved as interns progressed in clinical skill and comfort with documentation, independent of our intervention.21 However, our analysis of postintervention notes across the same time period revealed that use of the template was strongly associated with higher-quality, shorter notes and earlier completion times, arguing that the effect seen was not merely intern experience. The poor inter-rater reliability is also a limitation. Although the PDQI-9 was previously validated, future use of the grading tool may require more rater training for calibration or more objective wording.23 The study was not blinded, and thus bias may have falsely elevated postintervention scores; however, we attempted to minimize bias by incorporating a more objective yes/no competency questionnaire and by having each note scored by 3 graders. Other studies have attempted to address this form of bias by printing out notes and blinding the graders. That design, however, isolates the note from all other data in the medical record, making it difficult to assess domains such as accuracy and completeness. Our inclusion of objective outcomes such as note length and time of note completion helps to mitigate some of this bias.

Future research can expand on the results of this study by introducing similar progress note interventions at other institutions and/or in nonacademic environments to validate the results and expand generalizability. Longer-term follow-up would be useful to determine whether these effects are transient or lasting. Similarly, it would be interesting to determine whether such results are sustained even after new interns start, which would suggest that institutional culture can be changed. Investigators could focus on similar projects to improve other notes that are at particularly high risk of propagating false information, such as the History and Physical or Discharge Summary. Future research should also focus on outcomes data, including whether a more efficient note can allow housestaff to spend more time with patients, decrease patient length of stay, reduce clinical errors, and improve educational time for trainees. Lastly, we should determine whether interventions such as this can mitigate the widespread frustrations with electronic documentation that are associated with physician and provider burnout.15,24 One would hope that the technology could be harnessed to improve provider productivity and be effectively integrated into comprehensive patient care.

Our research makes progress toward recommendations made by the American College of Physicians “to improve accuracy of information recorded and the value of information,” and develop automated tools that “enhance documentation quality without facilitating improper behaviors.”19 Institutions should consider developing internal best practices for clinical documentation and building structured note templates.19 Our research would suggest that, combined with a small educational intervention, such templates can make progress notes more accurate and succinct, make note writing more efficient, and be harnessed to improve quality metrics.


ACKNOWLEDGMENTS

The authors thank Michael Pfeffer, MD, and Sitaram Vangala, MS, for their contributions to and support of this research study and manuscript.

Disclosure: The authors declare no conflicts of interest.

References

1. Herzig SJ, Guess JR, Feinbloom DB, et al. Improving appropriateness of acid-suppressive medication use via computerized clinical decision support. J Hosp Med. 2015;10(1):41-45. PubMed
2. Nguyen OK, Makam AN, Clark C, et al. Predicting all-cause readmissions using electronic health record data from the entire hospitalization: Model development and comparison. J Hosp Med. 2016;11(7):473-480. PubMed
3. Donati A, Gabbanelli V, Pantanetti S, et al. The impact of a clinical information system in an intensive care unit. J Clin Monit Comput. 2008;22(1):31-36. PubMed
4. Schiff GD, Bates DW. Can electronic clinical documentation help prevent diagnostic errors? N Engl J Med. 2010;362(12):1066-1069. PubMed
5. Hartzband P, Groopman J. Off the record--avoiding the pitfalls of going electronic. N Engl J Med. 2008;358(16):1656-1658. PubMed
6. Hirschtick RE. A piece of my mind. Copy-and-paste. JAMA. 2006;295(20):2335-2336. PubMed
7. Hirschtick RE. A piece of my mind. John Lennon’s elbow. JAMA. 2012;308(5):463-464. PubMed
8. O’Donnell HC, Kaushal R, Barrón Y, Callahan MA, Adelman RD, Siegler EL. Physicians’ attitudes towards copy and pasting in electronic note writing. J Gen Intern Med. 2009;24(1):63-68. PubMed
9. Mahapatra P, Ieong E. Improving Documentation and Communication Using Operative Note Proformas. BMJ Qual Improv Rep. 2016;5(1):u209122.w3712. PubMed
10. Thomson DR, Baldwin MJ, Bellini MI, Silva MA. Improving the quality of operative notes for laparoscopic cholecystectomy: Assessing the impact of a standardized operation note proforma. Int J Surg. 2016;27:17-20. PubMed
11. Dean SM, Eickhoff JC, Bakel LA. The effectiveness of a bundled intervention to improve resident progress notes in an electronic health record. J Hosp Med. 2015;10(2):104-107. PubMed
12. Aylor M, Campbell EM, Winter C, Phillipi CA. Resident Notes in an Electronic Health Record: A Mixed-Methods Study Using a Standardized Intervention With Qualitative Analysis. Clin Pediatr (Phila). 2016;6(3):257-262. 
13. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381. PubMed
14. Chi J, Kugler J, Chu IM, et al. Medical students and the electronic health record: ‘an epic use of time’. Am J Med. 2014;127(9):891-895. PubMed
15. Martin SA, Sinsky CA. The map is not the territory: medical records and 21st century practice. Lancet. 2016;388(10055):2053-2056. PubMed
16. Oxentenko AS, Manohar CU, McCoy CP, et al. Internal medicine residents’ computer use in the inpatient setting. J Grad Med Educ. 2012;4(4):529-532. PubMed
17. Mamykina L, Vawdrey DK, Hripcsak G. How Do Residents Spend Their Shift Time? A Time and Motion Study With a Particular Focus on the Use of Computers. Acad Med. 2016;91(6):827-832. PubMed
18. Chen L, Guo U, Illipparambil LC, et al. Racing Against the Clock: Internal Medicine Residents’ Time Spent On Electronic Health Records. J Grad Med Educ. 2016;8(1):39-44. PubMed
19. Kuhn T, Basch P, Barr M, Yackel T; Medical Informatics Committee of the American College of Physicians. Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med. 2015;162(4):301-303. PubMed
20. Treadwell JR, Lucas S, Tsou AY. Surgical checklists: a systematic review of impacts and implementation. BMJ Qual Saf. 2014;23(4):299-318. PubMed
21. Ko HC, Turner TJ, Finnigan MA. Systematic review of safety checklists for use by medical care teams in acute hospital settings--limited evidence of effectiveness. BMC Health Serv Res. 2011;11:211. PubMed
22. Diaz-Montes TP, Cobb L, Ibeanu OA, Njoku P, Gerardi MA. Introduction of checklists at daily progress notes improves patient care among the gynecological oncology service. J Patient Saf. 2012;8(4):189-193. PubMed
23. Stetson PD, Bakken S, Wrenn JO, Siegler EL. Assessing Electronic Note Quality Using the Physician Documentation Quality Instrument (PDQI-9). Appl Clin Inform. 2012;3(2):164-174. PubMed
24. Friedberg MW, Chen PG, Van Busum KR, et al. Factors affecting physician professional satisfaction and their implications for patient care, health systems, and health policy. Santa Monica, CA: RAND Corporation; 2013. PubMed

References

1. Herzig SJ, Guess JR, Feinbloom DB, et al. Improving appropriateness of acid-suppressive medication use via computerized clinical decision support. J Hosp Med. 2015;10(1):41-45. PubMed
2. Nguyen OK, Makam AN, Clark C, et al. Predicting all-cause readmissions using electronic health record data from the entire hospitalization: Model development and comparison. J Hosp Med. 2016;11(7):473-480. PubMed
3. Donati A, Gabbanelli V, Pantanetti S, et al. The impact of a clinical information system in an intensive care unit. J Clin Monit Comput. 2008;22(1):31-36. PubMed
4. Schiff GD, Bates DW. Can electronic clinical documentation help prevent diagnostic errors? N Engl J Med. 2010;362(12):1066-1069. PubMed
5. Hartzband P, Groopman J. Off the record--avoiding the pitfalls of going electronic. N Engl J Med. 2008;358(16):1656-1658. PubMed
6. Hirschtick RE. A piece of my mind. Copy-and-paste. JAMA. 2006;295(20):2335-2336. PubMed
7. Hirschtick RE. A piece of my mind. John Lennon’s elbow. JAMA. 2012;308(5):463-464. PubMed
8. O’Donnell HC, Kaushal R, Barrón Y, Callahan MA, Adelman RD, Siegler EL. Physicians’ attitudes towards copy and pasting in electronic note writing. J Gen Intern Med. 2009;24(1):63-68. PubMed
9. Mahapatra P, Ieong E. Improving Documentation and Communication Using Operative Note Proformas. BMJ Qual Improv Rep. 2016;5(1):u209122.w3712. PubMed
10. Thomson DR, Baldwin MJ, Bellini MI, Silva MA. Improving the quality of operative notes for laparoscopic cholecystectomy: Assessing the impact of a standardized operation note proforma. Int J Surg. 2016;27:17-20. PubMed
11. Dean SM, Eickhoff JC, Bakel LA. The effectiveness of a bundled intervention to improve resident progress notes in an electronic health record. J Hosp Med. 2015;10(2):104-107. PubMed
12. Aylor M, Campbell EM, Winter C, Phillipi CA. Resident Notes in an Electronic Health Record: A Mixed-Methods Study Using a Standardized Intervention With Qualitative Analysis. Clin Pediatr (Phila). 2016;6(3):257-262. 
13. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381. PubMed
14. Chi J, Kugler J, Chu IM, et al. Medical students and the electronic health record: ‘an epic use of time’. Am J Med. 2014;127(9):891-895. PubMed
15. Martin SA, Sinsky CA. The map is not the territory: medical records and 21st century practice. Lancet. 2016;388(10055):2053-2056. PubMed
16. Oxentenko AS, Manohar CU, McCoy CP, et al. Internal medicine residents’ computer use in the inpatient setting. J Grad Med Educ. 2012;4(4):529-532. PubMed
17. Mamykina L, Vawdrey DK, Hripcsak G. How Do Residents Spend Their Shift Time? A Time and Motion Study With a Particular Focus on the Use of Computers. Acad Med. 2016;91(6):827-832. PubMed
18. Chen L, Guo U, Illipparambil LC, et al. Racing Against the Clock: Internal Medicine Residents’ Time Spent On Electronic Health Records. J Grad Med Educ. 2016;8(1):39-44. PubMed
19. Kuhn T, Basch P, Barr M, Yackel T; Medical Informatics Committee of the American College of Physicians. Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med. 2015;162(4):301-303. PubMed
20. Treadwell JR, Lucas S, Tsou AY. Surgical checklists: a systematic review of impacts and implementation. BMJ Qual Saf. 2014;23(4):299-318. PubMed
21. Ko HC, Turner TJ, Finnigan MA. Systematic review of safety checklists for use by medical care teams in acute hospital settings--limited evidence of effectiveness. BMC Health Serv Res. 2011;11:211. PubMed
22. Diaz-Montes TP, Cobb L, Ibeanu OA, Njoku P, Gerardi MA. Introduction of checklists at daily progress notes improves patient care among the gynecological oncology service. J Patient Saf. 2012;8(4):189-193. PubMed
23. Stetson PD, Bakken S, Wrenn JO, Siegler EL. Assessing Electronic Note Quality Using the Physician Documentation Quality Instrument (PDQI-9). Appl Clin Inform. 2012;3(2):164-174. PubMed
24. Friedberg MW, Chen PG, Van Busum KR, et al. Factors affecting physician professional satisfaction and their implications for patient care, health systems, and health policy. Santa Monica, CA: RAND Corporation; 2013. PubMed

Issue
Journal of Hospital Medicine 13(6)
Page Number
378-382. Published online first January 19, 2018

© 2018 Society of Hospital Medicine

Correspondence Location
Daniel Kahn, MD, Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, 757 Westwood Plaza #7501, Los Angeles, CA 90095; Telephone: 310-267-9643; Fax: 310-267-3840; E-mail: DaKahn@mednet.ucla.edu

The Evaluation of Medical Inpatients Who Are Admitted on Long-term Opioid Therapy for Chronic Pain

Hospitalists face complex questions about how to evaluate and treat the large number of individuals who are admitted on long-term opioid therapy (LTOT, defined as lasting 3 months or longer) for chronic noncancer pain. A recent study at one Veterans Affairs hospital found that 26% of medical inpatients were on LTOT.1 Over the last 2 decades, use of LTOT has risen substantially in the United States, including among middle-aged and older adults.2 Concurrently, inpatient hospitalizations related to the overuse of prescription opioids, including overdose, dependence, abuse, and adverse drug events, have increased by 153%.3 Individuals on LTOT can also be hospitalized for exacerbations of the opioid-treated chronic pain condition or unrelated conditions. In addition to affecting rates of hospitalization, use of LTOT is associated with higher rates of in-hospital adverse events, longer hospital stays, and higher readmission rates.1,4,5

Physicians find managing chronic pain to be stressful, are often concerned about misuse and addiction, and believe their training in opioid prescribing is inadequate.6 Hospitalists report confidence in assessing and prescribing opioids for acute pain but limited success and satisfaction with treating exacerbations of chronic pain.7 Although half of all hospitalized patients receive opioids,5 little information is available to guide the care of hospitalized medical patients on LTOT for chronic noncancer pain.8,9

Our multispecialty team sought to synthesize guideline recommendations and primary literature relevant to the assessment of medical inpatients on LTOT to assist practitioners in balancing effective pain treatment and opioid risk reduction. This article addresses obtaining a comprehensive pain history, identifying misuse and opioid use disorders, assessing the risk of overdose and adverse drug events, gauging the risk of withdrawal, and, based on such findings, appraising indications for opioid therapy. Other authors have recently published narrative reviews on the management of acute pain in hospitalized patients with opioid dependence and the inpatient management of opioid use disorder.10,11

METHODS

To identify primary literature, we searched PubMed, EMBASE, The Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, Health Economic Evaluations Database, key meeting abstracts, and hand searches. To identify guidelines, we searched PubMed, National Guidelines Clearinghouse, specialty societies’ websites, the Centers for Disease Control and Prevention (CDC), the United Kingdom National Institute for Health and Care Excellence, the Canadian Medical Association, and the Australian Government National Health and Medical Research Council. Search terms related to opioids and chronic pain; the search was last updated in October 2016.12

We selected English-language documents on opioids and chronic pain among adults, excluding pain in the setting of procedures, labor and delivery, life-limiting illness, or specific conditions. For primary literature, we considered intervention studies of any design that addressed pain management among hospitalized medical patients. We included guidelines and specialty society position statements published after January 1, 2009, that addressed pain in the hospital setting, acute pain in any setting, or chronic pain in the outpatient setting if published by a national body. Due to the paucity of documents specific to inpatient care, we used a narrative review format to synthesize information. Dual reviewers extracted guideline recommendations potentially relevant to medical inpatients on LTOT. We also summarize relevant assessment instruments, emphasizing very brief screening tools that busy hospitalists may be more likely to use.

RESULTS

We did not find any primary literature specific to the assessment of pain among medical inpatients on LTOT. We identified 14 eligible guidelines and position statements (see Table 1). Three documents address pain in the hospital setting, including an “implementation guide” from the Society for Hospital Medicine.13-15 Three documents address acute pain,9,16,17 and 8 documents address LTOT for chronic noncancer pain.18-25 Table 2 lists guideline recommendations potentially relevant to inpatients on LTOT.

DISCUSSION

We grouped guideline recommendations into the following 3 categories applicable to inpatient assessment of patients on LTOT: obtaining a comprehensive pain history, identifying misuse and opioid use disorders, and assessing the risk of overdose and adverse drug events. Although we did not find recommendations that specifically spoke to assessment for opioid withdrawal and appraising indications for opioid therapy, we briefly discuss these areas as highly relevant to inpatient practice.

Obtaining a Comprehensive Pain History

Hospitalists newly evaluating patients on LTOT often face a dual challenge: deciding if the patient has an immediate indication for additional opioids and if the current long-term opioid regimen should be altered or discontinued. In general, opioids are an accepted short-term treatment for moderate to severe acute pain but their role in chronic noncancer pain is controversial. Newly released guidelines by the CDC recommend initiating LTOT as a last resort, and the Departments of Veterans Affairs and Defense guidelines recommend against initiation of LTOT.22,23

A key first step, therefore, is distinguishing between acute and chronic pain. Among patients on LTOT, pain can represent a new acute pain condition, an exacerbation of chronic pain, opioid-induced hyperalgesia, or opioid withdrawal. Acute pain is defined as an unpleasant sensory and emotional experience associated with actual or potential tissue damage or described in relation to such damage.26 In contrast, chronic pain is a complex response that may not be related to actual or ongoing tissue damage and is influenced by physiological, contextual, and psychological factors. Two acute pain guidelines and 1 chronic pain guideline recommend distinguishing acute from chronic pain,9,16,21 3 chronic pain guidelines reinforce the importance of obtaining a pain history (including timing, intensity, frequency, and onset),20,22,23 and 6 guidelines recommend ascertaining a history of prior pain-related treatments.9,13,14,16,20,22 Inquiring how the current pain compares with symptoms “on a good day,” what activities the patient can usually perform, and what the patient does outside the hospital to cope with pain can serve as entry into this conversation.

The standard for assessing pain intensity remains patient self-report using a validated instrument, such as the Numerical Rating Scale (Table 3).23,24,27 Among patients with chronic pain, clinically meaningful differences in pain intensity correspond to 1- to 2-point changes on these scales.27,28 Pain scores should not be the only factor used to determine when opioids are indicated because other factors are relevant and scores may not correlate with patients’ preference to receive opioid therapy.29 Along with pain intensity, 3 guidelines for hospital settings/acute pain and 4 chronic pain guidelines recommend assessing functional status.9,13,16,18,20-22 The CDC guideline endorses the 3-item “Pain average, interference with Enjoyment of life, and interference with General activity” (PEG) assessment scale22,30 (Table 3). The instrument would need to be adapted for the hospital setting, but improvement in function, such as mobility, is a good indicator of clinical improvement among inpatients as well.
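The arithmetic behind the PEG score is simply the mean of its three 0-to-10 items. As an illustration only (the function name and input validation below are ours, not part of the published instrument), a minimal sketch:

```python
def peg_score(pain_avg: float, enjoyment: float, general_activity: float) -> float:
    """Compute a PEG score as the mean of three 0-10 ratings:
    pain average, interference with enjoyment of life, and
    interference with general activity. Illustrative sketch only;
    consult the published instrument before clinical use."""
    items = (pain_avg, enjoyment, general_activity)
    if not all(0 <= x <= 10 for x in items):
        raise ValueError("each PEG item is rated 0-10")
    return sum(items) / 3

# Example: ratings of 6, 4, and 5 yield a PEG score of 5.0
score = peg_score(6, 4, 5)
```

Because clinically meaningful change on such scales is on the order of 1 to 2 points, serial scores during a hospitalization are more informative than a single value.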

In addition to function, 5 guidelines, including 2 specific guidelines for acute pain or the hospital setting, recommend obtaining a detailed psychosocial history to identify life stressors and gain insight into the patient’s coping skills.14,16,19,20,22 Psychiatric symptoms can intensify the experience of pain or hamper coping ability. Anxiety, depression, and insomnia frequently coexist in patients with chronic pain.31 As such, 3 hospital setting/acute pain guidelines and 3 chronic pain guidelines recommend screening for mental health issues including anxiety and depression.13,14,16,20,22,23 Several depression screening instruments have been validated among inpatients,32 and there are validated single-item, self-administered instruments for both depression and anxiety (Table 3).32,33

Although obtaining a comprehensive history before making treatment decisions is ideal, some patients present in extremis. In emergency departments, some guidelines endorse prompt administration of analgesics based on patient self-report, prior to establishing a diagnosis.17 Given concerns about the growing prevalence of opioid use disorders, several states now recommend that emergency medicine prescribers screen for misuse before giving opioids and avoid parenteral opioids for acute exacerbations of chronic pain.34 Treatments received in emergency departments set patients’ expectations for the care they receive during hospitalization, and hospitalists may find it necessary to explain that therapies appropriate for urgent management are not intended to be sustained.

Identifying Misuse and Opioid Use Disorders

Nonmedical use of prescription opioids and opioid use disorders have more than doubled over the last decade.35 Five guidelines, including 3 specific guidelines for acute pain or the hospital setting, recommend screening for opioid misuse.13,14,16,19,23 Many states mandate practitioners assess patients for substance use disorders before prescribing controlled substances.36 Instruments to identify aberrant and risky use include the Current Opioid Misuse Measure,37 Prescription Drug Use Questionnaire,38 Addiction Behaviors Checklist,39 Screening Tool for Abuse,40 and the Self-Administered Single-Item Screening Question (Table 3).41 However, the evidence for these and other tools is limited and absent for the inpatient setting.21,42

In addition to obtaining a history from the patient, 4 guidelines specific to hospital settings/acute pain and 4 chronic pain guidelines recommend practitioners access prescription drug monitoring programs (PDMPs).13-16,19,21-24 PDMPs exist in all states except Missouri, and about half of states mandate practitioners check the PDMP database in certain circumstances.36 Studies examining the effects of PDMPs on prescribing are limited, but checking these databases can uncover concerning patterns including overlapping prescriptions or multiple prescribers.43 PDMPs can also confirm reported medication doses, for which patient report may be less reliable.

Two hospital/acute pain guidelines and 5 chronic pain guidelines also recommend urine drug testing, although they differ on when and whom to test, with some favoring universal screening.11,20,23 Screening hospitalized patients may reveal substances not reported by patients, but medications administered in emergency departments can confound results. Furthermore, the commonly used immunoassay does not distinguish heroin from prescription opioids, nor does it detect hydrocodone, oxycodone, methadone, buprenorphine, or certain benzodiazepines. Chromatography/mass spectrometry assays can detect these agents but are often not available from hospital laboratories. The differential for unexpected results includes substance use, self-treatment of uncontrolled pain, diversion, and laboratory error.20

If concerning opioid use is identified, 3 hospital setting/acute pain specific guidelines and the CDC guideline recommend sharing concerns with patients and assessing for a substance use disorder.9,13,16,22 Determining whether patients have an opioid use disorder that meets the criteria in the Diagnostic and Statistical Manual, 5th Edition44 can be challenging. Patients may minimize or deny symptoms or fear that the stigma of an opioid use disorder will lead to dismissive or subpar care. Additionally, substance use disorders are subject to federal confidentiality regulations, which can hamper acquisition of information from providers.45 Thus, hospitalists may find specialty consultation helpful to confirm the diagnosis.

Assessing the Risk of Overdose and Adverse Drug Events

Oversedation, respiratory depression, and death can result from iatrogenic or self-administered opioid overdose in the hospital.5 Patient factors that increase this risk among outpatients include a prior history of overdose, preexisting substance use disorders, cognitive impairment, mood and personality disorders, chronic kidney disease, sleep apnea, obstructive lung disease, and recent abstinence from opioids.12 Medication factors include concomitant use of benzodiazepines and other central nervous system depressants, including alcohol; recent initiation of long-acting opioids; use of fentanyl patches, immediate-release fentanyl, or methadone; rapid titration; switching opioids without adequate dose reduction; pharmacokinetic drug–drug interactions; and, importantly, higher doses.12,22 Two guidelines specific to acute pain and hospital settings and 5 chronic pain guidelines recommend screening for use of benzodiazepines among patients on LTOT.13,14,16,18-22

The CDC guideline recommends careful assessment when doses exceed 50 mg of morphine equivalents per day and avoiding doses above 90 mg per day due to the heightened risk of overdose.22 In the hospital, 23% of patients receive doses at or above 100 mg of morphine equivalents per day,5 and concurrent use of central nervous system depressants is common. Changes in kidney and liver function during acute illness may impact opioid metabolism and contribute to overdose.
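The dose thresholds above rest on converting each opioid to morphine milligram equivalents (MME). A minimal sketch of that conversion follows; the factors shown are commonly cited CDC conversion factors for oral opioids, but this is illustrative only, not a clinical dosing tool, and it deliberately omits methadone’s dose-dependent factors, transdermal fentanyl, and renal or hepatic adjustment:

```python
# Commonly cited CDC MME conversion factors for oral opioids (per mg).
# Illustrative sketch only; verify factors against the current CDC
# reference before any clinical use.
MME_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "oxymorphone": 3.0,
    "hydromorphone": 4.0,
    "codeine": 0.15,
}

def daily_mme(regimen):
    """Total MME/day for a regimen of (drug, mg_per_dose, doses_per_day)."""
    return sum(MME_FACTORS[drug] * mg * n for drug, mg, n in regimen)

def overdose_risk_flag(mme):
    """Map a daily MME total onto the CDC guideline's caution thresholds."""
    if mme >= 90:
        return "avoid: >=90 MME/day"
    if mme >= 50:
        return "careful assessment: >=50 MME/day"
    return "below CDC caution thresholds"

# Example: oxycodone 20 mg three times daily totals 90 MME/day,
# crossing the guideline's upper threshold.
total = daily_mme([("oxycodone", 20, 3)])
flag = overdose_risk_flag(total)
```

In practice, the total must also account for as-needed doses actually taken and for any opioids administered in the emergency department before admission.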

In addition to overdose, opioids are leading causes of adverse drug events during hospitalization.46 Most studies have focused on surgical patients, reporting common opioid-related events such as nausea/vomiting, pruritus, rash, mental status changes, respiratory depression, ileus, and urinary retention.47 Hospitalized patients may also exhibit chronic adverse effects due to LTOT. At least one-third of patients on LTOT eventually stop because of adverse effects, such as endocrinopathies, sleep-disordered breathing, constipation, fractures, falls, and mental status changes.48 Patients may lack awareness that their symptoms are attributable to opioids and may be willing to reduce their opioid use once informed, especially when alternatives are offered to alleviate pain.

Gauging the Risk of Withdrawal

Sudden discontinuation of LTOT by patients, practitioners, or intercurrent events can have unanticipated and undesirable consequences. Withdrawal is not only distressing for patients; it can be dangerous because patients may resort to illicit use, diversion of opioids, or masking opioid withdrawal with other substances such as alcohol. The anxiety and distress associated with withdrawal, or anticipatory fear about withdrawal, can undermine the therapeutic alliance and interfere with processes of care. The reviewed guidelines did not offer recommendations regarding withdrawal risk or specific strategies for avoidance. There is no specific prior dose threshold or degree of reduction in opioids that puts patients at risk for withdrawal, in part due to patients’ beliefs, expectations, and differences in response to opioid formulations. Symptoms of opioid withdrawal have been compared to a severe case of influenza, including stomach cramps, nausea and vomiting, diarrhea, tremor and muscle twitching, sweating, restlessness, yawning, tachycardia, anxiety and irritability, bone and joint aches, runny nose, tearing, and piloerection.49 The Clinical Opiate Withdrawal Scale (COWS)49 and the Clinical Institute Narcotic Assessment51 are clinician-administered tools for assessing opioid withdrawal; similar to the Clinical Institute Withdrawal Assessment of Alcohol Scale, Revised,52 they can be used to monitor for withdrawal in the inpatient setting.
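Scoring the COWS is a simple sum over its 11 clinician-rated items, mapped onto severity bands. The sketch below uses the commonly cited cutoffs from the published scale (mild 5-12, moderate 13-24, moderately severe 25-36, severe above 36); it is illustrative only, and the instrument itself should be consulted before clinical use:

```python
def cows_severity(item_scores):
    """Sum the 11 COWS item scores and map the total onto the
    commonly cited severity bands. Illustrative sketch only;
    verify cutoffs against the published instrument."""
    if len(item_scores) != 11:
        raise ValueError("COWS has 11 items")
    total = sum(item_scores)
    if total > 36:
        band = "severe"
    elif total >= 25:
        band = "moderately severe"
    elif total >= 13:
        band = "moderate"
    elif total >= 5:
        band = "mild"
    else:
        band = "none/minimal"
    return total, band

# Example: a score of 1 on every item totals 11, in the mild band.
total, band = cows_severity([1] * 11)
```

Serial COWS scores, like serial pain scores, are more useful than a single measurement for deciding whether withdrawal is emerging or resolving during the hospitalization.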

Synthesizing and Appraising the Indications for Opioid Therapy

For medical inpatients who report adequate pain control and functional outcomes on current doses of LTOT, without evidence of misuse, the pragmatic approach is to continue the treatment plan established by the outpatient clinician rather than escalating or tapering the dose. If opioids are prescribed at discharge, 3 hospital setting/acute pain guidelines and the CDC guideline recommend prescribing the lowest effective dose of immediate-release opioids for 3 to 7 days.13,15,16,22

When patients exhibit evidence of an opioid use disorder, have a history of serious overdose, or are experiencing intolerable opioid-related adverse events, the hospitalist may conclude the harms of LTOT outweigh the benefits. For these patients, opioid treatment in the hospital can be aimed at preventing withdrawal, avoiding the perpetuation of inappropriate opioid use, managing other acute medical conditions, and communicating with outpatient prescribers. For patients with misuse, discontinuing opioids is potentially harmful and may be perceived as punitive. Hospitalists should consider consulting addiction or mental health specialists to assist with formulating a plan of care. However, such specialists may not be available in smaller or rural hospitals and referral at discharge can be challenging.53

Beginning to taper opioids during the hospitalization can be appropriate when patients are motivated and can transition to an outpatient provider who will supervise the taper. In ambulatory settings, tapers of 10% to 30% every 2 to 5 days are generally well tolerated.54 If patients started tapering opioids under the supervision of an outpatient provider prior to hospitalization, the taper can ideally be continued during hospitalization in close coordination with the outpatient clinician.
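The cited taper parameters (a 10%-30% reduction every 2-5 days) amount to repeated proportional dose reduction, which can be sketched as follows. This is purely an arithmetic illustration of the schedule such parameters generate; the floor value and function interface are our own assumptions, not clinical guidance:

```python
def taper_schedule(start_dose, reduction=0.2, interval_days=3, floor=5.0):
    """Generate an illustrative taper as (day, daily_dose) steps,
    reducing the dose by a fixed fraction (10%-30% every 2-5 days is
    generally well tolerated in ambulatory settings) until it falls
    below a nominal floor. Arithmetic sketch only, not clinical advice."""
    schedule = []
    day, dose = 0, float(start_dose)
    while dose >= floor:
        schedule.append((day, round(dose, 1)))
        dose *= 1.0 - reduction
        day += interval_days
    return schedule

# Example: starting at 90 mg/day with 20% reductions every 3 days
# yields 90 -> 72 -> 57.6 -> ... down to the floor.
steps = taper_schedule(90, reduction=0.2, interval_days=3)
```

A fixed-fraction taper slows in absolute terms as the dose falls, which is one reason the final steps of a taper often take longer than the arithmetic alone suggests.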

Unfortunately, many patients on LTOT are admitted with new sources of acute pain and/or exacerbations of chronic pain, and some have concomitant substance use disorders; we plan to address the management of these complex situations in future work.

Despite the frequency with which patients on LTOT are hospitalized for nonsurgical stays and the challenges inherent in evaluating pain and assessing the possibility of substance use disorders, no formal guidelines or empirical research studies pertain to this population. Guidelines in this review were developed for hospital settings and acute pain in the absence of LTOT, and for outpatient care of patients on LTOT. We also included a nonsystematic synthesis of literature that varied in relevance to medical inpatients on LTOT.

CONCLUSIONS

Although inpatient assessment and treatment of patients with LTOT remains an underresearched area, we were able to extract and synthesize recommendations from 14 guideline statements and apply these to the assessment of patients with LTOT in the inpatient setting. Hospitalists frequently encounter patients on LTOT for chronic nonmalignant pain and are faced with complex decisions about the effectiveness and safety of LTOT; appropriate patient assessment is fundamental to making these decisions. Key guideline recommendations relevant to inpatient assessment include assessing both pain and functional status, differentiating acute from chronic pain, ascertaining preadmission pain treatment history, obtaining a psychosocial history, screening for mental health issues such as depression and anxiety, screening for substance use disorders, checking state prescription drug monitoring databases, ordering urine drug immunoassays, detecting use of sedative-hypnotics, identifying medical conditions associated with increased risk of overdose and adverse events, and appraising the potential benefits and harms of opioid therapy. Although approaches to assessing medical inpatients on LTOT can be extrapolated from outpatient guidelines, observational studies, and small studies in surgical populations, more work is needed to address these critical topics for inpatients on LTOT.

Disclosure

Dr. Herzig was funded by grant number K23AG042459 from the National Institute on Aging. The funding organization had no involvement in any aspect of the study, including design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. All other authors have no relevant conflicts of interest with the work.

References

1. Mosher HJ, Jiang L, Sarrazin MSV, Cram P, Kaboli PJ, Vander Weg MW. Prevalence and Characteristics of Hospitalized Adults on Chronic Opioid Therapy. J Hosp Med. 2014;9(2):82-87. PubMed
2. Campbell CI, Weisner C, Leresche L, et al. Age and Gender Trends in Long-Term Opioid Analgesic Use for Noncancer Pain. Am J Public Health. 2010;100(12):2541-2547. PubMed
3. Owens PL, Barrett ML, Weiss AJ, Washington RE, Kronick R. Hospital Inpatient Utilization Related to Opioid Overuse among Adults, 1993–2012. Rockville, MD: Agency for Healthcare Research and Quality; 2014. PubMed

4. Gulur P, Williams L, Chaudhary S, Koury K, Jaff M. Opioid Tolerance--a Predictor of Increased Length of Stay and Higher Readmission Rates. Pain Physician. 2014;17(4):E503-507. PubMed
5. Herzig SJ, Rothberg MB, Cheung M, Ngo LH, Marcantonio ER. Opioid Utilization and Opioid-Related Adverse Events in Nonsurgical Patients in US Hospitals. J Hosp Med. 2014;9(2):73-81. PubMed
6. Jamison RN, Sheehan KA, Scanlan E, Matthews M, Ross EL. Beliefs and Attitudes About Opioid Prescribing and Chronic Pain Management: Survey of Primary Care Providers. J Opioid Manag. 2014;10(6):375-382. PubMed
7. Calcaterra SL, Drabkin AD, Leslie SE, et al. The Hospitalist Perspective on Opioid Prescribing: A Qualitative Analysis. J Hosp Med. 2016;11(8):536-542. PubMed
8. Helfand M, Freeman M. Assessment and Management of Acute Pain in Adult Medical Inpatients: A Systematic Review. Pain Med. 2009;10(7):1183-1199. PubMed
9. Macintyre P, Schug S, Scott D, Visser E, Walker S. Acute Pain Management: Scientific Evidence. Melbourne, Australia: Australian and New Zealand College of Anesthetists and Faculty of Pain Medicine; 2010. 
10. Raub JN, Vettese TE. Acute Pain Management in Hospitalized Adult Patients with Opioid Dependence: A Narrative Review and Guide for Clinicians. J Hosp Med. 2017;12(5):375-379. PubMed
11. Theisen-Toupal J, Ronan MV, Moore A, Rosenthal ES. Inpatient Management of Opioid Use Disorder: A Review for Hospitalists. J Hosp Med. 2017;12(5):369-374. PubMed
12. Nuckols TK, Anderson L, Popescu I, et al. Opioid Prescribing: A Systematic Review and Critical Appraisal of Guidelines for Chronic Pain. Ann Intern Med. 2014;160(1):38-47. PubMed
13. Massachusetts Health & Hospital Association Substance Use Disorder Prevention and Treatment Task Force. Guidelines for Opioid Management within a Hospital Setting. Boston, MA: Massachusetts Health & Hospital Association; 2009. 
14. Society for Hospital Medicine’s Center for Hospital Innovation & Improvement. Reducing Adverse Drug Events Related to Opioids Implementation Guide. Philadelphia, PA; 2015. 
15. Cantrill S, Brown M, Carlisle RJ, et al. Clinical Policy Critical Issues in the Prescribing of Opioids for Adult Patients in the Emergency Department. Ann Emerg Med. 2012;60(4):499-525. PubMed
16. Thorson D, Biewen P, Bonte B, et al. Acute Pain Assessment and Opioid Prescribing Protocol. Bloomington, MN: Institute for Clinical Systems Improvement; 2014. 
17. American Society for Pain Management Nursing, Emergency Nurses Association, American College of Emergency Physicians, American Pain Society. Optimizing the Treatment of Pain in Patients with Acute Presentations. Policy Statement. Ann Emerg Med. 2010;56(1):77-79. 
18. American Geriatrics Society Panel on the Pharmacological Management of Persistent Pain in Older Persons. Pharmacological Management of Persistent Pain in Older Persons. J Am Geriatr Soc. 2009;57(8):1331-1346.  
19. Chou R, Fanciullo GJ, Fine PG, et al. Clinical Guidelines for the Use of Chronic Opioid Therapy in Chronic Noncancer Pain. J Pain. 2009;10(2):113-130. PubMed
20. Furlan AD, Reardon R, Weppler C. Opioids for Chronic Noncancer Pain: A New Canadian Practice Guideline. CMAJ. 2010;182(9):923-930. PubMed
21. Manchikanti L, Abdi S, Atluri S, et al. American Society of Interventional Pain Physicians (ASIPP) Guidelines for Responsible Opioid Prescribing in Chronic Non-Cancer Pain: Part 2--Guidance. Pain Physician. 2012;15(3 Suppl):S67-116. PubMed
22. Dowell D, Haegerich TM, Chou R. CDC Guideline for Prescribing Opioids for Chronic Pain--United States, 2016. JAMA. 2016;315(15):1624-1645. PubMed
23. The Opioid Therapy for Chronic Pain Work Group. VA/DoD Clinical Practice Guideline for Opioid Therapy for Chronic Pain. Version 3.0. https://www.healthquality.va.gov/guidelines/Pain/cot/VADoDOTCPG022717.pdf. Accessed August 3, 2016.
24. Hooten W, Timming R, Belgrade M, et al. Assessment and Management of Chronic Pain. Bloomington, MN: Institute for Clinical Systems Improvement; 2013. 
25. American Society of Anesthesiologists Task Force. Practice Guidelines for Chronic Pain Management: An Updated Report by the American Society of Anesthesiologists Task Force on Chronic Pain Management and the American Society of Regional Anesthesia and Pain Medicine. Anesthesiology. 2010;112(4):810-833. PubMed
26. International Association for the Study of Pain. IASP Taxonomy. https://www.iasp-pain.org/Taxonomy. Accessed August 3, 2016.
27. Hawker GA, Mian S, Kendzerska T, French M. Measures of Adult Pain: Visual Analog Scale for Pain (VAS Pain), Numeric Rating Scale for Pain (NRS Pain), Mcgill Pain Questionnaire (MPQ), Short-Form Mcgill Pain Questionnaire (SF-MPQ), Chronic Pain Grade Scale (CPGS), Short Form-36 Bodily Pain Scale (SF36 BPS), and Measure of Intermittent and Constant Osteoarthritis Pain (ICOAP). Arthritis Care Res (Hoboken). 2011;63 Suppl 11:S240-252. PubMed
28. Farrar JT, Young JP, LaMoreaux L, Werth JL, Poole RM. Clinical Importance of Changes in Chronic Pain Intensity Measured on an 11-Point Numerical Pain Rating Scale. Pain. 2001;94(2):149-158. PubMed
29. van Dijk JF, Kappen TH, Schuurmans MJ, van Wijck AJ. The Relation between Patients’ NRS Pain Scores and Their Desire for Additional Opioids after Surgery. Pain Pract. 2015;15(7):604-609. PubMed
30. Krebs EE, Lorenz KA, Bair MJ, et al. Development and Initial Validation of the PEG, a Three-Item Scale Assessing Pain Intensity and Interference. J Gen Intern Med. 2009;24(6):733-738. PubMed
31. Finan PH, Smith MT. The Comorbidity of Insomnia, Chronic Pain, and Depression: Dopamine as a Putative Mechanism. Sleep Med Rev. 2013;17(3):173-183. PubMed
32. IsHak WW, Collison K, Danovitch I, et al. Screening for Depression in Hospitalized Medical Patients. J Hosp Med. 2017;12(2):118-125. PubMed
33. Young QR, Nguyen M, Roth S, Broadberry A, Mackay MH. Single-Item Measures for Depression and Anxiety: Validation of the Screening Tool for Psychological Distress in an Inpatient Cardiology Setting. Eur J Cardiovasc Nurs. 2015;14(6):544-551. PubMed
34. Poon SJ, Greenwood-Ericksen MB. The Opioid Prescription Epidemic and the Role of Emergency Medicine. Ann Emerg Med. 2014;64(5):490-495. PubMed
35. National Institute on Alcohol Abuse and Alcoholism (NIAAA). Rates of Nonmedical Prescription Opioid Use and Opioid Use Disorder Double in 10 Years. https://www.nih.gov/news-events/rates-nonmedical-prescription-opioid-use-opioid-use-disorder-double-10-years. Accessed August 3, 2016.
36. National Alliance for Model State Drug Laws. Status of Prescription Drug Monitoring Programs (PDMPs). http://www.pdmpassist.org/pdf/PDMPProgramStatus.pdf. Accessed August 3, 2016.
37. Butler SF, Budman SH, Fernandez KC, et al. Development and Validation of the Current Opioid Misuse Measure. Pain. 2007;130(1-2):144-156. PubMed
38. Compton PA, Wu SM, Schieffer B, Pham Q, Naliboff BD. Introduction of a Self-Report Version of the Prescription Drug Use Questionnaire and Relationship to Medication Agreement Noncompliance. J Pain Symptom Manage. 2008;36(4):383-395. PubMed
39. Wu SM, Compton P, Bolus R, et al. The Addiction Behaviors Checklist: Validation of a New Clinician-Based Measure of Inappropriate Opioid Use in Chronic Pain. J Pain Symptom Manage. 2006;32(4):342-351. PubMed
40. Atluri SL, Sudarshan G. Development of a Screening Tool to Detect the Risk of Inappropriate Prescription Opioid Use in Patients with Chronic Pain. Pain Physician. 2004;7(3):333-338. PubMed
41. McNeely J, Cleland CM, Strauss SM, Palamar JJ, Rotrosen J, Saitz R. Validation of Self-Administered Single-Item Screening Questions (SISQS) for Unhealthy Alcohol and Drug Use in Primary Care Patients. J Gen Intern Med. 2015;30(12):1757-1764. PubMed
42. Kaye AD, Jones MR, Kaye AM, et al. Prescription Opioid Abuse in Chronic Pain: An Updated Review of Opioid Abuse Predictors and Strategies to Curb Opioid Abuse (Part 2). Pain Physician. 2017;20(2):S111-S133. PubMed
43. Paulozzi LJ, Strickler GK, Kreiner PW, Koris CM. Controlled Substance Prescribing Patterns - Prescription Behavior Surveillance System, Eight States, 2013. MMWR Surveillance Summaries. 2015;64(9):1-14. PubMed
44. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Washington, DC; 2013. 
45. Substance Abuse and Mental Health Services Administration. Substance Abuse Confidentiality Regulations. Rockville, MD; 2016. 
46. Lucado J, Paez K, Elixhauser A. Medication-Related Adverse Outcomes in U.S. Hospitals and Emergency Departments, 2008: Statistical Brief #109. Rockville, MD: Agency for Healthcare Research and Quality (AHRQ); April 2011. PubMed
47. Wheeler M, Oderda GM, Ashburn MA, Lipman AG. Adverse Events Associated with Postoperative Opioid Analgesia: A Systematic Review. J Pain. 2002;3(3):159-180. PubMed
48. Noble M, Tregear SJ, Treadwell JR, Schoelles K. Long-Term Opioid Therapy for Chronic Noncancer Pain: A Systematic Review and Meta-Analysis of Efficacy and Safety. J Pain Symptom Manage. 2008;35(2):214-228. PubMed
49. Wesson DR, Ling W. The Clinical Opiate Withdrawal Scale (COWS). J Psychoactive Drugs. 2003;35(2):253-259. PubMed
50. Tompkins DA, Bigelow GE, Harrison JA, Johnson RE, Fudala PJ, Strain EC. Concurrent Validation of the Clinical Opiate Withdrawal Scale (COWS) and Single-Item Indices against the Clinical Institute Narcotic Assessment (CINA) Opioid Withdrawal Instrument. Drug Alcohol Depend. 2009;105(1-2):154-159. PubMed
51. Sullivan JT, Sykora K, Schneiderman J, Naranjo CA, Sellers EM. Assessment of Alcohol Withdrawal: The Revised Clinical Institute Withdrawal Assessment for Alcohol Scale (CIWA-Ar). Br J Addict. 1989;84(11):1353-1357. PubMed
52. Rosenblatt RA, Andrilla CH, Catlin M, Larson EH. Geographic and Specialty Distribution of US Physicians Trained to Treat Opioid Use Disorder. Ann Fam Med. 2015;13(1):23-26. PubMed
53. Berna C, Kulich RJ, Rathmell JP. Tapering Long-Term Opioid Therapy in Chronic Noncancer Pain: Evidence and Recommendations for Everyday Practice. Mayo Clin Proc. 2015;90(6):828-842. PubMed

Journal of Hospital Medicine 13(4):249-255. Published online first December 6, 2017

Hospitalists face complex questions about how to evaluate and treat the large number of individuals who are admitted on long-term opioid therapy (LTOT, defined as lasting 3 months or longer) for chronic noncancer pain. A recent study at one Veterans Affairs hospital found that 26% of medical inpatients were on LTOT.1 Over the last 2 decades, use of LTOT has risen substantially in the United States, including among middle-aged and older adults.2 Concurrently, inpatient hospitalizations related to the overuse of prescription opioids, including overdose, dependence, abuse, and adverse drug events, have increased by 153%.3 Individuals on LTOT can also be hospitalized for exacerbations of the opioid-treated chronic pain condition or unrelated conditions. In addition to affecting rates of hospitalization, use of LTOT is associated with higher rates of in-hospital adverse events, longer hospital stays, and higher readmission rates.1,4,5

Physicians find managing chronic pain to be stressful, are often concerned about misuse and addiction, and believe their training in opioid prescribing is inadequate.6 Hospitalists report confidence in assessing and prescribing opioids for acute pain but limited success and satisfaction with treating exacerbations of chronic pain.7 Although half of all hospitalized patients receive opioids,5 little information is available to guide the care of hospitalized medical patients on LTOT for chronic noncancer pain.8,9

Our multispecialty team sought to synthesize guideline recommendations and primary literature relevant to the assessment of medical inpatients on LTOT to help practitioners balance effective pain treatment with opioid risk reduction. This article addresses obtaining a comprehensive pain history, identifying misuse and opioid use disorders, assessing the risk of overdose and adverse drug events, gauging the risk of withdrawal, and, based on such findings, appraising indications for opioid therapy. Other authors have recently published narrative reviews on the management of acute pain in hospitalized patients with opioid dependence and the inpatient management of opioid use disorder.10,11

METHODS

To identify primary literature, we searched PubMed, EMBASE, The Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, Health Economic Evaluations Database, key meeting abstracts, and hand searches. To identify guidelines, we searched PubMed, National Guidelines Clearinghouse, specialty societies’ websites, the Centers for Disease Control and Prevention (CDC), the United Kingdom National Institute for Health and Care Excellence, the Canadian Medical Association, and the Australian Government National Health and Medical Research Council. Search terms related to opioids and chronic pain; the search was last updated in October 2016.12

We selected English-language documents on opioids and chronic pain among adults, excluding pain in the setting of procedures, labor and delivery, life-limiting illness, or specific conditions. For primary literature, we considered intervention studies of any design that addressed pain management among hospitalized medical patients. We included guidelines and specialty society position statements published after January 1, 2009, that addressed pain in the hospital setting, acute pain in any setting, or chronic pain in the outpatient setting if published by a national body. Due to the paucity of documents specific to inpatient care, we used a narrative review format to synthesize information. Dual reviewers extracted guideline recommendations potentially relevant to medical inpatients on LTOT. We also summarize relevant assessment instruments, emphasizing very brief screening instruments, which may be more likely to be used by busy hospitalists.

RESULTS

We did not find any primary literature specific to the assessment of pain among medical inpatients on LTOT. We identified 14 eligible guidelines and position statements (see Table 1). Three documents address pain in the hospital setting, including an “implementation guide” from the Society for Hospital Medicine.13-15 Three documents address acute pain,9,16,17 and 8 documents address LTOT for chronic noncancer pain.18-25 Table 2 lists guideline recommendations potentially relevant to inpatients on LTOT.

DISCUSSION

We grouped guideline recommendations into the following 3 categories applicable to inpatient assessment of patients on LTOT: obtaining a comprehensive pain history, identifying misuse and opioid use disorders, and assessing the risk of overdose and adverse drug events. Although we did not find recommendations that specifically spoke to assessment for opioid withdrawal and appraising indications for opioid therapy, we briefly discuss these areas as highly relevant to inpatient practice.


Obtaining a Comprehensive Pain History

Hospitalists newly evaluating patients on LTOT often face a dual challenge: deciding if the patient has an immediate indication for additional opioids and if the current long-term opioid regimen should be altered or discontinued. In general, opioids are an accepted short-term treatment for moderate to severe acute pain but their role in chronic noncancer pain is controversial. Newly released guidelines by the CDC recommend initiating LTOT as a last resort, and the Departments of Veterans Affairs and Defense guidelines recommend against initiation of LTOT.22,23

A key first step, therefore, is distinguishing between acute and chronic pain. Among patients on LTOT, pain can represent a new acute pain condition, an exacerbation of chronic pain, opioid-induced hyperalgesia, or opioid withdrawal. Acute pain is defined as an unpleasant sensory and emotional experience associated with actual or potential tissue damage or described in relation to such damage.26 In contrast, chronic pain is a complex response that may not be related to actual or ongoing tissue damage, and is influenced by physiological, contextual, and psychological factors. Two acute pain guidelines and 1 chronic pain guideline recommend distinguishing acute and chronic pain,9,16,21 3 chronic pain guidelines reinforce the importance of obtaining a pain history (including timing, intensity, frequency, onset, etc),20,22,23 and 6 guidelines recommend ascertaining a history of prior pain-related treatments.9,13,14,16,20,22 Inquiring how the current pain compares with symptoms “on a good day,” what activities the patient can usually perform, and what the patient does outside the hospital to cope with pain can serve as entry into this conversation.

The standard for assessing pain intensity remains patient self-report using a validated instrument, such as the Numerical Rating Scale (Table 3).23,24,27 Among patients with chronic pain, clinically meaningful differences in pain intensity correspond to 1- to 2-point changes on these scales.27,28 Pain scores should not be the only factor used to determine when opioids are indicated because other factors are relevant and scores may not correlate with patients’ preference to receive opioid therapy.29 Along with pain intensity, 3 guidelines for hospital settings/acute pain and 4 chronic pain guidelines recommend assessing functional status.9,13,16,18,20-22 The CDC guideline endorses the 3-item “Pain average, interference with Enjoyment of life, and interference with General activity” (PEG) assessment scale22,30 (Table 3). The instrument would need to be adapted for the hospital setting, but improvement in function, such as mobility, is a good indicator of clinical improvement among inpatients as well.
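As an illustration of how the PEG yields a single score, the instrument’s three 0-10 items are averaged, per its validation study.30 A minimal Python sketch; the input validation is our addition for illustration, not part of the published instrument:

```python
def peg_score(pain_avg: int, enjoyment: int, general_activity: int) -> float:
    """Average the three 0-10 PEG items: average Pain, interference with
    Enjoyment of life, and interference with General activity
    (Krebs et al., 2009)."""
    items = (pain_avg, enjoyment, general_activity)
    if any(not 0 <= x <= 10 for x in items):
        # Range check added for this sketch; the instrument itself
        # simply presents three 0-10 rating scales.
        raise ValueError("each PEG item is rated 0-10")
    return sum(items) / 3

print(peg_score(6, 3, 3))  # → 4.0
```

As with the NRS, a 1- to 2-point change is a reasonable threshold for clinically meaningful improvement when the score is tracked over time.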

In addition to function, 5 guidelines, including 2 specific guidelines for acute pain or the hospital setting, recommend obtaining a detailed psychosocial history to identify life stressors and gain insight into the patient’s coping skills.14,16,19,20,22 Psychiatric symptoms can intensify the experience of pain or hamper coping ability. Anxiety, depression, and insomnia frequently coexist in patients with chronic pain.31 As such, 3 hospital setting/acute pain guidelines and 3 chronic pain guidelines recommend screening for mental health issues including anxiety and depression.13,14,16,20,22,23 Several depression screening instruments have been validated among inpatients,32 and there are validated single-item, self-administered instruments for both depression and anxiety (Table 3).32,33

Although obtaining a comprehensive history before making treatment decisions is ideal, some patients present in extremis. In emergency departments, some guidelines endorse prompt administration of analgesics based on patient self-report, prior to establishing a diagnosis.17 Given concerns about the growing prevalence of opioid use disorders, several states now recommend emergency medicine prescribers screen for misuse before giving opioids and avoid parenteral opioids for acute exacerbations of chronic pain.34 Treatments received in emergency departments set patients’ expectations for the care they receive during hospitalization, and hospitalists may find it necessary to explain that therapies appropriate for urgent management are not intended to be sustained.

Identifying Misuse and Opioid Use Disorders

Nonmedical use of prescription opioids and opioid use disorders have more than doubled over the last decade.35 Five guidelines, including 3 specific guidelines for acute pain or the hospital setting, recommend screening for opioid misuse.13,14,16,19,23 Many states mandate that practitioners assess patients for substance use disorders before prescribing controlled substances.36 Instruments to identify aberrant and risky use include the Current Opioid Misuse Measure,37 Prescription Drug Use Questionnaire,38 Addiction Behaviors Checklist,39 Screening Tool for Abuse,40 and the Self-Administered Single-Item Screening Question (Table 3).41 However, the evidence for these and other tools is limited and absent for the inpatient setting.21,42

In addition to obtaining a history from the patient, 4 guidelines specific to hospital settings/acute pain and 4 chronic pain guidelines recommend practitioners access prescription drug monitoring programs (PDMPs).13-16,19,21-24 PDMPs exist in all states except Missouri, and about half of states mandate practitioners check the PDMP database in certain circumstances.36 Studies examining the effects of PDMPs on prescribing are limited, but checking these databases can uncover concerning patterns including overlapping prescriptions or multiple prescribers.43 PDMPs can also confirm reported medication doses, for which patient report may be less reliable.

Two hospital/acute pain guidelines and 5 chronic pain guidelines also recommend urine drug testing, although they differ on when and whom to test, with some favoring universal screening.11,20,23 Screening hospitalized patients may reveal substances not reported by patients, but medications administered in emergency departments can confound results. Furthermore, the commonly used immunoassay does not distinguish heroin from prescription opioids, nor detect hydrocodone, oxycodone, methadone, buprenorphine, or certain benzodiazepines. Chromatography/mass spectrometry assays can detect these agents but are often not available from hospital laboratories. The differential for unexpected results includes substance use, self-treatment of uncontrolled pain, diversion, or laboratory error.20

If concerning opioid use is identified, 3 hospital setting/acute pain specific guidelines and the CDC guideline recommend sharing concerns with patients and assessing for a substance use disorder.9,13,16,22 Determining whether patients have an opioid use disorder that meets the criteria in the Diagnostic and Statistical Manual, 5th Edition44 can be challenging. Patients may minimize or deny symptoms or fear that the stigma of an opioid use disorder will lead to dismissive or subpar care. Additionally, substance use disorders are subject to federal confidentiality regulations, which can hamper acquisition of information from providers.45 Thus, hospitalists may find specialty consultation helpful to confirm the diagnosis.


Assessing the Risk of Overdose and Adverse Drug Events

Oversedation, respiratory depression, and death can result from iatrogenic or self-administered opioid overdose in the hospital.5 Patient factors that increase this risk among outpatients include a prior history of overdose, preexisting substance use disorders, cognitive impairment, mood and personality disorders, chronic kidney disease, sleep apnea, obstructive lung disease, and recent abstinence from opioids.12 Medication factors include concomitant use of benzodiazepines and other central nervous system depressants, including alcohol; recent initiation of long-acting opioids; use of fentanyl patches, immediate-release fentanyl, or methadone; rapid titration; switching opioids without adequate dose reduction; pharmacokinetic drug–drug interactions; and, importantly, higher doses.12,22 Two guidelines specific to acute pain and hospital settings and 5 chronic pain guidelines recommend screening for use of benzodiazepines among patients on LTOT.13,14,16,18-22

The CDC guideline recommends careful assessment when doses exceed 50 mg of morphine equivalents per day and avoiding doses above 90 mg per day due to the heightened risk of overdose.22 In the hospital, 23% of patients receive doses at or above 100 mg of morphine equivalents per day,5 and concurrent use of central nervous system depressants is common. Changes in kidney and liver function during acute illness may impact opioid metabolism and contribute to overdose.
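These thresholds are expressed in daily morphine milligram equivalents (MME), obtained by converting each opioid in the regimen to its oral morphine equivalent and summing. A sketch using conversion factors published with the CDC guideline;22 the drug list is illustrative, not exhaustive, and deliberately omits methadone and transdermal fentanyl, whose conversions are dose- or route-dependent:

```python
# Illustrative oral morphine-equivalent conversion factors drawn from the
# table published with the CDC guideline. Methadone and fentanyl are
# omitted: their conversion factors vary with dose and route.
MME_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "oxymorphone": 3.0,
    "hydromorphone": 4.0,
    "codeine": 0.15,
}

def daily_mme(regimen: list[tuple[str, float, int]]) -> float:
    """Total daily MME for a regimen given as
    (drug, dose_mg, doses_per_day) tuples."""
    return sum(MME_FACTORS[drug] * dose * freq for drug, dose, freq in regimen)

# e.g., oxycodone 10 mg three times daily plus hydrocodone 5 mg twice daily:
# 10*1.5*3 + 5*1.0*2 = 55 MME/day, above the CDC's 50 MME/day reassessment
# threshold but below the 90 MME/day avoidance threshold.
total = daily_mme([("oxycodone", 10, 3), ("hydrocodone", 5, 2)])
print(total)  # → 55.0
```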

In addition to overdose, opioids are leading causes of adverse drug events during hospitalization.46 Most studies have focused on surgical patients, reporting common opioid-related events such as nausea/vomiting, pruritus, rash, mental status changes, respiratory depression, ileus, and urinary retention.47 Hospitalized patients may also exhibit chronic adverse effects due to LTOT. At least one-third of patients on LTOT eventually stop because of adverse effects, such as endocrinopathies, sleep-disordered breathing, constipation, fractures, falls, and mental status changes.48 Patients may not be aware that their symptoms are attributable to opioids, but many are willing to reduce their opioid use once informed, especially when alternatives are offered to alleviate pain.

Gauging the Risk of Withdrawal

Sudden discontinuation of LTOT, whether by patients, by practitioners, or because of intercurrent events, can have unanticipated and undesirable consequences. Withdrawal is not only distressing for patients; it can be dangerous because patients may resort to illicit use, diversion of opioids, or masking opioid withdrawal with other substances such as alcohol. The anxiety and distress associated with withdrawal, or anticipatory fear about withdrawal, can undermine therapeutic alliance and interfere with processes of care. Reviewed guidelines did not offer recommendations regarding withdrawal risk or specific strategies for avoidance. No specific prior dose threshold or degree of dose reduction reliably predicts withdrawal, in part because of differences in patients’ beliefs, expectations, and responses to opioid formulations. Symptoms of opioid withdrawal have been compared to a severe case of influenza, including stomach cramps, nausea and vomiting, diarrhea, tremor and muscle twitching, sweating, restlessness, yawning, tachycardia, anxiety and irritability, bone and joint aches, runny nose, tearing, and piloerection.49 The Clinical Opiate Withdrawal Scale (COWS)49 and the Clinical Institute Narcotic Assessment50 are clinician-administered tools for assessing opioid withdrawal that, like the revised Clinical Institute Withdrawal Assessment for Alcohol Scale (CIWA-Ar),51 can be used to monitor for withdrawal in the inpatient setting.
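For reference, the COWS sums 11 clinician-rated items into a total score that is commonly banded into severity categories.49 A small sketch of that mapping; the cutoffs below are those usually cited for the instrument and should be verified against the published scale before clinical use:

```python
def cows_severity(score: int) -> str:
    """Map a total COWS score (sum of 11 clinician-rated items) to the
    severity bands commonly cited for the instrument
    (Wesson & Ling, 2003). Cutoffs reproduced here for illustration."""
    if score < 5:
        return "none/minimal"
    if score <= 12:
        return "mild"
    if score <= 24:
        return "moderate"
    if score <= 36:
        return "moderately severe"
    return "severe"

print(cows_severity(8))   # → mild
print(cows_severity(30))  # → moderately severe
```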

Synthesizing and Appraising the Indications for Opioid Therapy

For medical inpatients who report adequate pain control and functional outcomes on current doses of LTOT, without evidence of misuse, the pragmatic approach is to continue the treatment plan established by the outpatient clinician rather than escalating or tapering the dose. If opioids are prescribed at discharge, 3 hospital setting/acute pain guidelines and the CDC guideline recommend prescribing the lowest effective dose of immediate-release opioids for 3 to 7 days.13,15,16,22

When patients exhibit evidence of an opioid use disorder, have a history of serious overdose, or are experiencing intolerable opioid-related adverse events, the hospitalist may conclude that the harms of LTOT outweigh the benefits. For these patients, opioid treatment in the hospital can be aimed at preventing withdrawal, avoiding the perpetuation of inappropriate opioid use, managing other acute medical conditions, and communicating with outpatient prescribers. For patients with misuse, discontinuing opioids is potentially harmful and may be perceived as punitive. Hospitalists should consider consulting addiction or mental health specialists to assist with formulating a plan of care. However, such specialists may not be available in smaller or rural hospitals, and referral at discharge can be challenging.52

Beginning to taper opioids during the hospitalization can be appropriate when patients are motivated and can transition to an outpatient provider who will supervise the taper. In ambulatory settings, tapers of 10% to 30% every 2 to 5 days are generally well tolerated.53 If patients began tapering opioids under the supervision of an outpatient provider prior to hospitalization, the taper can ideally be continued during the hospitalization in close coordination with the outpatient clinician.

Unfortunately, many patients on LTOT are admitted with new sources of acute pain and/or exacerbations of chronic pain, and some have concomitant substance use disorders; we plan to address the management of these complex situations in future work.


Despite the frequency with which patients on LTOT are hospitalized for nonsurgical stays and the challenges inherent in evaluating pain and assessing the possibility of substance use disorders, no formal guidelines or empirical research studies pertain to this population. Guidelines in this review were developed for hospital settings and acute pain in the absence of LTOT, and for outpatient care of patients on LTOT. We also included a nonsystematic synthesis of literature that varied in relevance to medical inpatients on LTOT.

CONCLUSIONS

Although inpatient assessment and treatment of patients with LTOT remains an underresearched area, we were able to extract and synthesize recommendations from 14 guideline statements and apply these to the assessment of patients with LTOT in the inpatient setting. Hospitalists frequently encounter patients on LTOT for chronic nonmalignant pain and are faced with complex decisions about the effectiveness and safety of LTOT; appropriate patient assessment is fundamental to making these decisions. Key guideline recommendations relevant to inpatient assessment include assessing both pain and functional status, differentiating acute from chronic pain, ascertaining preadmission pain treatment history, obtaining a psychosocial history, screening for mental health issues such as depression and anxiety, screening for substance use disorders, checking state prescription drug monitoring databases, ordering urine drug immunoassays, detecting use of sedative-hypnotics, identifying medical conditions associated with increased risk of overdose and adverse events, and appraising the potential benefits and harms of opioid therapy. Although approaches to assessing medical inpatients on LTOT can be extrapolated from outpatient guidelines, observational studies, and small studies in surgical populations, more work is needed to address these critical topics for inpatients on LTOT.

Disclosure

Dr. Herzig was funded by grant number K23AG042459 from the National Institute on Aging. The funding organization had no involvement in any aspect of the study, including design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. All other authors have no relevant conflicts of interest with the work.

Hospitalists face complex questions about how to evaluate and treat the large number of individuals who are admitted on long-term opioid therapy (LTOT, defined as lasting 3 months or longer) for chronic noncancer pain. A recent study at one Veterans Affairs hospital, found 26% of medical inpatients were on LTOT.1 Over the last 2 decades, use of LTOT has risen substantially in the United States, including among middle-aged and older adults.2 Concurrently, inpatient hospitalizations related to the overuse of prescription opioids, including overdose, dependence, abuse, and adverse drug events, have increased by 153%.3 Individuals on LTOT can also be hospitalized for exacerbations of the opioid-treated chronic pain condition or unrelated conditions. In addition to affecting rates of hospitalization, use of LTOT is associated with higher rates of in-hospital adverse events, longer hospital stays, and higher readmission rates.1,4,5

Physicians find managing chronic pain to be stressful, are often concerned about misuse and addiction, and believe their training in opioid prescribing is inadequate.6 Hospitalists report confidence in assessing and prescribing opioids for acute pain but limited success and satisfaction with treating exacerbations of chronic pain.7 Although half of all hospitalized patients receive opioids,5 little information is available to guide the care of hospitalized medical patients on LTOT for chronic noncancer pain.8,9

Our multispecialty team sought to synthesize guideline recommendations and primary literature relevant to the assessment of medical inpatients on LTOT to assist practitioners balance effective pain treatment and opioid risk reduction. This article addresses obtaining a comprehensive pain history, identifying misuse and opioid use disorders, assessing the risk of overdose and adverse drug events, gauging the risk of withdrawal, and based on such findings, appraise indications for opioid therapy. Other authors have recently published narrative reviews on the management of acute pain in hospitalized patients with opioid dependence and the inpatient management of opioid use disorder.10,11

METHODS

To identify primary literature, we searched PubMed, EMBASE, The Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, Health Economic Evaluations Database, key meeting abstracts, and hand searches. To identify guidelines, we searched PubMed, National Guidelines Clearinghouse, specialty societies’ websites, the Centers for Disease Control and Prevention (CDC), the United Kingdom National Institute for Health and Care Excellence, the Canadian Medical Association, and the Australian Government National Health and Medical Research Council. Search terms related to opioids and chronic pain, which was last updated in October 2016.12

We selected English-language documents on opioids and chronic pain among adults, excluding pain in the setting of procedures, labor and delivery, life-limiting illness, or specific conditions. For primary literature, we considered intervention studies of any design that addressed pain management among hospitalized medical patients. We included guidelines and specialty society position statements published after January 1, 2009, that addressed pain in the hospital setting, acute pain in any setting, or chronic pain in the outpatient setting if published by a national body. Due to the paucity of documents specific to inpatient care, we used a narrative review format to synthesize information. Dual reviewers extracted guideline recommendations potentially relevant to medical inpatients on LTOT. We also summarize relevant assessment instruments, emphasizing very brief screening instruments, which may be more likely to be used by busy hospitalists.

RESULTS

We did not find any primary literature specific to the assessment of pain among medical inpatients on LTOT. We identified 14 eligible guidelines and position statements (see Table 1). Three documents address pain in the hospital setting, including an “implementation guide” from the Society for Hospital Medicine.13-15 Three documents address acute pain,9,16,17 and 8 documents address LTOT for chronic noncancer pain.18-25 Table 2 lists guideline recommendations potentially relevant to inpatients on LTOT.

DISCUSSION

We grouped guideline recommendations into the following 3 categories applicable to inpatient assessment of patients on LTOT: obtaining a comprehensive pain history, identifying misuse and opioid use disorders, and assessing the risk of overdose and adverse drug events. Although we did not find recommendations that specifically spoke to assessment for opioid withdrawal and appraising indications for opioid therapy, we briefly discuss these areas as highly relevant to inpatient practice.

 

 

Obtaining a Comprehensive Pain History

Hospitalists newly evaluating patients on LTOT often face a dual challenge: deciding if the patient has an immediate indication for additional opioids and if the current long-term opioid regimen should be altered or discontinued. In general, opioids are an accepted short-term treatment for moderate to severe acute pain but their role in chronic noncancer pain is controversial. Newly released guidelines by the CDC recommend initiating LTOT as a last resort, and the Departments of Veterans Affairs and Defense guidelines recommend against initiation of LTOT.22,23

A key first step, therefore, is distinguishing between acute and chronic pain. Among patients on LTOT, pain can represent a new acute pain condition, an exacerbation of chronic pain, opioid-induced hyperalgesia, or opioid withdrawal. Acute pain is defined as an unpleasant sensory and emotional experience associated with actual or potential tissue damage or described in relation to such damage.26 In contrast, chronic pain is a complex response that may not be related to actual or ongoing tissue damage, and is influenced by physiological, contextual, and psychological factors. Two acute pain guidelines and 1 chronic pain guideline recommend distinguishing acute and chronic pain,9,16,21 3 chronic pain guidelines reinforce the importance of obtaining a pain history (including timing, intensity, frequency, onset, etc),20,22,23 and 6 guidelines recommend ascertaining a history of prior pain-related treatments.9,13,14,16,20,22 Inquiring how the current pain compares with symptoms “on a good day,” what activities the patient can usually perform, and what the patient does outside the hospital to cope with pain can serve as entry into this conversation.

The standard for assessing pain intensity remains patient self-report using a validated instrument, such as the Numerical Rating Scale (Table 3).23,24,27 Among patients with chronic pain, clinically meaningful differences in pain intensity correspond to 1- to 2-point changes on these scales.27,28 Pain scores should not be the only factor used to determine when opioids are indicated because other factors are relevant and scores may not correlate with patients’ preference to receive opioid therapy.29 Along with pain intensity, 3 guidelines for hospital settings/acute pain and 4 chronic pain guidelines recommend assessing functional status.9,13,16,18,20-22 The CDC guideline endorses 3-item the “Pain average, interference with Enjoyment of life, and interference with General activity” (PEG) assessment scale 22,30 (Table 3). The instrument would need to be adapted for the hospital setting, but improvement in function, such as mobility, is a good indicator of clinical improvement among inpatients as well.

In addition to function, 5 guidelines, including 2 specific guidelines for acute pain or the hospital setting, recommend obtaining a detailed psychosocial history to identify life stressors and gain insight into the patient’s coping skills.14,16,19,20,22 Psychiatric symptoms can intensify the experience of pain or hamper coping ability. Anxiety, depression, and insomnia frequently coexist in patients with chronic pain.31 As such, 3 hospital setting/acute pain guidelines and 3 chronic pain guidelines recommend screening for mental health issues including anxiety and depression.13,14,16,20,22,23 Several depression screening instruments have been validated among inpatients,32 and there are validated single-item, self-administered instruments for both depression and anxiety (Table 3).32,33

Although obtaining a comprehensive history before making treatment decisions is ideal, some patients present in extremis. In emergency departments, some guidelines endorse prompt administration of analgesics based on patient self-report, prior to establishing a diagnosis.17 Given concerns about the growing prevalence of opioid use disorders, several states now recommend that emergency medicine prescribers screen for misuse before giving opioids and avoid parenteral opioids for acute exacerbations of chronic pain.34 Treatments received in emergency departments set patients’ expectations for the care they receive during hospitalization, and hospitalists may find it necessary to explain that therapies appropriate for urgent management are not intended to be sustained.

Identifying Misuse and Opioid Use Disorders

Nonmedical use of prescription opioids and opioid use disorders have more than doubled over the last decade.35 Five guidelines, including 3 specific guidelines for acute pain or the hospital setting, recommend screening for opioid misuse.13,14,16,19,23 Many states mandate practitioners assess patients for substance use disorders before prescribing controlled substances.36 Instruments to identify aberrant and risky use include the Current Opioid Misuse Measure,37 Prescription Drug Use Questionnaire,38 Addiction Behaviors Checklist,39 Screening Tool for Abuse,40 and the Self-Administered Single-Item Screening Question (Table 3).41 However, the evidence for these and other tools is limited and absent for the inpatient setting.21,42

In addition to obtaining a history from the patient, 4 guidelines specific to hospital settings/acute pain and 4 chronic pain guidelines recommend practitioners access prescription drug monitoring programs (PDMPs).13-16,19,21-24 PDMPs exist in all states except Missouri, and about half of states mandate practitioners check the PDMP database in certain circumstances.36 Studies examining the effects of PDMPs on prescribing are limited, but checking these databases can uncover concerning patterns including overlapping prescriptions or multiple prescribers.43 PDMPs can also confirm reported medication doses, for which patient report may be less reliable.

Two hospital/acute pain guidelines and 5 chronic pain guidelines also recommend urine drug testing, although they differ on when and whom to test, with some favoring universal screening.11,20,23 Screening hospitalized patients may reveal substances not reported by patients, but medications administered in emergency departments can confound results. Furthermore, the commonly used immunoassay does not distinguish heroin from prescription opioids, nor does it detect hydrocodone, oxycodone, methadone, buprenorphine, or certain benzodiazepines. Chromatography/mass spectrometry assays can detect these agents but are often not available from hospital laboratories. The differential for unexpected results includes substance use, self-treatment of uncontrolled pain, diversion, or laboratory error.20

If concerning opioid use is identified, 3 hospital setting/acute pain specific guidelines and the CDC guideline recommend sharing concerns with patients and assessing for a substance use disorder.9,13,16,22 Determining whether patients have an opioid use disorder that meets the criteria in the Diagnostic and Statistical Manual, 5th Edition44 can be challenging. Patients may minimize or deny symptoms or fear that the stigma of an opioid use disorder will lead to dismissive or subpar care. Additionally, substance use disorders are subject to federal confidentiality regulations, which can hamper acquisition of information from providers.45 Thus, hospitalists may find specialty consultation helpful to confirm the diagnosis.

Assessing the Risk of Overdose and Adverse Drug Events

Oversedation, respiratory depression, and death can result from iatrogenic or self-administered opioid overdose in the hospital.5 Patient factors that increase this risk among outpatients include a prior history of overdose, preexisting substance use disorders, cognitive impairment, mood and personality disorders, chronic kidney disease, sleep apnea, obstructive lung disease, and recent abstinence from opioids.12 Medication factors include concomitant use of benzodiazepines and other central nervous system depressants, including alcohol; recent initiation of long-acting opioids; use of fentanyl patches, immediate-release fentanyl, or methadone; rapid titration; switching opioids without adequate dose reduction; pharmacokinetic drug–drug interactions; and, importantly, higher doses.12,22 Two guidelines specific to acute pain and hospital settings and 5 chronic pain guidelines recommend screening for use of benzodiazepines among patients on LTOT.13,14,16,18-22
The CDC guideline recommends careful assessment when doses exceed 50 mg of morphine equivalents per day and avoiding doses above 90 mg per day due to the heightened risk of overdose.22 In the hospital, 23% of patients receive doses at or above 100 mg of morphine equivalents per day,5 and concurrent use of central nervous system depressants is common. Changes in kidney and liver function during acute illness may impact opioid metabolism and contribute to overdose.
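
The dose thresholds above depend on converting each opioid to daily morphine milligram equivalents (MME). The following is a hedged sketch, not a clinical calculator: the function names are ours, the conversion factors are commonly published oral equivalences, and agents with nonlinear or formulation-dependent conversions (methadone, fentanyl) are deliberately excluded.

```python
# Hypothetical sketch: estimating total daily morphine milligram
# equivalents (MME) from a medication list. Illustrative only; methadone
# and fentanyl are excluded because their conversions are not linear.

# Commonly published oral MME conversion factors (per mg of drug):
MME_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "codeine": 0.15,
    "tramadol": 0.1,
}

def daily_mme(regimen):
    """Sum MME across (drug, mg_per_dose, doses_per_day) entries."""
    total = 0.0
    for drug, mg_per_dose, doses_per_day in regimen:
        total += MME_FACTORS[drug] * mg_per_dose * doses_per_day
    return total

def risk_flag(total_mme):
    """Apply the CDC guideline's 50/90 MME-per-day thresholds."""
    if total_mme >= 90:
        return "avoid or carefully justify: >=90 MME/day"
    if total_mme >= 50:
        return "careful assessment: >=50 MME/day"
    return "below guideline thresholds"

# Example: oxycodone 10 mg four times daily = 40 mg/day x 1.5 = 60 MME/day,
# which crosses the CDC guideline's 50 MME/day careful-assessment threshold.
total = daily_mme([("oxycodone", 10, 4)])  # -> 60.0
```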

In addition to overdose, opioids are among the leading causes of adverse drug events during hospitalization.46 Most studies have focused on surgical patients, in whom common opioid-related events include nausea/vomiting, pruritus, rash, mental status changes, respiratory depression, ileus, and urinary retention.47 Hospitalized patients may also exhibit chronic adverse effects of LTOT. At least one-third of patients on LTOT eventually stop because of adverse effects, such as endocrinopathies, sleep-disordered breathing, constipation, fractures, falls, and mental status changes.48 Patients may be unaware that their symptoms are attributable to opioids, and many are willing to reduce their opioid use once informed, especially when alternatives are offered to alleviate pain.

Gauging the Risk of Withdrawal

Sudden discontinuation of LTOT, whether initiated by patients or practitioners or forced by intercurrent events, can have unanticipated and undesirable consequences. Withdrawal is not only distressing for patients; it can be dangerous because patients may resort to illicit use, diversion of opioids, or masking of opioid withdrawal with other substances such as alcohol. The anxiety and distress associated with withdrawal, or anticipatory fear about withdrawal, can undermine the therapeutic alliance and interfere with processes of care. The reviewed guidelines did not offer recommendations regarding withdrawal risk or specific strategies for its avoidance. No specific prior dose threshold or degree of dose reduction reliably predicts withdrawal, in part because of differences in patients’ beliefs, expectations, and responses to opioid formulations. Symptoms of opioid withdrawal have been compared to a severe case of influenza and include stomach cramps, nausea and vomiting, diarrhea, tremor and muscle twitching, sweating, restlessness, yawning, tachycardia, anxiety and irritability, bone and joint aches, runny nose, tearing, and piloerection.49 The Clinical Opiate Withdrawal Scale (COWS)49 and the Clinical Institute Narcotic Assessment50 are clinician-administered tools for assessing opioid withdrawal; analogous to the Clinical Institute Withdrawal Assessment for Alcohol Scale, Revised,51 they can be used to monitor for withdrawal in the inpatient setting.

Synthesizing and Appraising the Indications for Opioid Therapy

For medical inpatients who report adequate pain control and functional outcomes on current doses of LTOT, without evidence of misuse, the pragmatic approach is to continue the treatment plan established by the outpatient clinician rather than escalating or tapering the dose. If opioids are prescribed at discharge, 3 hospital setting/acute pain guidelines and the CDC guideline recommend prescribing the lowest effective dose of immediate-release opioids for 3 to 7 days.13,15,16,22

When patients exhibit evidence of an opioid use disorder, have a history of serious overdose, or are experiencing intolerable opioid-related adverse events, the hospitalist may conclude the harms of LTOT outweigh the benefits. For these patients, opioid treatment in the hospital can be aimed at preventing withdrawal, avoiding the perpetuation of inappropriate opioid use, managing other acute medical conditions, and communicating with outpatient prescribers. For patients with misuse, abruptly discontinuing opioids is potentially harmful and may be perceived as punitive. Hospitalists should consider consulting addiction or mental health specialists to assist with formulating a plan of care. However, such specialists may not be available in smaller or rural hospitals, and referral at discharge can be challenging.52

Beginning to taper opioids during the hospitalization can be appropriate when patients are motivated and can transition to an outpatient provider who will supervise the taper. In ambulatory settings, tapers of 10% to 30% every 2 to 5 days are generally well tolerated.53 If patients started tapering opioids under the supervision of an outpatient provider prior to hospitalization, the taper can ideally be continued during the hospital stay in close coordination with that clinician.
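
The cited ambulatory range can be translated into a projected dose schedule. The sketch below is purely illustrative, assuming a fixed 10% reduction per step (the conservative end of the 10%-30% range) and rounding chosen by us; an actual taper would be individualized and supervised.

```python
# Hedged sketch of a projected outpatient taper, reducing the daily dose
# by a fixed fraction at each step (e.g., every 2-5 days, per the range
# cited in the text). Parameters and rounding are illustrative only.

def taper_schedule(start_dose, reduction=0.10, steps=5):
    """Return projected daily doses, reducing by `reduction` each step."""
    if not 0 < reduction < 1:
        raise ValueError("reduction must be a fraction between 0 and 1")
    return [round(start_dose * (1 - reduction) ** i, 1)
            for i in range(steps + 1)]

# Example: 100 mg/day reduced 10% at each step.
plan = taper_schedule(100)  # [100.0, 90.0, 81.0, 72.9, 65.6, 59.0]
```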

Unfortunately, many patients on LTOT are admitted with new sources of acute pain and/or exacerbations of chronic pain, and some have concomitant substance use disorders; we plan to address the management of these complex situations in future work.

Despite the frequency with which patients on LTOT are hospitalized for nonsurgical stays and the challenges inherent in evaluating pain and assessing the possibility of substance use disorders, no formal guidelines or empirical research studies pertain to this population. Guidelines in this review were developed for hospital settings and acute pain in the absence of LTOT, and for outpatient care of patients on LTOT. We also included a nonsystematic synthesis of literature that varied in relevance to medical inpatients on LTOT.

CONCLUSIONS

Although inpatient assessment and treatment of patients with LTOT remains an underresearched area, we were able to extract and synthesize recommendations from 14 guideline statements and apply these to the assessment of patients with LTOT in the inpatient setting. Hospitalists frequently encounter patients on LTOT for chronic nonmalignant pain and are faced with complex decisions about the effectiveness and safety of LTOT; appropriate patient assessment is fundamental to making these decisions. Key guideline recommendations relevant to inpatient assessment include assessing both pain and functional status, differentiating acute from chronic pain, ascertaining preadmission pain treatment history, obtaining a psychosocial history, screening for mental health issues such as depression and anxiety, screening for substance use disorders, checking state prescription drug monitoring databases, ordering urine drug immunoassays, detecting use of sedative-hypnotics, identifying medical conditions associated with increased risk of overdose and adverse events, and appraising the potential benefits and harms of opioid therapy. Although approaches to assessing medical inpatients on LTOT can be extrapolated from outpatient guidelines, observational studies, and small studies in surgical populations, more work is needed to address these critical topics for inpatients on LTOT.

Disclosure

Dr. Herzig was funded by grant number K23AG042459 from the National Institute on Aging. The funding organization had no involvement in any aspect of the study, including design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. All other authors have no relevant conflicts of interest with the work.

References

1. Mosher HJ, Jiang L, Sarrazin MSV, Cram P, Kaboli PJ, Vander Weg MW. Prevalence and Characteristics of Hospitalized Adults on Chronic Opioid Therapy. J Hosp Med. 2014;9(2):82-87. PubMed
2. Campbell CI, Weisner C, Leresche L, et al. Age and Gender Trends in Long-Term Opioid Analgesic Use for Noncancer Pain. Am J Public Health. 2010;100(12):2541-2547. PubMed
3. Owens PL, Barrett ML, Weiss AJ, Washington RE, Kronick R. Hospital Inpatient Utilization Related to Opioid Overuse among Adults, 1993–2012. Rockville, MD: Agency for Healthcare Research and Quality; 2014. PubMed

4. Gulur P, Williams L, Chaudhary S, Koury K, Jaff M. Opioid Tolerance--a Predictor of Increased Length of Stay and Higher Readmission Rates. Pain Physician. 2014;17(4):E503-507. PubMed
5. Herzig SJ, Rothberg MB, Cheung M, Ngo LH, Marcantonio ER. Opioid Utilization and Opioid-Related Adverse Events in Nonsurgical Patients in US Hospitals. J Hosp Med. 2014;9(2):73-81. PubMed
6. Jamison RN, Sheehan KA, Scanlan E, Matthews M, Ross EL. Beliefs and Attitudes About Opioid Prescribing and Chronic Pain Management: Survey of Primary Care Providers. J Opioid Manag. 2014;10(6):375-382. PubMed
7. Calcaterra SL, Drabkin AD, Leslie SE, et al. The Hospitalist Perspective on Opioid Prescribing: A Qualitative Analysis. J Hosp Med. 2016;11(8):536-542. PubMed
8. Helfand M, Freeman M. Assessment and Management of Acute Pain in Adult Medical Inpatients: A Systematic Review. Pain Med. 2009;10(7):1183-1199. PubMed
9. Macintyre P, Schug S, Scott D, Visser E, Walker S. Acute Pain Management: Scientific Evidence. Melbourne, Australia: Australian and New Zealand College of Anesthetists and Faculty of Pain Medicine; 2010. 
10. Raub JN, Vettese TE. Acute Pain Management in Hospitalized Adult Patients with Opioid Dependence: A Narrative Review and Guide for Clinicians. J Hosp Med. 2017;12(5):375-379. PubMed
11. Theisen-Toupal J, Ronan MV, Moore A, Rosenthal ES. Inpatient Management of Opioid Use Disorder: A Review for Hospitalists. J Hosp Med. 2017;12(5):369-374. PubMed
12. Nuckols TK, Anderson L, Popescu I, et al. Opioid Prescribing: A Systematic Review and Critical Appraisal of Guidelines for Chronic Pain. Ann Intern Med. 2014;160(1):38-47. PubMed
13. Massachusetts Health & Hospital Association Substance Use Disorder Prevention and Treatment Task Force. Guidelines for Opioid Management within a Hospital Setting. Boston, MA: Massachusetts Health & Hospital Association; 2009. 
14. Society for Hospital Medicine’s Center for Hospital Innovation & Improvement. Reducing Adverse Drug Events Related to Opioids Implementation Guide. Philadelphia, PA; 2015. 
15. Cantrill S, Brown M, Carlisle RJ, et al. Clinical Policy Critical Issues in the Prescribing of Opioids for Adult Patients in the Emergency Department. Ann Emerg Med. 2012;60(4):499-525. PubMed
16. Thorson D, Biewen P, Bonte B, et al. Acute Pain Assessment and Opioid Prescribing Protocol. Bloomington, MN: Institute for Clinical Systems Improvement; 2014. 
17. American Society for Pain Management Nursing, Emergency Nurses Association, American College of Emergency Physicians, American Pain Society. Optimizing the Treatment of Pain in Patients with Acute Presentations. Policy Statement. Ann Emerg Med. 2010;56(1):77-79. 
18. American Geriatrics Society Panel on the Pharmacological Management of Persistent Pain in Older Persons. Pharmacological Management of Persistent Pain in Older Persons. J Am Geriatr Soc. 2009;57(8):1331-1346.  
19. Chou R, Fanciullo GJ, Fine PG, et al. Clinical Guidelines for the Use of Chronic Opioid Therapy in Chronic Noncancer Pain. J Pain. 2009;10(2):113-130. PubMed
20. Furlan AD, Reardon R, Weppler C. Opioids for Chronic Noncancer Pain: A New Canadian Practice Guideline. CMAJ. 2010;182(9):923-930. PubMed
21. Manchikanti L, Abdi S, Atluri S, et al. American Society of Interventional Pain Physicians (ASIPP) Guidelines for Responsible Opioid Prescribing in Chronic Non-Cancer Pain: Part 2--Guidance. Pain Physician. 2012;15(3 Suppl):S67-116. PubMed
22. Dowell D, Haegerich TM, Chou R. CDC Guideline for Prescribing Opioids for Chronic Pain--United States, 2016. JAMA. 2016;315(15):1624-1645. PubMed
23. The Opioid Therapy for Chronic Pain Work Group. VA/DoD Clinical Practice Guideline for Opioid Therapy for Chronic Pain. Version 3.0. https://www.healthquality.va.gov/guidelines/Pain/cot/VADoDOTCPG022717.pdf. Accessed August 3, 2016.
24. Hooten W, Timming R, Belgrade M, et al. Assessment and Management of Chronic Pain. Bloomington, MN: Institute for Clinical Systems Improvement; 2013. 
25. American Society of Anesthesiologists Task Force. Practice Guidelines for Chronic Pain Management: An Updated Report by the American Society of Anesthesiologists Task Force on Chronic Pain Management and the American Society of Regional Anesthesia and Pain Medicine. Anesthesiology. 2010;112(4):810-833. PubMed
26. International Association for the Study of Pain. IASP Taxonomy. https://www.iasp-pain.org/Taxonomy. Accessed August 3, 2016.
27. Hawker GA, Mian S, Kendzerska T, French M. Measures of Adult Pain: Visual Analog Scale for Pain (VAS Pain), Numeric Rating Scale for Pain (NRS Pain), Mcgill Pain Questionnaire (MPQ), Short-Form Mcgill Pain Questionnaire (SF-MPQ), Chronic Pain Grade Scale (CPGS), Short Form-36 Bodily Pain Scale (SF36 BPS), and Measure of Intermittent and Constant Osteoarthritis Pain (ICOAP). Arthritis Care Res (Hoboken). 2011;63 Suppl 11:S240-252. PubMed
28. Farrar JT, Young JP, LaMoreaux L, Werth JL, Poole RM. Clinical Importance of Changes in Chronic Pain Intensity Measured on an 11-Point Numerical Pain Rating Scale. Pain. 2001;94(2):149-158. PubMed
29. van Dijk JF, Kappen TH, Schuurmans MJ, van Wijck AJ. The Relation between Patients’ NRS Pain Scores and Their Desire for Additional Opioids after Surgery. Pain Pract. 2015;15(7):604-609. PubMed
30. Krebs EE, Lorenz KA, Bair MJ, et al. Development and Initial Validation of the PEG, a Three-Item Scale Assessing Pain Intensity and Interference. J Gen Intern Med. 2009;24(6):733-738. PubMed
31. Finan PH, Smith MT. The Comorbidity of Insomnia, Chronic Pain, and Depression: Dopamine as a Putative Mechanism. Sleep Med Rev. 2013;17(3):173-183. PubMed
32. IsHak WW, Collison K, Danovitch I, et al. Screening for Depression in Hospitalized Medical Patients. J Hosp Med. 2017;12(2):118-125. PubMed

33. Young QR, Nguyen M, Roth S, Broadberry A, Mackay MH. Single-Item Measures for Depression and Anxiety: Validation of the Screening Tool for Psychological Distress in an Inpatient Cardiology Setting. Eur J Cardiovasc Nurs. 2015;14(6):544-551. PubMed

34. Poon SJ, Greenwood-Ericksen MB. The Opioid Prescription Epidemic and the Role of Emergency Medicine. Ann Emerg Med. 2014;64(5):490-495. PubMed
35. National Institute on Alcohol Abuse and Alcoholism (NIAAA). Rates of Nonmedical Prescription Opioid Use and Opioid Use Disorder Double in 10 Years. https://www.nih.gov/news-events/rates-nonmedical-prescription-opioid-use-opioid-use-disorder-double-10-years. Accessed on August 3, 2016.
36. National Alliance for Model State Drug Laws. Status of Prescription Drug Monitoring Programs (PDMPs). http://www.pdmpassist.org/pdf/PDMPProgramStatus.pdf. Accessed August 3, 2016.
37. Butler SF, Budman SH, Fernandez KC, et al. Development and Validation of the Current Opioid Misuse Measure. Pain. 2007;130(1-2):144-156. PubMed
38. Compton PA, Wu SM, Schieffer B, Pham Q, Naliboff BD. Introduction of a Self-Report Version of the Prescription Drug Use Questionnaire and Relationship to Medication Agreement Noncompliance. J Pain Symptom Manage. 2008;36(4):383-395. PubMed
39. Wu SM, Compton P, Bolus R, et al. The Addiction Behaviors Checklist: Validation of a New Clinician-Based Measure of Inappropriate Opioid Use in Chronic Pain. J Pain Symptom Manage. 2006;32(4):342-351. PubMed
40. Atluri SL, Sudarshan G. Development of a Screening Tool to Detect the Risk of Inappropriate Prescription Opioid Use in Patients with Chronic Pain. Pain Physician. 2004;7(3):333-338. PubMed
41. McNeely J, Cleland CM, Strauss SM, Palamar JJ, Rotrosen J, Saitz R. Validation of Self-Administered Single-Item Screening Questions (SISQS) for Unhealthy Alcohol and Drug Use in Primary Care Patients. J Gen Intern Med. 2015;30(12):1757-1764. PubMed
42. Kaye AD, Jones MR, Kaye AM, et al. Prescription Opioid Abuse in Chronic Pain: An Updated Review of Opioid Abuse Predictors and Strategies to Curb Opioid Abuse (Part 2). Pain Physician. 2017;20(2):S111-S133. PubMed
43. Paulozzi LJ, Strickler GK, Kreiner PW, Koris CM. Controlled Substance Prescribing Patterns - Prescription Behavior Surveillance System, Eight States, 2013. MMWR Surveill Summ. 2015;64(9):1-14. PubMed
44. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Washington, DC; 2013. 
45. Substance Abuse and Mental Health Services Administration. Substance Abuse Confidentiality Regulations. Rockville, MD; 2016. 
46. Lucado J, Paez K, Elixhauser A. Medication-Related Adverse Outcomes in U.S. Hospitals and Emergency Departments, 2008: Statistical Brief #109. Rockville, MD: Agency for Healthcare Research and Quality (AHRQ); April 2011. PubMed
47. Wheeler M, Oderda GM, Ashburn MA, Lipman AG. Adverse Events Associated with Postoperative Opioid Analgesia: A Systematic Review. J Pain. Jun 2002;3(3):159-180. PubMed
48. Noble M, Tregear SJ, Treadwell JR, Schoelles K. Long-Term Opioid Therapy for Chronic Noncancer Pain: A Systematic Review and Meta-Analysis of Efficacy and Safety. J Pain Symptom Manage. Feb 2008;35(2):214-228. PubMed
49. Wesson DR, Ling W. The Clinical Opiate Withdrawal Scale (COWS). J Psychoactive Drugs. 2003;35(2):253-259. PubMed
50. Tompkins DA, Bigelow GE, Harrison JA, Johnson RE, Fudala PJ, Strain EC. Concurrent Validation of the Clinical Opiate Withdrawal Scale (COWS) and Single-Item Indices against the Clinical Institute Narcotic Assessment (CINA) Opioid Withdrawal Instrument. Drug Alcohol Depend. 2009;105(1-2):154-159. PubMed
51. Sullivan JT, Sykora K, Schneiderman J, Naranjo CA, Sellers EM. Assessment of Alcohol Withdrawal: The Revised Clinical Institute Withdrawal Assessment for Alcohol Scale (CIWA-Ar). Br J Addict. 1989;84(11):1353-1357. PubMed
52. Rosenblatt RA, Andrilla CH, Catlin M, Larson EH. Geographic and Specialty Distribution of US Physicians Trained to Treat Opioid Use Disorder. Ann Fam Med. Jan-Feb 2015;13(1):23-26. PubMed
53. Berna C, Kulich RJ, Rathmell JP. Tapering Long-Term Opioid Therapy in Chronic Noncancer Pain: Evidence and Recommendations for Everyday Practice. Mayo Clin Proc. Jun 2015;90(6):828-842. PubMed


Issue
Journal of Hospital Medicine 13(4)
Page Number
249-255. Published online first December 6, 2017
Article Source
© 2018 Society of Hospital Medicine
Correspondence Location
Teryl Nuckols, MD, MSHS, Cedars-Sinai Medical Center, 8700 Beverly Drive, Becker 113, Los Angeles, CA 90048; Telephone: 310-423-2760; Fax: 310-423-0436; E-mail: teryl.nuckols@cshs.org
Perceptions of Current Note Quality

Internal medicine progress note writing attitudes and practices in an electronic health record

The electronic health record (EHR) has revolutionized the practice of medicine. As part of the economic stimulus package in 2009, Congress enacted the Health Information Technology for Economic and Clinical Health Act, which included incentives for physicians and hospitals to adopt an EHR by 2015. In the setting of more limited duty hours and demands for increased clinical productivity, EHRs have functions that may improve the quality and efficiency of clinical documentation.[1, 2, 3, 4, 5]

The process of note writing and the use of notes for clinical care have changed substantially with EHR implementation. Use of efficiency tools (ie, copy forward functions and autopopulation of data) may increase the speed of documentation.[5] Notes in an EHR are more legible and accessible and may be able to organize data to improve clinical care.[6]

Yet, many have commented on the negative consequences of documentation in an EHR. In a New England Journal of Medicine Perspective article, Drs. Hartzband and Groopman wrote, "we have observed the electronic medical record become a powerful vehicle for perpetuating erroneous information, leading to diagnostic errors that gain momentum when passed on electronically."[7] As a result, the copy forward and autopopulation functions have come under significant scrutiny.[8, 9, 10] A survey conducted at 2 academic institutions found that 71% of residents and attendings believed that the copy forward function led to inconsistencies and outdated information.[11] Autopopulation has been criticized for creating lengthy notes full of trivial or redundant data, a phenomenon termed "note bloat." Bloated notes may be less effective as a communication tool.[12] Additionally, the process of composing a note often stimulates critical thinking and may lead to changes in care. The act of copying forward a previous note and autopopulating data bypasses that process and in effect may suppress critical thinking.[13] Previous studies have raised numerous concerns regarding copy forward and autopopulation functionality in the EHR. Many have described the duplication of outdated data and the possibility of the introduction and perpetuation of errors.[14, 15, 16] The Veterans Affairs (VA) Puget Sound Health system evaluated 6322 copy events and found that 1 in 10 electronic patient charts contained an instance of high-risk copying.[17] In a survey of faculty and residents at a single academic medical center, the majority of users of copy and paste functionality recognized the hazards, responding that their notes may contain more outdated (66%) and more inconsistent (69%) information. Yet, most felt copy forwarding improved documentation of the entire hospital course (87%) and overall physician documentation (69%), and that it should definitely be continued (91%).[11] Others have complained about the impact of copy forward on the expression of clinical reasoning.[7, 9, 18]

Previous discussions of overall note quality following EHR implementation have been limited to perspectives or opinion pieces by individual attending providers.[18] We conducted a survey across 4 academic institutions to analyze both housestaff and attendings' perceptions of note quality since the implementation of an EHR and to better inform the discussion of its impact.

METHODS

Participants

Surveys were administered via email to interns, residents (second-, third-, or fourth-year residents, hereafter referred to as residents), and attendings at 4 academic hospitals that use the Epic EHR (Epic Corp., Madison, WI). The 4 institutions each adopted the Epic EHR, with mandatory faculty and resident training, between 1 and 5 years prior to the survey. Three of the institutions previously used systems with electronic notes, whereas the fourth previously used a system with handwritten notes. The study participation emails included a link to an online survey in REDCap.[19] We included interns and residents from the following types of residency programs: internal medicine categorical or primary care, medicine-pediatrics, or medicine-psychiatry. For housestaff (the combination of interns and residents), we excluded preliminary or transitional-year interns and any interns or residents from other specialties who rotate on the medicine service. For attendings, participants included hospitalists, general internal medicine attendings, chief residents, and subspecialty medicine attendings, each of whom had worked for any amount of time on the inpatient medicine teaching service in the prior 12 months.

Design

We developed 3 unique surveys for interns, residents, and attendings to assess their perceptions of inpatient progress notes (see Supporting Information, Appendix, in the online version of this article). The surveys incorporated questions from 2 previously published sources: the 9-item Physician Documentation Quality Instrument (PDQI-9), a validated note-scoring tool (see online Appendix), and the Accreditation Council for Graduate Medical Education note-writing competency checklists.[20] Additionally, faculty at the participating institutions developed questions to address practices and attitudes toward autopopulation, copy forward, and the purposes of a progress note. Responses were based on a 5-point Likert scale. The intern and resident surveys asked for self-evaluation of their own progress notes and those of their peers, whereas the attending surveys asked for assessment of housestaff notes.

The survey was left open for a total of 55 days and participants were sent reminder emails. The study received a waiver from the institutional review board at all 4 institutions.

Data Analysis

Study data were collected and managed using REDCap electronic data capture tools hosted at the University of California, San Francisco (UCSF).[19] The survey data were analyzed, and the figures were created, using Microsoft Excel 2008 (Microsoft Corp., Redmond, WA). Mean values were calculated for each survey question, and differences in means among the groups were assessed using 2-sample t tests. P values <0.05 were considered statistically significant.
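The mean-and-t-test comparison described above can be made concrete with a short sketch. The original analysis was performed in Excel, not code, and the Likert responses below are entirely hypothetical; the sketch only illustrates the pooled-variance 2-sample t statistic used to compare group means:

```python
from statistics import mean, variance

def two_sample_t(a, b):
    """Student's 2-sample t statistic with pooled variance
    (assumes equal variances, as in a standard 2-sample t test)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

# Hypothetical 5-point Likert responses (1 = "not at all" ... 5 = "extremely")
housestaff = [4, 5, 3, 4, 4, 5, 3, 4]
attendings = [3, 2, 3, 4, 2, 3, 3, 2]

print(f"housestaff mean: {mean(housestaff):.2f}")  # 4.00
print(f"attendings mean: {mean(attendings):.2f}")  # 2.75
print(f"t statistic: {two_sample_t(housestaff, attendings):.2f}")
```

The resulting t statistic would then be compared against the t distribution with na + nb − 2 degrees of freedom to obtain the P value.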

RESULTS

Demographics

We received 99 completed surveys from interns, 155 completed surveys from residents, and 153 completed surveys from attendings across the 4 institutions. The overall response rate for interns was 68%, ranging from 59% at the University of California, San Diego (UCSD) to 74% at the University of Iowa. The overall response rate for residents was 49%, ranging from 38% at UCSF to 66% at the University of California, Los Angeles. The overall response rate for attendings was 70%, ranging from 53% at UCSD to 74% at UCSF.

A total of 78% of interns and 72% of residents had used an EHR at a prior institution. Of the residents, 90 were second‐year residents, 64 were third‐year residents, and 2 were fourth‐year residents. A total of 76% of attendings self‐identified as hospitalists.

Overall Assessment of Note Quality

Participants were asked to rate the quality of progress notes on a 5-point scale (poor, fair, good, very good, excellent). Half of interns and residents rated their own progress notes as very good or excellent. A total of 44% of interns and 24% of residents rated their peers' notes as very good or excellent, whereas only 15% of attending physicians rated housestaff notes as very good or excellent.

When asked to rate the change in progress note quality since their hospital had adopted the EHR, the majority of residents answered unchanged or better, and the majority of attendings answered unchanged or worse (Figure 1).

Figure 1
Resident and attending assessment of progress note quality since adopting the Epic electronic health record.

PDQI‐9 Framework

Participants answered each PDQI-9 question on a 5-point Likert scale ranging from not at all (1) to extremely (5). In 8 of the 9 PDQI-9 domains, there were no significant differences between interns and residents. Across every domain, attending perceptions of housestaff notes were significantly lower than housestaff perceptions of their own notes (P<0.001) (Figure 2). Both housestaff and attendings gave the highest ratings to "thorough," "up to date," and "synthesized" and the lowest rating to "succinct."

Figure 2
Mean intern, resident, and attending perception of note characteristics based on the 9‐item Physician Documentation Quality Instrument (*P < 0.05, **P < 0.001).

Copy Forward and Autopopulation

Overall, the effect of copy forward and autopopulation on critical thinking, note accuracy, and prioritizing the problem list was thought to be neutral or somewhat positive by interns, neutral by residents, and neutral or somewhat negative by attendings (P<0.001) (Figure 3). In all, 16% of interns, 22% of residents, and 55% of attendings reported that copy forward had a somewhat negative or very negative impact on critical thinking (P<0.001). Similarly, 16% of interns, 29% of residents, and 39% of attendings thought that autopopulation had a somewhat negative or very negative impact on critical thinking (P<0.001).

Figure 3
Intern, resident, and attending perceptions of the mean impact of copy forward and autopopulation (*P < 0.05, **P < 0.001).

Purpose of Progress Notes

Participants were provided with 7 possible purposes of a progress note and asked to rate the importance of each. There was nearly perfect agreement among interns, residents, and attendings in the rank order of importance (Table 1). Attendings and housestaff ranked "communication with other providers" and "documenting important events and the plan for the day" as the 2 most important purposes of a progress note, and billing and quality improvement as the least important.

Table 1. Ranked Importance of Each Purpose of a Progress Note

Purpose                                                  Interns  Residents  Attendings
Communication with other providers                          1         1          2
Documenting important events and the plan for the day       2         2          1
Prioritizing issues going forward in the patient's care     3         3          3
Medicolegal                                                 4         4          4
Stimulate critical thinking                                 5         5          5
Billing                                                     6         6          6
Quality improvement                                         7         7          7
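The "nearly perfect agreement" in rank order can be quantified with a Spearman rank correlation. This was not part of the original analysis, which reported ranks descriptively; the sketch below simply applies the standard tie-free Spearman formula to the ranks shown in Table 1:

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two lists of ranks without ties:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Ranks from Table 1 for the 7 purposes, in row order
interns    = [1, 2, 3, 4, 5, 6, 7]
residents  = [1, 2, 3, 4, 5, 6, 7]
attendings = [2, 1, 3, 4, 5, 6, 7]

print(spearman_rho(interns, residents))             # 1.0 (identical rankings)
print(round(spearman_rho(interns, attendings), 3))  # 0.964
```

A rho near 1 reflects the close agreement across groups: interns and residents ranked all 7 purposes identically, and attendings swapped only the top 2.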

DISCUSSION

This is the first large multicenter analysis of both attendings' and housestaff's perceptions of note quality in the EHR era. The findings provide insight into important differences and similarities in the perceptions of the 2 groups. Most striking is the difference in opinion of overall note quality: only a small minority of faculty rated current housestaff notes as very good or excellent, whereas a much larger proportion of housestaff rated their own notes and those of their peers as high quality. Though participants were not specifically asked why note quality in general was suboptimal, housestaff and faculty rankings of specific domains from the PDQI-9 may yield an important clue. Specifically, all groups rated "succinct" as the weakest attribute of current progress notes. This finding is consistent with the "note bloat" phenomenon, which has been maligned as a consequence of EHR implementation.[7, 14, 18, 21, 22]

One interesting finding was that only 5% of interns rated the notes of other housestaff as fair or poor. One possible explanation is the tendency for individuals to enhance the status or performance of the group to which they belong as a mechanism to increase self-image, known as social identity theory.[23] Thus, housestaff may avoid criticizing their peers in order to identify with a group that is not deficient in note writing.

The more positive assessment of overall note quality among housestaff could be related to the different roles of housestaff and attendings on a teaching service, where housestaff are typically the writers and attendings almost exclusively the readers of progress notes. Housestaff may reap benefits, including efficiency, beyond the finished product. A perception of higher quality may reflect the process of note writing, data gathering, and critical thinking required to build an assessment and plan. The scores on the PDQI-9 support this notion, as housestaff rated all 9 domains significantly higher than attendings did.

Housestaff and attendings held greater differences of opinion with respect to the EHR's impact on note quality. Generally, housestaff perceived the EHR to have improved progress note quality, whereas attendings perceived the opposite. One explanation is that these results reflect the changing stages of physician development described by the RIME framework (reporter, interpreter, manager, educator). Attendings may expect notes to reflect synthesis and analysis, whereas trainees may be satisfied with the data gathering that an EHR facilitates. In our survey, the trend of answers from intern to resident to attending suggests an evolving process of attitudes toward note quality.

The above reasons may also explain why housestaff were generally more positive than attendings about the effect of copy forward and autopopulation functions on critical thinking. Because these functions can potentially increase efficiency and decrease time spent at the computer (although data on this are mixed), housestaff may have more time to spend with patients or to develop a thorough plan, and thus rate these functions positively.

Notably, housestaff and attendings had excellent agreement on the purposes of a progress note. They agreed that the 2 most important purposes were communication with other providers and documenting important events and the plan for the day. These are the 2 listed purposes that are most directly related to patient care. If future interventions to improve note quality require housestaff and attendings to significantly change their behavior, a focus on the impact on patient care might yield the best results.

There were several limitations to our study. Any study based on self-assessment is subject to bias; a previous meta-analysis and review described poor to moderate correlations between self-assessed and external measures of performance.[24, 25] The survey data were aggregated from 4 institutions despite somewhat different, though relatively high, response rates among the institutions. There could be a response bias: those who did not respond may have had systematically different perceptions of note quality. The general demographics of the respondents reflected those of the housestaff and attendings at 4 academic centers. All 4 participating institutions adopted the Epic EHR within several years before the survey was administered, and perceptions of note quality may be biased by the prior system used (ie, a change from handwritten to electronic notes vs from one electronic system to another). In addition, the survey results reflect experience with only 1 EHR, and our results may not apply to other EHR vendors or to institutions such as the VA that have a long-standing system in place. Last, we did not explore the impact of perceived note quality on the measured or perceived quality of care; one previous study found no direct correlation between note quality and clinical quality.[26]

There are several future directions for research based on our findings. First, potential differences between housestaff and attending perceptions of note quality could be further teased apart by studying the perceptions of attendings on a nonteaching service who write their own daily progress notes. Second, housestaff perceptions on why copy forward and autopopulation may increase critical thinking could be explored further with more direct questioning. Finally, although our study captured only perceptions of note quality, validated tools could be used to objectively measure note quality; these measurements could then be compared to perception of note quality as well as clinical outcomes.

Given the prevalence of EHRs and the apparent belief that their benefits outweigh their hazards, institutions should embrace these innovations but take steps to mitigate the potential errors and problems associated with copy forward and autopopulation. The results of our study should help inform future interventions.

Acknowledgements

The authors acknowledge the contributions of Russell Leslie from the University of Iowa.

Disclosure: Nothing to report.

References
  1. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742-752.
  2. Amarasingham R, Plantinga L, Diener-West M, Gaskin DJ, Powe NR. Clinical information technologies and inpatient outcomes: a multiple hospital study. Arch Intern Med. 2009;169(2):108-114.
  3. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA. 1998;280(15):1311-1316.
  4. Cebul RD, Love TE, Jain AK, Hebert CJ. Electronic health records and quality of diabetes care. N Engl J Med. 2011;365(9):825-833.
  5. Donati A, Gabbanelli V, Pantanetti S, et al. The impact of a clinical information system in an intensive care unit. J Clin Monit Comput. 2008;22(1):31-36.
  6. Schiff GD, Bates DW. Can electronic clinical documentation help prevent diagnostic errors? N Engl J Med. 2010;362(12):1066-1069.
  7. Hartzband P, Groopman J. Off the record—avoiding the pitfalls of going electronic. N Engl J Med. 2008;358(16):1656-1658.
  8. Thielke S, Hammond K, Helbig S. Copying and pasting of examinations within the electronic medical record. Int J Med Inform. 2007;76(suppl 1):S122-S128.
  9. Siegler EL, Adelman R. Copy and paste: a remediable hazard of electronic health records. Am J Med. 2009;122(6):495-496.
  10. Sheehy AM, Weissburg DJ, Dean SM. The role of copy-and-paste in the hospital electronic health record. JAMA Intern Med. 2014;174(8):1217-1218.
  11. O'Donnell HC, Kaushal R, Barrón Y, Callahan MA, Adelman RD, Siegler EL. Physicians' attitudes towards copy and pasting in electronic note writing. J Gen Intern Med. 2009;24(1):63-68.
  12. Tierney MJ, Pageler NM, Kahana M, Pantaleoni JL, Longhurst CA. Medical education in the electronic medical record (EMR) era: benefits, challenges, and future directions. Acad Med. 2013;88(6):748-752.
  13. Schenarts PJ, Schenarts KD. Educational impact of the electronic medical record. J Surg Educ. 2012;69(1):105-112.
  14. Weir CR, Hurdle JF, Felgar MA, Hoffman JM, Roth B, Nebeker JR. Direct text entry in electronic progress notes. An evaluation of input errors. Methods Inf Med. 2003;42(1):61-67.
  15. Barr MS. The clinical record: a 200-year-old 21st-century challenge. Ann Intern Med. 2010;153(10):682-683.
  16. Hirschtick R. Sloppy and paste. Morbidity and Mortality Rounds on the Web. Available at: http://www.webmm.ahrq.gov/case.aspx?caseID=274. Published July 2012. Accessed September 26, 2014.
  17. Hammond KW, Helbig ST, Benson CC, Brathwaite-Sketoe BM. Are electronic medical records trustworthy? Observations on copying, pasting and duplication. AMIA Annu Symp Proc. 2003:269-273.
  18. Hirschtick RE. A piece of my mind. John Lennon's elbow. JAMA. 2012;308(5):463-464.
  19. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381.
  20. Whelan H, Latimore D, Murin S. ACGME competency note checklist. Available at: http://www.im.org/p/cm/ld/fid=831. Accessed August 8, 2013.
  21. Stetson PD, Bakken S, Wrenn JO, Siegler EL. Assessing electronic note quality using the Physician Documentation Quality Instrument (PDQI-9). Appl Clin Inform. 2012;3(2):164-174.
  22. Wrenn JO, Stein DM, Bakken S, Stetson PD. Quantifying clinical narrative redundancy in an electronic health record. J Am Med Inform Assoc. 2010;17(1):49-53.
  23. Tajfel H, Turner JC. The social identity theory of intergroup behavior. In: Psychology of Intergroup Relations. 2nd ed. Chicago, IL: Nelson-Hall Publishers; 1986:7-24.
  24. Falchikov N, Boud D. Student self-assessment in higher education: a meta-analysis. Rev Educ Res. 1989;59:395-430.
  25. Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Acad Med. 1991;66:762-769.
  26. Edwards ST, Neri PM, Volk LA, Schiff GD, Bates DW. Association of note quality and quality of care: a cross-sectional study. BMJ Qual Saf. 2014;23(5):406-413.
Issue
Journal of Hospital Medicine - 10(8)
Page Number
525-529


One interesting finding was that only 5% of interns rated the notes of other housestaff as fair or poor. One possible explanation for this may be the tendency for an individual to enhance or augment the status or performance of the group to which he or she belongs as a mechanism to increase self‐image, known as the social identity theory.[23] Thus, housestaff may not criticize their peers to allow for identification with a group that is not deficient in note writing.

The more positive assessment of overall note quality among housestaff could be related to the different roles of housestaff and attendings on a teaching service. On a teaching service, housestaff are typically the writer, whereas attendings are almost exclusively the reader of progress notes. Housestaff may reap benefits, including efficiency, beyond the finished product. A perception of higher quality may reflect the process of note writing, data gathering, and critical thinking required to build an assessment and plan. The scores on the PDQI‐9 support this notion, as housestaff rated all 9 domains significantly higher than attendings.

Housestaff and attendings held greater differences of opinion with respect to the EHR's impact on note quality. Generally, housestaff perceived the EHR to have improved progress note quality, whereas attendings perceived the opposite. One explanation could be that these results reflect changing stages of development of physicians well described through the RIME framework (reporter, interpreter, manager, educator). Attendings may expect notes to reflect synthesis and analysis, whereas trainees may be satisfied with the data gathering that an EHR facilitates. In our survey, the trend of answers from intern to resident to attending suggests an evolving process of attitudes toward note quality.

The above reasons may also explain why housestaff were generally more positive than attendings about the effect of copy forward and autopopulation functions on critical thinking. Perhaps, as these functions can potentially increase efficiency and decrease time spent at the computer, although data are mixed on this finding, housestaff may have more time to spend with patients or develop a thorough plan and thus rate these functions positively.

Notably, housestaff and attendings had excellent agreement on the purposes of a progress note. They agreed that the 2 most important purposes were communication with other providers and documenting important events and the plan for the day. These are the 2 listed purposes that are most directly related to patient care. If future interventions to improve note quality require housestaff and attendings to significantly change their behavior, a focus on the impact on patient care might yield the best results.

There were several limitations in our study. Any study based on self‐assessment is subject to bias. A previous meta‐analysis and review described poor to moderate correlations between self‐assessed and external measures of performance.[24, 25] The survey data were aggregated from 4 institutions despite somewhat different, though relatively high, response rates between the institutions. There could be a response bias; those who did not respond may have systematically different perceptions of note quality. It should be noted that the general demographics of the respondents reflected those of the housestaff and attendings at 4 academic centers. All 4 of the participating institutions adopted the Epic EHR within the last several years of the survey being administered, and perceptions of note quality may be biased depending on the prior system used (ie, change from handwritten to electronic vs electronic to other electronic system). In addition, the survey results reflect experience with only 1 EHR, and our results may not apply to other EHR vendors or institutions like the VA, which have a long‐standing system in place. Last, we did not explore the impact of perceived note quality on the measured or perceived quality of care. One previous study found no direct correlation between note quality and clinical quality.[26]

There are several future directions for research based on our findings. First, potential differences between housestaff and attending perceptions of note quality could be further teased apart by studying the perceptions of attendings on a nonteaching service who write their own daily progress notes. Second, housestaff perceptions on why copy forward and autopopulation may increase critical thinking could be explored further with more direct questioning. Finally, although our study captured only perceptions of note quality, validated tools could be used to objectively measure note quality; these measurements could then be compared to perception of note quality as well as clinical outcomes.

Given the prevalence and the apparent belief that the benefits of an EHR outweigh the hazards, institutions should embrace these innovations but take steps to mitigate the potential errors and problems associated with copy forward and autopopulation. The results of our study should help inform future interventions.

Acknowledgements

The authors acknowledge the contributions of Russell Leslie from the University of Iowa.

Disclosure: Nothing to report.

The electronic health record (EHR) has revolutionized the practice of medicine. As part of the economic stimulus package in 2009, Congress enacted the Health Information Technology for Economic and Clinical Health Act, which included incentives for physicians and hospitals to adopt an EHR by 2015. In the setting of more limited duty hours and demands for increased clinical productivity, EHRs have functions that may improve the quality and efficiency of clinical documentation.[1, 2, 3, 4, 5]

The process of note writing and the use of notes for clinical care have changed substantially with EHR implementation. Use of efficiency tools (ie, copy forward functions and autopopulation of data) may increase the speed of documentation.[5] Notes in an EHR are more legible and accessible and may be able to organize data to improve clinical care.[6]

Yet, many have commented on the negative consequences of documentation in an EHR. In a New England Journal of Medicine Perspective article, Drs. Hartzband and Groopman wrote, "we have observed the electronic medical record become a powerful vehicle for perpetuating erroneous information, leading to diagnostic errors that gain momentum when passed on electronically."[7] As a result, the copy forward and autopopulation functions have come under significant scrutiny.[8, 9, 10] A survey conducted at 2 academic institutions found that 71% of residents and attendings believed that the copy forward function led to inconsistencies and outdated information.[11] Autopopulation has been criticized for creating lengthy notes full of trivial or redundant data, a phenomenon termed "note bloat." Bloated notes may be less effective as a communication tool.[12] Additionally, the process of composing a note often stimulates critical thinking and may lead to changes in care; the act of copying forward a previous note and autopopulating data bypasses that process and in effect may suppress critical thinking.[13] Previous studies have raised numerous concerns regarding copy forward and autopopulation functionality in the EHR. Many have described the duplication of outdated data and the possibility of the introduction and perpetuation of errors.[14, 15, 16] The Veterans Affairs (VA) Puget Sound Health Care System evaluated 6322 copy events and found that 1 in 10 electronic patient charts contained an instance of high-risk copying.[17] In a survey of faculty and residents at a single academic medical center, the majority of users of copy-and-paste functionality recognized its hazards; they responded that their notes may contain more outdated (66%) and more inconsistent (69%) information. Yet, most felt copy forwarding improved the documentation of the entire hospital course (87%) and overall physician documentation (69%), and that it should definitely be continued (91%).[11] Others have complained about the impact of copy forward on the expression of clinical reasoning.[7, 9, 18]

Previous discussions of overall note quality following EHR implementation have been limited to perspectives or opinion pieces by individual attending providers.[18] We conducted a survey across 4 academic institutions to analyze both housestaff and attending perceptions of note quality since the implementation of an EHR, to better inform the discussion of the EHR's impact on note quality.

METHODS

Participants

Surveys were administered via email to interns, residents (second‐, third‐, or fourth‐year residents, hereafter referred to as residents), and attendings at 4 academic hospitals that use the Epic EHR (Epic Corp., Madison, WI). The 4 institutions each adopted the Epic EHR, with mandatory faculty and resident training, between 1 and 5 years prior to the survey. Three of the institutions previously used systems with electronic notes, whereas the fourth institution previously used a system with handwritten notes. The study participation emails included a link to an online survey in REDCap.[19] We included interns and residents from the following types of residency programs: internal medicine categorical or primary care, medicine‐pediatrics, or medicine‐psychiatry. For housestaff (the combination of both interns and residents), exclusion criteria included preliminary or transitional year interns, or any interns or residents from other specialties who rotate on the medicine service. For attendings, participants included hospitalists, general internal medicine attendings, chief residents, and subspecialty medicine attendings, each of whom had worked for any amount of time on the inpatient medicine teaching service in the prior 12 months.

Design

We developed 3 unique surveys for interns, residents, and attendings to assess their perceptions of inpatient progress notes (see Supporting Information, Appendix, in the online version of this article). The surveys incorporated questions from 2 previously published sources: the 9‐item Physician Documentation Quality Instrument (PDQI‐9; see online Appendix), a validated note‐scoring tool, and the Accreditation Council for Graduate Medical Education note‐writing competency checklists.[20] Additionally, faculty at the participating institutions developed questions to address practices and attitudes toward autopopulation, copy forward, and the purposes of a progress note. Responses were based on a 5‐point Likert scale. The intern and resident surveys asked for self‐evaluation of their own progress notes and those of their peers, whereas the attending surveys asked for assessment of housestaff notes.

The survey was left open for a total of 55 days and participants were sent reminder emails. The study received a waiver from the institutional review board at all 4 institutions.

Data Analysis

Study data were collected and managed using REDCap electronic data capture tools hosted at the University of California, San Francisco (UCSF).[19] The survey data were analyzed, and the figures created, using Microsoft Excel 2008 (Microsoft Corp., Redmond, WA). Mean values for each survey question were calculated, and differences in means between groups were assessed using 2‐sample t tests. P values <0.05 were considered statistically significant.
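The group comparison described in the Methods can be sketched in a few lines. The responses below are hypothetical 5-point Likert ratings invented for illustration; they are not the study's data.

```python
from scipy import stats

# Hypothetical 5-point Likert responses to one survey item from two
# groups (illustrative only; not the study's actual survey data).
housestaff = [4, 5, 4, 5, 4]
attendings = [3, 3, 4, 3, 3]

# Two-sample t test comparing the group means, mirroring the study's
# analysis of differences in mean ratings between groups.
t_stat, p_value = stats.ttest_ind(housestaff, attendings)

# Per the Methods, P < 0.05 is considered statistically significant.
significant = p_value < 0.05
```

With these illustrative numbers the housestaff mean (4.4) exceeds the attending mean (3.2) and the difference is significant at the 0.05 level.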

RESULTS

Demographics

We received 99 completed surveys from interns, 155 completed surveys from residents, and 153 completed surveys from attendings across the 4 institutions. The overall response rate for interns was 68%, ranging from 59% at the University of California, San Diego (UCSD) to 74% at the University of Iowa. The overall response rate for residents was 49%, ranging from 38% at UCSF to 66% at the University of California, Los Angeles. The overall response rate for attendings was 70%, ranging from 53% at UCSD to 74% at UCSF.

A total of 78% of interns and 72% of residents had used an EHR at a prior institution. Of the residents, 90 were second‐year residents, 64 were third‐year residents, and 2 were fourth‐year residents. A total of 76% of attendings self‐identified as hospitalists.

Overall Assessment of Note Quality

Participants were asked to rate the quality of progress notes on a 5‐point scale (poor, fair, good, very good, excellent). Half of interns and residents rated their own progress notes as very good or excellent. A total of 44% of interns and 24% of residents rated their peers' notes as very good or excellent, whereas only 15% of attending physicians rated housestaff notes as very good or excellent.

When asked to rate the change in progress note quality since their hospital had adopted the EHR, the majority of residents answered unchanged or better, and the majority of attendings answered unchanged or worse (Figure 1).

Figure 1
Resident and attending assessment of progress note quality since adopting the Epic electronic health record.

PDQI‐9 Framework

Participants answered each PDQI‐9 question on a 5‐point Likert scale ranging from not at all (1) to extremely (5). In 8 of the 9 PDQI‐9 domains, there were no significant differences between interns and residents. Across each domain, attending perceptions of housestaff notes were significantly lower than housestaff perceptions of their own notes (P<0.001) (Figure 2). Both housestaff and attendings gave the highest ratings to "thorough," "up to date," and "synthesized," and the lowest rating to "succinct."

Figure 2
Mean intern, resident, and attending perception of note characteristics based on the 9‐item Physician Documentation Quality Instrument (*P < 0.05, **P < 0.001).

Copy Forward and Autopopulation

Overall, the effect of copy forward and autopopulation on critical thinking, note accuracy, and prioritizing the problem list was thought to be neutral or somewhat positive by interns, neutral by residents, and neutral or somewhat negative by attendings (P<0.001) (Figure 3). In all, 16% of interns, 22% of residents, and 55% of attendings reported that copy forward had a somewhat negative or very negative impact on critical thinking (P<0.001). Similarly, 16% of interns, 29% of residents, and 39% of attendings thought that autopopulation had a somewhat negative or very negative impact on critical thinking (P<0.001).

Figure 3
Intern, resident, and attending perceptions of the mean impact of copy forward and autopopulation (*P < 0.05, **P < 0.001).

Purpose of Progress Notes

Participants were provided with 7 possible purposes of a progress note and asked to rate the importance of each stated purpose. There was nearly perfect agreement between interns, residents, and attendings in the rank order of the importance of each purpose of a progress note (Table 1). Attendings and housestaff ranked "communication with other providers" and "documenting important events and the plan for the day" as the 2 most important purposes of a progress note, and "billing" and "quality improvement" as less important.

Table 1. Ranked Importance of Each Purpose of a Progress Note

Purpose                                                    Interns   Residents   Attendings
Communication with other providers                            1          1           2
Documenting important events and the plan for the day         2          2           1
Prioritizing issues going forward in the patient's care       3          3           3
Medicolegal                                                   4          4           4
Stimulate critical thinking                                   5          5           5
Billing                                                       6          6           6
Quality improvement                                           7          7           7
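The "nearly perfect agreement" in Table 1 can be quantified with a rank correlation. The paper itself reports no correlation statistic; the sketch below applies Spearman's rho to the table's rank columns purely for illustration.

```python
from scipy.stats import spearmanr

# Rank orders from Table 1 (1 = most important purpose).
interns    = [1, 2, 3, 4, 5, 6, 7]
residents  = [1, 2, 3, 4, 5, 6, 7]
attendings = [2, 1, 3, 4, 5, 6, 7]

# Intern and resident rankings are identical (rho = 1.0). The only
# intern-attending disagreement is the swap of the top two purposes.
rho, p = spearmanr(interns, attendings)
# rho ≈ 0.964, consistent with "nearly perfect agreement"
```

With no ties and a single adjacent swap among 7 items, Spearman's formula gives rho = 1 − 6(2)/(7·48) ≈ 0.964.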

DISCUSSION

This is the first large multicenter analysis of both attending and housestaff perceptions of note quality in the EHR era. The findings provide insight into important differences and similarities in the perceptions of the 2 groups. Most striking is the difference in opinion of overall note quality: only a small minority of faculty rated current housestaff notes as very good or excellent, whereas a much larger proportion of housestaff rated their own notes and those of their peers to be of high quality. Though participants were not specifically asked why note quality in general was suboptimal, housestaff and faculty rankings of specific domains from the PDQI‐9 may yield an important clue. Specifically, all groups identified "succinct" as the weakest attribute of current progress notes. This finding is consistent with the note bloat phenomenon, which has been maligned as a consequence of EHR implementation.[7, 14, 18, 21, 22]

One interesting finding was that only 5% of interns rated the notes of other housestaff as fair or poor. One possible explanation is the tendency of individuals to enhance the status or performance of the group to which they belong as a mechanism to increase self‐image, known as social identity theory.[23] Thus, housestaff may not criticize their peers, allowing them to identify with a group that is not deficient in note writing.

The more positive assessment of overall note quality among housestaff could be related to the different roles of housestaff and attendings on a teaching service. On a teaching service, housestaff are typically the writer, whereas attendings are almost exclusively the reader of progress notes. Housestaff may reap benefits, including efficiency, beyond the finished product. A perception of higher quality may reflect the process of note writing, data gathering, and critical thinking required to build an assessment and plan. The scores on the PDQI‐9 support this notion, as housestaff rated all 9 domains significantly higher than attendings.

Housestaff and attendings diverged more sharply in their opinions of the EHR's impact on note quality. Generally, housestaff perceived the EHR to have improved progress note quality, whereas attendings perceived the opposite. One explanation could be that these results reflect the changing stages of physician development described by the RIME framework (reporter, interpreter, manager, educator). Attendings may expect notes to reflect synthesis and analysis, whereas trainees may be satisfied with the data gathering that an EHR facilitates. In our survey, the trend of answers from intern to resident to attending suggests an evolving process of attitudes toward note quality.

The above reasons may also explain why housestaff were generally more positive than attendings about the effect of the copy forward and autopopulation functions on critical thinking. Because these functions can potentially increase efficiency and decrease time spent at the computer (although data on this point are mixed), housestaff may have more time to spend with patients or to develop a thorough plan, and thus rate these functions positively.

Notably, housestaff and attendings had excellent agreement on the purposes of a progress note. They agreed that the 2 most important purposes were communication with other providers and documenting important events and the plan for the day. These are the 2 listed purposes that are most directly related to patient care. If future interventions to improve note quality require housestaff and attendings to significantly change their behavior, a focus on the impact on patient care might yield the best results.

There were several limitations to our study. Any study based on self‐assessment is subject to bias; a previous meta‐analysis and review described poor to moderate correlations between self‐assessed and external measures of performance.[24, 25] The survey data were aggregated from 4 institutions despite somewhat different, though relatively high, response rates between the institutions. There could be a response bias: those who did not respond may have systematically different perceptions of note quality. It should be noted that the general demographics of the respondents reflected those of the housestaff and attendings at the 4 academic centers. All 4 participating institutions had adopted the Epic EHR in the several years before the survey was administered, and perceptions of note quality may be biased by the prior system used (ie, a change from handwritten to electronic notes vs from one electronic system to another). In addition, the survey results reflect experience with only 1 EHR, and our results may not apply to other EHR vendors or to institutions, such as the VA, that have a long‐standing system in place. Last, we did not explore the impact of perceived note quality on the measured or perceived quality of care; one previous study found no direct correlation between note quality and clinical quality.[26]

There are several future directions for research based on our findings. First, potential differences between housestaff and attending perceptions of note quality could be further teased apart by studying the perceptions of attendings on a nonteaching service who write their own daily progress notes. Second, housestaff perceptions of why copy forward and autopopulation may increase critical thinking could be explored further with more direct questioning. Finally, although our study captured only perceptions of note quality, validated tools could be used to objectively measure note quality; these measurements could then be compared to perceptions of note quality as well as clinical outcomes.

Given the prevalence and the apparent belief that the benefits of an EHR outweigh the hazards, institutions should embrace these innovations but take steps to mitigate the potential errors and problems associated with copy forward and autopopulation. The results of our study should help inform future interventions.

Acknowledgements

The authors acknowledge the contributions of Russell Leslie from the University of Iowa.

Disclosure: Nothing to report.

References
  1. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742-752.
  2. Amarasingham R, Plantinga L, Diener‐West M, Gaskin DJ, Powe NR. Clinical information technologies and inpatient outcomes: a multiple hospital study. Arch Intern Med. 2009;169(2):108-114.
  3. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA. 1998;280(15):1311-1316.
  4. Cebul RD, Love TE, Jain AK, Hebert CJ. Electronic health records and quality of diabetes care. N Engl J Med. 2011;365(9):825-833.
  5. Donati A, Gabbanelli V, Pantanetti S, et al. The impact of a clinical information system in an intensive care unit. J Clin Monit Comput. 2008;22(1):31-36.
  6. Schiff GD, Bates DW. Can electronic clinical documentation help prevent diagnostic errors? N Engl J Med. 2010;362(12):1066-1069.
  7. Hartzband P, Groopman J. Off the record—avoiding the pitfalls of going electronic. N Engl J Med. 2008;358(16):1656-1658.
  8. Thielke S, Hammond K, Helbig S. Copying and pasting of examinations within the electronic medical record. Int J Med Inform. 2007;76(suppl 1):S122-S128.
  9. Siegler EL, Adelman R. Copy and paste: a remediable hazard of electronic health records. Am J Med. 2009;122(6):495-496.
  10. Sheehy AM, Weissburg DJ, Dean SM. The role of copy‐and‐paste in the hospital electronic health record. JAMA Intern Med. 2014;174(8):1217-1218.
  11. O'Donnell HC, Kaushal R, Barrón Y, Callahan MA, Adelman RD, Siegler EL. Physicians' attitudes towards copy and pasting in electronic note writing. J Gen Intern Med. 2009;24(1):63-68.
  12. Tierney MJ, Pageler NM, Kahana M, Pantaleoni JL, Longhurst CA. Medical education in the electronic medical record (EMR) era: benefits, challenges, and future directions. Acad Med. 2013;88(6):748-752.
  13. Schenarts PJ, Schenarts KD. Educational impact of the electronic medical record. J Surg Educ. 2012;69(1):105-112.
  14. Weir CR, Hurdle JF, Felgar MA, Hoffman JM, Roth B, Nebeker JR. Direct text entry in electronic progress notes. An evaluation of input errors. Methods Inf Med. 2003;42(1):61-67.
  15. Barr MS. The clinical record: a 200‐year‐old 21st‐century challenge. Ann Intern Med. 2010;153(10):682-683.
  16. Hirschtick R. Sloppy and paste. Morbidity and Mortality Rounds on the Web. Available at: http://www.webmm.ahrq.gov/case.aspx?caseID=274. Published July 2012. Accessed September 26, 2014.
  17. Hammond KW, Helbig ST, Benson CC, Brathwaite‐Sketoe BM. Are electronic medical records trustworthy? Observations on copying, pasting and duplication. AMIA Annu Symp Proc. 2003:269-273.
  18. Hirschtick RE. A piece of my mind. John Lennon's elbow. JAMA. 2012;308(5):463-464.
  19. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata‐driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381.
  20. Whelan H, Latimore D, Murin S. ACGME competency note checklist. Available at: http://www.im.org/p/cm/ld/fid=831. Accessed August 8, 2013.
  21. Stetson PD, Bakken S, Wrenn JO, Siegler EL. Assessing electronic note quality using the Physician Documentation Quality Instrument (PDQI‐9). Appl Clin Inform. 2012;3(2):164-174.
  22. Wrenn JO, Stein DM, Bakken S, Stetson PD. Quantifying clinical narrative redundancy in an electronic health record. J Am Med Inform Assoc. 2010;17(1):49-53.
  23. Tajfel H, Turner JC. The social identity theory of intergroup behavior. In: Psychology of Intergroup Relations. 2nd ed. Chicago, IL: Nelson‐Hall Publishers; 1986:7-24.
  24. Falchikov N, Boud D. Student self‐assessment in higher education: a meta‐analysis. Rev Educ Res. 1989;59:395-430.
  25. Gordon MJ. A review of the validity and accuracy of self‐assessments in health professions training. Acad Med. 1991;66:762-769.
  26. Edwards ST, Neri PM, Volk LA, Schiff GD, Bates DW. Association of note quality and quality of care: a cross‐sectional study. BMJ Qual Saf. 2014;23(5):406-413.
Issue
Journal of Hospital Medicine - 10(8)
Page Number
525-529
Display Headline
Internal medicine progress note writing attitudes and practices in an electronic health record
Article Source

© 2015 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Elizabeth Stewart, MD, Department of Medicine, Division of Hospital Medicine, Alameda Health System, 411 E. 31st St., A2, Room 7, Oakland, CA 94602; Telephone: 510‐437‐8500; Fax: 510‐437‐5174; E‐mail: estewart@alamedahealthsystem.org